3  Assessment

Assessment is commonly thought of as the means to find out whether individuals have learned something: that is, whether they can demonstrate that they have learned the information, concepts, skills, procedures, and so on targeted by an educational effort. In school, examinations or tests are a standard feature of students' experience, intended to measure the degree to which someone has, for example, mastered a subtraction algorithm, developed a mental model of photosynthesis, or appropriately applied economic theory to a set of problems. Other products of student work, such as reports and essays, also serve as the basis for systematic judgments about the nature and degree of individual learning.

Informal settings for science learning typically do not use tests, grades, class rankings, and other practices commonly used in schools and workplace settings to document achievement. Nevertheless, the informal science community has embraced the cause of assessing the impact of out-of-school learning experiences, seeking to understand how everyday, after-school, museum, and other types of settings contribute to the development of scientific knowledge and capabilities.*

* The educational research community generally makes a distinction between assessment, the set of approaches and techniques used to determine what individuals learn from a given instructional program, and evaluation, the set of approaches and techniques used to make judgments about a given instructional program, approach, or treatment, improve its effectiveness, and inform decisions about its development. Assessment targets what learners have or have not learned, whereas evaluation targets the quality of the intervention.

This chapter discusses the evidence for outcomes from engagement in informal environments for science learning,
focusing on the six strands of scientific learning introduced earlier and addressing the complexities associated with what people know based on their informal learning experiences.

In both informal and formal learning environments, assessment requires plausible evidence of outcomes and, ideally, is used to support further learning. The following definition reflects current theoretical and design standards among many researchers and practitioners (Huba and Freed, 2000, p. 8):

Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning.

Whether assessments have a local and immediate effect on learning activities or are used to justify institutional funding or reform, most experts in assessment agree that the improvement of outcomes should lie at the heart of assessment efforts. Yet assessing learning in ways that are true to this intent often proves difficult, particularly in informal settings. After reviewing some of the practical challenges associated with assessing informal learning, this chapter offers an overview of the types of outcomes that research in informal environments has focused on to date, how these outcomes are observed in research, and how they can be grouped according to the strands of science learning. Appendix B includes discussion of some technical issues related to assessment in informal environments.

DIFFICULTIES IN ASSESSING SCIENCE LEARNING IN INFORMAL ENVIRONMENTS

Despite general agreement on the importance of collecting more and better data on learning outcomes, the field struggles with theoretical, technical, and practical aspects of measuring learning.
For the most part, these difficulties are the same ones confronting the education community more broadly (Shepard, 2000; Delandshere, 2002; Moss, Giard, and Haniford, 2006; Moss, Pullin, Haertel, Gee, and Young, in press; Wilson, 2004; National Research Council, 2001). Many have argued that the diversity of informal learning environments for science learning further contributes to the difficulties of assessment in these settings; they share the view that one of the main challenges is the development of practical, evidence-centered means for assessing learning outcomes of participants across the range of science learning experiences (Allen et al., 2007; Falk and Dierking, 2000; COSMOS Corporation, 1998; Martin, 2004). For many practitioners and researchers, concerns about the appropriateness of assessment tasks in the context of the setting are a major constraint
on assessing science learning outcomes (Allen et al., 2007; Martin, 2004). It stands roughly as a consensus that the standardized, multiple-choice test, which Wilson (2004) regrets has become a "monoculture" species for demonstrating outcomes in the K-12 education system, is at odds with the types of activities, learning, and reasons for participation that characterize informal experiences. Testing can easily be viewed as antithetical to common characteristics of the informal learning experience. Controlling participants' experiences to isolate particular influences, to arrange for pre- and post-tests, or to attempt other traditional measures of learning can be impractical, disruptive, and, at times, impossible given the features, norms, and typical practices in informal environments.

To elaborate: Visits to museums and other designed informal settings are typically short and isolated, making it problematic to separate the effects of a single visit from the confluence of factors contributing to positive science learning outcomes. The very premise of engaging learners in activities largely for the purposes of promoting future learning experiences beyond the immediate environment runs counter to the prevalent model of assessing learning on the basis of a well-defined educational treatment (e.g., the lesson, the unit, the year's math curriculum). In addition, many informal learning spaces, by definition, provide participants with a leisure experience, making it essential that the experience conforms to expectations and that events in the setting do not threaten self-esteem or feel unduly critical or controlling, factors that can thwart both participation and learning (Shute, 2008; Steele, 1997).
Other important features of informal environments for science learning include the high degree to which contingency typically plays a role in the unfolding of events; that is, much of what happens in these environments emerges during the course of activities and is not prescribed or predetermined. To a large extent, informal environments are learner-centered specifically because the agenda is mutually set across participants (including peers, family members, and any facilitators who are present), making it difficult to consistently control the exposure of participants in the setting to particular treatments, interventions, or activities (Allen et al., 2007).* It may well be that contingency, insofar as it allows for spontaneous alignment of personal goals and motivations to situational resources, lies at the heart of some of the most powerful learning effects in the informal domain. Put somewhat differently, the freedom and flexibility that participants have in working with people and materials in the environment often make informal learning settings particularly attractive.

Another feature that makes many informal learning environments attractive is the consensual, collaborative aspect of deciding what counts as success: for example, what children at a marine science camp agree is a

* This is also an issue of great importance among educators and education researchers concerned with classroom settings.
good design for a submersible or an adequate method for measuring salinity. In some instances, determining a workable standard for measuring success ahead of time, that is, before the learning activities among participants take place, can be nearly impossible. The agenda that arises, say, in a family visit to a museum may include unanticipated episodes of identity reinforcement, the telling of stories, remindings of personal histories, rehearsals of new forms of expression, and other nuanced processes, all of which support learning yet evade translation into many existing models of assessment.

The type of shared agency that allows for collaborative establishment of goals and standards for success can extend to multiple aspects of informal learning activities. Participants in summer camps, science centers, family activities, hobby groups, and the like are generally encouraged to take full advantage of the social resources available in the setting to achieve their learning goals. The team designing a submersible in camp or a playgroup engineering a backyard fort can be thought of as having implicit permission to draw on the skills, knowledge, and strengths of those present, as well as any additional resources available, to accomplish their goals. "Doing well" in informal settings often means acting in concert with others. Such norms are generally at odds with the sequestered nature of the isolated performances characteristic of school. Research indicates that these sequestered assessments lead to systematic undermeasurement of learning precisely because they fail to allow participants to draw on material and human resources in their environment, even though making use of such resources is a hallmark of competent, adaptive behavior (Schwartz, Bransford, and Sears, 2005).

Despite the difficulties of assessing outcomes, researchers have managed to do important and valuable work.
In notable ways, this work parallels the "authentic assessment" approaches taken by some school-based researchers, employing various types of performances, portfolios, and embedded assessments (National Research Council, 2000, 2001). Many of these approaches rely on qualitative interpretations of evidence, in part because researchers are still in the stages of exploring features of the phenomena rather than quantitatively testing hypotheses (National Research Council, 2002). Yet, as a body of work, assessment of learning in informal settings draws on the full breadth of educational and social scientific methods, using questionnaires, structured and semistructured interviews, focus groups, participant observation, journaling, think-aloud techniques, visual documentation, and video and audio recordings to gather data. Taken as a whole, existing studies provide a significant body of evidence for science learning in informal environments as defined by the six strands of science learning described in this report.

TYPES OF OUTCOMES

A range of outcomes are used to characterize what participants learn about science in informal environments. These outcomes, usually described
as particular types of knowledge, skills, attitudes, feelings, and behaviors, can be clustered in a variety of ways, and many of them logically straddle two or more categories. For example, the degree to which someone shows persistence in scientific activity could be categorized in various ways, because this outcome depends on the interplay between multiple contextual and personal factors, including the skills, disposition, and knowledge the person brings to the environment. Similarly, studies focusing on motivation might emphasize affect or identity-related aspects of participation. In Chapter 2, we described the goals of science learning in terms of six interweaving conceptual strands. Here our formulation of the strands focuses on the science-related behaviors that people are able to engage in because of their participation in science learning activities and the ways in which researchers and evaluators have studied them.

Strand 1: Developing Interest in Science

Nature of the Outcome

Informal environments are often characterized by people's excitement, interest, and motivation to engage in activities that promote learning about the natural and physical world. A common characteristic is that participants have a choice or a role in determining what is learned, when it is learned, and even how it is learned (Falk and Storksdieck, 2005). These environments are also designed to be safe and to allow exploration, supporting interactions with people and materials that arise from curiosity and are free of the performance demands that are characteristic of schools (Nasir, Rosebery, Warren, and Lee, 2006). Engagement in these environments creates the opportunity for learners to experience a range of positive feelings and to attend to and find meaning in relation to what they are learning (National Research Council, 2007).
Participation is often discussed in terms of interest, conceptualized as both the state of heightened affect for science and the predisposition to reengage with science (see Hidi and Renninger, 2006).* Interest includes the excitement, wonder, and surprise that learners may experience and the knowledge and values that make the experience relevant and meaningful. Recent research on the relationship between affect and learning shows that the emotions associated with interest are a major factor in thinking and learning, shaping not only how people learn but also what is retained and how long it is remembered (National Research Council, 2000). Interest may even have a neurological basis (termed "seeking behavior"; Panksepp,

* Whereas motivation is used to describe the will to succeed across multiple contexts (see Eccles, Wigfield, and Schiefele, 1998), interest is not necessarily focused on achievement and is always linked to a particular class of objects, events, or ideas, such as science (Renninger, Hidi, and Krapp, 1992; Renninger and Wozniak, 1985).
1998), suggesting that all individuals can be expected to have, and to be able to develop, interest. In addition, interest is an important filter for selecting and focusing on relevant information in a complex environment (Falk and Dierking, 2000). In this sense, the psychological state of mind referred to as interest can be viewed as an evolutionary adaptation to select what is perceived as important or relevant from the environment. People pay attention to the things that interest them, and hence interest becomes a strong filter for what is learned.

When people have a more developed interest in science, sometimes described in terms of hobbies or personal excursions (Azevedo, 2006), islands of expertise (Crowley and Jacobs, 2002), passions (Neumann, 2006), or identity-related motivations (Ellenbogen, Luke, and Dierking, 2004; Falk and Storksdieck, 2005; Falk, 2006), they are inclined to draw more heavily on available resources for learning and to use systematic approaches to seek answers (Engle and Conant, 2002; Renninger, 2000). This line of research suggests that the availability or existence of stimulating, attractive learning environments can generate the interest that leads to participation (Falk et al., 2007). People with an interest in science are also likely to be motivated learners in science; they are more likely to seek out challenge and difficulty, use effective learning strategies, and make use of feedback (Csikszentmihalyi, Rathunde, and Whalen, 1993; Lipstein and Renninger, 2006; Renninger and Hidi, 2002). These outcomes help learners continue to develop interest, further engaging them in activity that promotes enjoyment and learning.
People who come to informal environments with developed interests are likely to set goals, self-regulate, and exert effort easily in the domains of their interests, and these behaviors often become habits, supporting their ongoing engagement (Lipstein and Renninger, 2006; Renninger and Hidi, 2002; Renninger, Sansone, and Smith, 2004).

Methods of Researching Strand 1 Outcomes

Although self-report data are susceptible to various forms of bias on the part of the research participant, they are nonetheless frequently used in studying outcomes with affective and attitudinal components because of the subjective nature of these outcomes. Self-report studies are typically based on questionnaires or structured interviews developed to target attitudes, beliefs, and interests regarding science among respondents in particular age groups, with an emphasis on how these factors relate to school processes and outcomes (e.g., Renninger, 2003; Moore and Hill Foy, 1997; Weinburgh and Steele, 2000). Methods linking prior levels of interest and motivation to outcomes have been used in research as well.

* It should be noted that all normatively functioning individuals might be expected to have interest; Travers (1978) points out that lack of interest accompanies pathology.
Researchers have also used self-report techniques to investigate whether prior levels of interest were related to learning about conservation (Falk and Adelman, 2003; Taylor, 1994). Falk and Adelman (2003), for example, showed significant differences in knowledge, understanding, and attitudes for subgroups of participants based on their prior levels of knowledge and attitudes. Researchers replicated this approach with successful results in a subsequent study at Disney's Animal Kingdom (Dierking et al., 2004).

Studies of public understanding of science have used questionnaires to assess levels of interest in particular topics. For example, they have documented variation in people's reported levels of interest in science topics: The general adult population in both the United States and Europe is mildly interested in space exploration and nuclear energy; somewhat more than mildly interested in new scientific discoveries, new technologies, and environmental issues; and fairly interested in medical discoveries (European Commission, 2001; National Science Board, 2002).

An important component of interest, as noted, is positive affect (Hidi and Renninger, 2006). Whereas positive affect toward science is often regarded as a primary outcome of informal learning, this outcome is notoriously difficult to assess. Positive affect can be transient and can develop even when conscious attention is focused elsewhere, making it difficult for an observer to assess. Various theoretical models have attempted to map out a space of emotional responses, either in terms of a small number of basic emotions or of emotional dimensions, such as pleasure, arousal, and dominance, and to apply these in empirical research (Plutchik, 1961; Russell and Mehrabian, 1977; Isen, 2004).

Analysis of facial expressions has been a key tool in studying affect, with mixed results.
Ekman's seven facial expressions have been used to assess fleeting emotional states (Ekman and Rosenberg, 2005). Dancu (2006) used this method in a pilot study to assess emotional states of children as they engaged with exhibits and compared these observations to reports by children and their caregivers, finding low agreement among all measures. Kort, Reilly, and Picard (2001) created a system for analyzing facial expressions suited to capturing emotions relevant to learning (such as flow, frustration, confusion, and eureka), but their methods require special circumstances (e.g., the subject must sit in a chair) and do not allow for naturalistic study in large spaces, thus complicating application of this approach in many informal settings. Ma (2006) used a combination of open-ended and semantic-differential questions, in conjunction with a self-assessment mannequin. Physiological measures relevant to learning (skin conductance, posture, eye movements, EEG, EKG) are being developed (Mota and Picard, 2003; Lu and Graesser, in press; Jung, Makeig, Stensmo, and Sejnowski, 1997).

Discourse analysis has been another important method for naturalistic study of emotion during museum visits. Allen (2002), for example, coded visitors' spontaneous articulations of their emotions using three categories of
Assessment 61 affect: positive, negative, and neutral. Both spontaneous comments and com- ments elicited by researchers have similarly been coded to show differences in emotional response during museum visits. Clipman (2005), for example, used the Positive and Negative Affect Schedule to show that visitors leaving a Chihuly exhibit of art glass reported being more happy and inspired than visitors to a quilting exhibit in the same museum (Clipman, 2005). Myers, Saunders, and Birjulin (2004) used Likert and semantic-differential measures to show that zoo visitors had stronger emotional responses to gorillas than other animals on display. Raphling and Serrell (1993) asked visitors to complete the sentence âIt reminded me that . . .â as a part of an exit questionnaire for exhibitions on a range of topics, and they reported that this prompt tends to elicit affective responses from visitors, including wonderment, imagining, reminiscences, convictions, and even spiritual connection (such as references to the power of God or nature). In studies of informal learning, interest and related positive affect are also often inferred on the basis of behavior displayed. That is, participants who seem engaged in informal learning activities are presumed to be interested. In this sense, interest and positive affect are often not treated as outcomes, but rather as preconditions for engagement. Studies that document children spontaneously asking âwhyâ questions, for example, take as a given that children are curious about, interested in, and positively predisposed to en- gaging in activity that entails learning about the natural world (e.g., Heath, 1999). Studies that focus on adult behavior, such as engaging in hobbies, are predicated on a similar assumptionâthat interest can be assumed for the people and the context being studied (e.g., Azevedo, 2006). 
A meta-analysis of the types of naturally occurring behavior thought to provide evidence of individuals' interest in informal learning activities could be useful for developing systematic approaches to studying interest. Such an analysis could also be useful in showing how interest is displayed and valued among participants in informal learning environments, providing an understanding of interest as it emerges and is made meaningful in social interaction.

Strand 2: Understanding Science Knowledge

Nature of the Outcome

As progressively more research shows, learning about natural phenomena is part of ordinary, everyday experience for human beings from the earliest ages (National Research Council, 2007). The types of experiences common across the spectrum of informal environments, including everyday settings, do more than provide enjoyment and engagement: they provide substance on which more systematic and coherent conceptual understanding and content structures can be built. Multiple models exist of the ways in which scientific understanding is built over time. Some (e.g., Vosniadou
and Brewer, 1992) argue that learners build coherent theories, much like scientists, by integrating their experiences; others (e.g., diSessa, 1988) argue that scientific knowledge is often constructed of many small fragments that are brought to mind in relevant situations. Either way, small pieces of insight, inference, or understanding are accepted as vital components of scientific knowledge-building.

Most traditionally valued aspects of science learning fall into this strand: models, facts, factual recall, and application of memorized principles. These aspects of science learning can be abstract and highly curriculum-driven; they are often not the primary focus of informal environments. Assessments that focus on Strand 2 frequently show little or no positive change in Strand 2 outcomes for learners. However, several studies have shown positive learning outcomes, suggesting that even a single visit to an informal learning setting (e.g., an exhibition) may support development or revision of knowledge (Borun, Massey, and Lutter, 1993; Fender and Crowley, 2007; Guichard, 1995; Korn, 2003; McNamara, 2005). At the same time, studies of informal environments for science learning have explored cognitive outcomes that are more compatible with experiential and social activities: perceiving, noticing, and articulating new aspects of the natural world; understanding concepts embedded in interactive experiences; making connections between scientific ideas or experiences and everyday life; reinforcing prior knowledge; making inferences; and building an experiential basis for future abstractions to refer to. Informal experiences have also been shown to be quite memorable over time (see, e.g., Anderson and Piscitelli, 2002; Anderson and Shimizu, 2007).
While the knowledge of most learners is often focused on topics of personal interest, it is important to note that most people do not learn a great deal of science in the context of a single, brief "treatment." However, this ought not to be considered an entirely negative finding. Consider that learning in school is rarely assessed on the basis of a one- or two-hour class, yet science learning in informal environments is often assessed after exposures that do not exceed one to two hours. Falk and Storksdieck (2005) found that a single visit to an exhibition did increase the scientific content knowledge of at least one-third of the adult visitors, particularly those with low prior knowledge. However, even participants whose learning is not evident in a pre-post design may take away something important: the potential to learn later, what How People Learn refers to as preparation for future learning (National Research Council, 2000). For example, visitors whose interest is sparked (Strand 1) presumably are disposed to build on this experience in the months that follow a science center visit by engaging in other informal learning experiences.
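The pre-post logic behind findings like Falk and Storksdieck's can be illustrated with a small sketch: compute each visitor's gain between entry and exit scores, then summarize what fraction improved and whether gains concentrated among low-prior-knowledge visitors. The scores below are invented for the illustration; they are not data from that study.

```python
# Invented pre-visit and post-visit content-knowledge scores (0-10 scale)
# for six adult visitors; not actual Falk and Storksdieck (2005) data.
visitors = [
    {"pre": 2, "post": 5},
    {"pre": 8, "post": 8},
    {"pre": 1, "post": 4},
    {"pre": 6, "post": 6},
    {"pre": 3, "post": 3},
    {"pre": 2, "post": 6},
]

# Per-visitor gain and the share of visitors who improved at all.
gains = [v["post"] - v["pre"] for v in visitors]
fraction_improved = sum(g > 0 for g in gains) / len(gains)

# Were the improvers the low-prior-knowledge visitors?
# (The cutoff of 3 is an arbitrary choice for this sketch.)
low_prior = [v for v in visitors if v["pre"] <= 3]
low_prior_improved = sum(v["post"] > v["pre"] for v in low_prior)
```

In this invented sample, half the visitors improve, and all of the improvement occurs among those who entered with low scores, which mirrors, in miniature, the pattern described in the text.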
Methods of Researching Strand 2 Outcomes

Outcomes in this category can be the most "loaded" for learners. If not carefully designed, assessments of content knowledge can make learners feel inadequate, and this throws into question the validity of the assessment, going against the expectations of learners in relation to the norms of the setting and the social situation. The traditional method for measuring learning (or science literacy) has been to ask textbook-like questions and to judge the nearness of an individual's answer to the expert's version of the scientific story. In terms of what researchers know about the nature of learning, this is a limited approach to documenting what people understand about the world around them. This outcome category is also vulnerable to false negatives, because cognitive change is highly individual and difficult to assess in a standardized way. An essential element of informal environments is that learners have some choice in what they attend to, what they take away from an experience, and what connections they make to their own lives. Consequently, testing students only on recall of knowledge can cause researchers to miss key learning outcomes for any particular learner, since these outcomes are based on the learner's own experience and prior knowledge.

To avert the ethical, practical, and educational pitfalls related to assessing content knowledge, many researchers and evaluators working in informal environments put effort into generating assessments that have nonthreatening content, a breadth of possible responses, comfortable delivery mechanisms, a conversational tone, and appropriateness to the specific audience being targeted. Such assessments also leave room for unexpected and emergent outcomes.
Questions asked with an understanding of the ways in which people are likely to have incorporated salient aspects of a scientific idea into their own lives can appropriately measure their general level of science knowledge and understanding. Yet we also acknowledge that while such measures are well aligned with the goals of informal environments, they lack the objectivity of standardized measures.

An important method for assessing scientific knowledge and understanding in informal environments is the analysis of participants' conversations. Researchers interested in everyday and after-school settings study science-related discourse and behavior as it occurs in the course of ordinary, ongoing activity (Bell, Bricker, Lee, Reeve, and Zimmerman, 2006; Callanan, Shrager, and Moore, 1995; Sandoval, 2005). Researchers focused on museums and other designed environments have used a variety of schemes to classify these conversations into categories that show that people are doing cognitive work and engaging in sense-making. The categories used in these classification schemes have included: identify, describe, interpret/apply (Borun, Chambers, and Cleghorn, 1996); list, personal synthesis, analysis, synthesis, explanation (Leinhardt and Knutson, 2004); perceptual, conceptual, affective, connecting, strategic (Allen, 2002); and levels of metacognition (Anderson and Nashon,
2007). Most of these categorizations have some theoretical basis, but they are also partly emergent from the data.

A great deal of research has been conducted on the new information, ideas, concepts, and even skills acquired in museums and other designed settings. Some museum researchers have measured content knowledge using think-aloud protocols, in which a participant talks into a microphone while going through a learning experience. O'Neil and Dufresne-Tasse (1997) used a talk-aloud method to show that visitors were very cognitively active when looking at objects, even objects passively displayed. The principal limitation of this method is that it is likely to disrupt the learning process to some degree, not least because it eliminates conversation within a visiting group. Beaumont (2005) used a variation of this technique with whole groups by inviting families to think aloud "when appropriate" during their visit to an exhibition.

When studying children, clinical interviews may be helpful for eliciting the ways in which they think about concepts embedded in exhibits, as well as the ways in which their understanding may be advanced or hindered. For example, Feher and Rice (1987, 1988) interviewed children using a series of museum exhibits about light and color to identify common conceptions and suggest modifications to the exhibits.

Several methods are used to elicit the concepts, explanations, arguments, models, and facts related to science that participants generate, understand, and remember after engaging in science learning experiences. These include structured self-reports, in the form of questionnaires, interviews, and focus groups (see Appendix B for a discussion of individual and group interviews).
Self-reports can be used to assess understanding and recall of an individual's experiences, syntheses of big ideas, and information that the respondent says he or she "never knew." For example, a summative evaluation of Search for Life (Korn, 2006) showed that visitors had understood a challenging big idea (that the search for life on other planets begins by looking at extreme environments on Earth that may be similar) but also showed that they had not thought deeply about issues regarding space exploration or life on other planets.

Researchers also sometimes engage visitors to museums, science centers, and other designed environments in conversations, asking them to talk about their experience in relation to particular issues of interest to the institution, in order to better understand the overlap between the agendas of the institution's staff and the visitors. For example, for each of an exhibition's five primary themes, Leinhardt and Knutson (2004) gave visitors a picture and a statement and coded the ensuing discussion as part of their assessment of learning in the exhibition. Rubrics have been used to code the quality of visitors' descriptions of a particular topic or concept of interest. Perry (1993) called these "knowledge hierarchies" and used them to characterize both baseline understandings and learning from an exhibition. One important underlying assumption in this research is the relationship between thought
and language. However, mapping the relationship between language and thought is complex and not fully developed. Several types of learning outcomes assessments used in museums and other designed spaces engage participants in activities that require them to demonstrate what they learned by producing a representation or artifact. Concept maps are often used to characterize an individual's knowledge structure before and after a learning experience. They are particularly well suited to informal environments in that they allow for personalization of both prior knowledge and knowledge-building during the activity and are less threatening than other cognitive assessments. However, they require a longer time commitment than a traditional exit interview, are time-consuming to code, are difficult to administer and standardize, and may show a bias unless a control group has been used (see Appendix B). While a variety of concept mapping strategies have been used in these settings (Anderson, Lucas, Ginns, and Dierking, 2000; Gallenstein, 2005; Van Luven and Miller, 1993), perhaps the most commonly used in museum exhibitions is Personal Meaning Mapping (Falk, Moussouri, and Coulson, 1998), in which the dimensions of knowledge assessed are extent, breadth, depth, and mastery. Personal Meaning Mapping is typically presented to learners in paper format, although Thompson and Bonney (2007) created an online version to assess the impact of a citizen science project. Drawing tasks can be an important way to broaden research participants' modes of communication and may enable some to articulate ideas and observations that they could not in spoken or written language. Drawings can capture visitors' memories of their experience (e.g., map study), or show their understanding of a science concept (Guichard, 1995). Typically, a drawing is annotated or discussed so that the meaning of the various parts is clear to the researcher.
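Because Personal Meaning Mapping scores each map on the same four dimensions before and after an experience, the bookkeeping reduces to comparing two rubric records per learner. The sketch below is illustrative only: the `MeaningMapScore` class and its 1-4 scales are assumptions for the example, not the published PMM scoring rubrics.

```python
from dataclasses import dataclass

@dataclass
class MeaningMapScore:
    """Rubric scores for one personal meaning map (the 1-4 scales are assumed for illustration)."""
    extent: int   # how many relevant concepts or vocabulary items the learner records
    breadth: int  # how many distinct conceptual categories those concepts span
    depth: int    # how richly each concept is elaborated
    mastery: int  # holistic sophistication of the overall map

def learning_gain(pre: MeaningMapScore, post: MeaningMapScore) -> dict:
    """Per-dimension change between the entry (pre) and exit (post) maps."""
    return {dim: getattr(post, dim) - getattr(pre, dim)
            for dim in ("extent", "breadth", "depth", "mastery")}
```

Keeping the pre and post maps as separate records, rather than a single difference score, preserves the baseline understanding that researchers such as Perry also report alongside the gain.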
Moussouri (1997) has shown how drawings can be used to capture different stages of children's reasoning. Jackson and Leahy (2005) have similarly used drawing and creative writing tasks to study how a museum theater experience may influence children's learning. Sorting tasks, which typically involve cards, photos, or other objects, are yet another means through which participants can demonstrate their conceptual learning after visiting a museum, zoo, or other designed setting. To be compelling proof of learning, this method requires some kind of control group and preferably also a pretest. Sorting tasks have the advantage that they do not publicly reveal that a given answer is scientifically incorrect and can usually be done with the same participants more than once. E-mail or phone interviews, often done weeks, months, or even years after a visit or program, are particularly important in informal learning environments because they are often the only way to test two key assumptions: (1) that the experiences are highly memorable and (2) that learners integrate the experiences into the rest of their lives and build on them over time. Typical follow-up questions probe these two aspects of the learning by asking
what the participants remember about their experience and what they have done in relation to the content since. For example, Falk, Scott, Dierking, Rennie, and Cohen-Jones (2004) used follow-up interviews to explore how the cognitive outcomes of a visit to a museum varied over time. Anderson and Shimizu (2007) showed that many people remembered details of what they had done at a world's fair or exposition decades previously, and Allen (2004) found that it was not unusual for visitors to say that a single exhibit experience changed the way they think about something in their lives. Spock (2000) lists some of the trade-offs of doing follow-up interviews soon versus long after the event and points to the connection between more profound potential outcomes and a longer time frame. When learners are participating in an extended program (e.g., docents or watchers of a TV series), it may be feasible to conduct pre- and posttests of conceptual learning, similar to those used in schools, to test their learning of formal concepts. For example, Rockman Et Al (1996) used a series of multiple-choice questions to show that children who watched Bill Nye the Science Guy made significant gains in understanding that Bernoulli's principle explains how airplanes fly. Another means by which researchers have assessed learning over extended time frames is by asking participants to write reflections in a journal, possibly to discuss with others and to share with researchers. Leinhardt, Tittle, and Knutson (2002) used this method to showcase the deep connections and knowledge-building done by frequent museum-goers.

Strand 3: Engaging in Scientific Reasoning

Nature of the Outcome

This strand focuses on the activities and skills of science, including inquiry and reasoning skills, which are intimately related and often explored in research simultaneously with conceptual knowledge.
However, we focus here on the ways in which researchers assess the activities and skills of science specifically. Informal environments often provide opportunities for learners to engage in authentic inquiry using a range of resources, without pressure to cover particular content, yet with access to engaging phenomena and staff ready to support them in their own explorations and discoveries. The outcomes in this strand include scientific inquiry skills, such as asking questions, exploring, experimenting, applying ideas, predicting, drawing conclusions from evidence, reasoning, and articulating one's thinking in conversation with others. Other outcomes are skills related to learning in the particular informal environment: how to use an interactive exhibit, how to navigate a website, how to draw relevant information from a large body of text, and how to learn effectively with others of different skill levels (sharing resources, teaching, scaffolding, negotiating activity).
Methods of Researching Strand 3 Outcomes

Developmental studies based on observations of children's spontaneous behavior show that their approach to natural phenomena shows similarities to science: exploratory, inquiry-oriented, evidence-seeking (Beals, 1993; Callanan and Oakes, 1992). Controlled studies result in similar findings, indicating that everyday thinking entails reasoning about causality and complex relations among variables, as discussed in Chapter 4. This strand of outcomes is almost always assessed by examining the participant's learning process rather than by a pre-post measure of outcome. This is because a pre-post measurement requires that learners demonstrate what they are able to do in the "pre" condition. Pretesting requires that learners be put on the spot in a manner that is inconsistent with the leisure-oriented and learner-centered nature of most informal environments. Instead, skills are usually assessed as they are practiced, and the assumption is made that practicing a skill leads to greater expertise over time. Research focused on assessing practical and discursive inquiry skills in informal environments often relies on video and audio recordings made during activities that are later analyzed for evidence of such skills as questioning, interpreting, inferring, explaining, arguing, and applying ideas, methods, or conjectures to new situations (see Appendix B for a discussion of video- and audiotaping). For example, Humphrey and Gutwill (2005), analyzing the kinds of questions visitors asked each other and the ways they answered them, found that visitors using "active prolonged engagement" exhibits asked more questions that focused on using or understanding the exhibits than visitors using the more traditional planned discovery exhibits.
Randol (2005), assessing visitors' use of scientific inquiry skills at a range of interactive exhibits, found that the inquiry could be characterized equally well by holistic measures or small-scale behavioral indicators (such as "draws a conclusion") as long as the sophistication of the behaviors was measured rather than their number. Meisner et al. (2007) and vom Lehn, Heath, and Hindmarsh (2001, 2002) studied short fragments of video to reveal the ways in which exhibits enable particular forms of coparticipation, modeling, and interactions with strangers. Researchers have used video analysis to investigate a large range of behaviors related to how learners make sense of the natural and physical world, including interacting appropriately with materials and showing others how to do something. Stevens and colleagues (Stevens, 2007; Stevens and Hall, 1997; Stevens and Toro-Martell, 2003) used a video annotation system on the museum floor to prompt visitors to reflect on how they and others interacted with an interactive science display, leaving a durable video trace of their activity and reflections for others to explore and discuss as they come to the display. The traces then serve as data for subsequent interactional analysis of learning. Researchers have also asked learners, after participation in science learning activities, to provide self-reports of their own (or each other's) skill levels. Sometimes museum visitors will spontaneously report that they or a member of their group (typically a child) learned a new skill while participating in the activity. While this approach lacks direct evidence to back up such claims, it may be the only kind of evidence of a change in skill level that can be collected given the social norms of many environments and, in certain cases, without risking discomfort for participants. Although it may be possible to pretest the skill levels of learners in certain settings, in general such testing is a high-risk assessment practice for informal environments. Campbell (2008) points out the dangers of doing this in youth programs, in which learners may experience themselves as failing and consequently never return.

Strand 4: Reflecting on Science

Nature of the Outcome

A fundamental goal of science education is to improve learners' understanding of what science is, that is, to increase understanding of the nature of the scientific enterprise. The outcomes targeted in this strand address issues related to how scientific knowledge is constructed, how people, including the learner herself, come to know about natural phenomena, and how the learner's ideas change. Direct experience with the process of knowledge construction through the types of inquiry-based activities characteristic of informal environments can serve as an important point of departure for the outcomes in this strand: recognizing that people are involved in the interpretive aspects of evaluating theories, evidence, and the relationship between the two; that scientific knowledge is uncertain and changeable; and that a diversity of strategies and methods are employed in scientific research.
Whether or not a person becomes a professional scientist, the forms of scientific understanding associated with Strand 4 outcomes are considered by many to be crucial for having an informed citizenry, given public debates about political issues related to science (American Association for the Advancement of Science, 1993). Although lay people will always rely on the work of professional scientists, a view of scientific knowledge as fundamentally constructed from evidence, rather than merely factual or received from authoritative sources, can provide a critical stance from which the public can evaluate claims in relation to evidence (Brossard and Shanahan, 2003; Miller, 2004). Presumably, such a public can thereby make better judgments about public policy related to such issues as global warming or the teaching of intelligent design. The body of research on the topic indicates that young children, youth, and even adults do not have a strong understanding of the nature of science per se and what is entailed by disciplinary methods of knowing and learning (Osborne et al., 2003). There is evidence that such limits in understanding derive from a lack
of exposure to appropriate opportunities to learn in these areas (American Association for the Advancement of Science, 1993). When people are provided with opportunities to learn about the problematic nature of scientific knowledge construction (Smith, Maclin, Houghton, and Hennessey, 2000), to understand the processes of modeling and testing (Penner, Giles, Lehrer, and Schauble, 1997), or to reflect on or explicitly investigate epistemological issues (Bell and Linn, 2002), their understanding of the nature of scientific practice, process, and knowledge improves. Research into practical or everyday epistemologies provides some preliminary evidence suggesting that informal environments provide appropriate opportunities for learning about the nature of science (Sandoval, 2005). The degree to which they promote these outcomes has not been heavily researched, but the inquiry-oriented experiences afforded by most informal environments may provide cultural and educational resources for promoting better understanding of the nature of science.

Methods of Researching Strand 4 Outcomes

Studies regarding conceptions of the nature of science, typically using either questionnaires or structured interview protocols, have been conducted in schools, often with the aim of drawing relationships between children's conceptions of what real scientists do and their own classroom activities (Abd-El-Khalick and Lederman, 2000; Bartholomew, Osborne, and Ratcliffe, 2004; Schwartz and Lederman, 2002). These studies generate information about children's epistemological reasoning that ostensibly reflects how they individually think about the nature of knowledge and warrants for claims, regardless of the activity setting. In their study of 9-, 12-, and 16-year-olds, Driver, Leach, Millar, and Scott (1996), for example, used interview data based on specific probes to identify three levels of reasoning about the nature of science.
According to their analysis, at the lowest level, students' reasoning is grounded in phenomena; at the mid-level, students reason about the relationships between quantities or variables; and at the highest level, students reason with and about imagined models. Interestingly, the researchers were able to engage only the 16-year-olds in discussions of science as a social enterprise. Similar studies show the difficulty with which younger adolescents and children conceive of science as a social process (Abd-El-Khalick and Lederman, 2000; Bartholomew et al., 2004; Schwartz and Lederman, 2002). Some researchers have specifically tried to link conceptions of science and scientific practice to the learning setting (Bell and Linn, 2002; Carey and Smith, 1993; Hammer and Elby, 2003; Rosenberg, Hammer, and Phelan, 2006; Sandoval, 2005; Songer and Linn, 1991). Sandoval specifies four types of difficulties students have understanding the constructed and changeable nature of disciplinary science in the school setting, positing "practical epistemologies" that inhere in the organizational structures of institutions and
activities rather than trait-like or stage-like personal epistemologies that belong to individuals. Sandoval argues, in effect, that the practical epistemologies at play in everyday settings allow students to take a more self-reflective and nuanced view of scientific process.

Strand 5: Engaging in Scientific Practices

Nature of the Outcome

This strand builds on and expands the notion of participation discussed in Taking Science to School (National Research Council, 2007). In that report, participation meant learners participating in normative scientific practices akin to those that take place in and govern scientific work. For example, whereas young learners may understand argumentation in a range of contexts outside of science (e.g., resolving conflicts at home or on the playground), they typically must learn how to argue in scientific ways (e.g., using evidence to support claims). Participating in science meant, among other things, appropriating scientific ways of arguing. As that report established, there is a substantial body of evidence that illustrates how even young learners can develop the knowledge, skills, and commitments necessary to participate in a classroom scientific culture. That literature also indicates that learning to participate in science requires that learners have copious opportunities to do science plus substantial instructional support over long periods of time. An important difference in the construal of participation in this report is that we are focusing on nonschool settings, where the development of shared norms and practices is typically not afforded by the goals and constraints of the educational experience. Thus, we take a broader and admittedly somewhat less clearly defined view of participation in order to capture important ways in which informal environments can contribute to this goal.
Participation in informal learning environments is generally voluntary at many scales (coming to an event, staying for its duration, using an exhibit thoroughly or repeatedly, returning to more events, etc.). By analogy with measuring time spent, attendance can be used as a measure of learning: either as a necessary minimal condition, as an indicator (assuming learning increases with the number of returns), or as a direct assessment of learning as participation in a community. For this reason, environments for science learning pay particular attention to keeping track of the demographics, motivations, and expectations of the people who arrive and return to use their educational offerings. St. John and Perry (1993) take this argument to a much broader scale, arguing that the entire infrastructure of environments for science learning should be assessed, at least in part, on the basis of its voluntary usage by the public as a learning resource. A common goal across informal contexts is for participants to experience pleasure while working with tasks that allow exploration and do not
overwhelm (e.g., Allen, 2004; Martin, 2004). The objective is for participants to have conversations, explore, and have fun in and around science. The expectation is that participation in informal contexts involves learning science and that science learning will follow. In other words, if there is participation, then learning is assumed to be occurring (see Lave, 1996); if there is enjoyment, then return to science and possible identification with science is anticipated. Recent work by Falk et al. (2007) suggests that visitors to zoos and aquariums who already identify themselves as participants in science learning anticipate that their visits will enhance and strengthen this identity, which appears to be the case. While short-term participation in well-defined programs is relatively easy to assess, long-term and cumulative progressions are much more challenging to document, due primarily to the difficulties of tracking learners across time, space, and range of activity. Nevertheless, researchers must accept this challenge, because a key assumption in the field (e.g., Crowley and Jacobs, 2002) is that effective lifelong learning is a cumulative process that incorporates a huge variety of media and settings (everyday life in the home, television, Internet, libraries, museum programs, school courses, after-school programs, etc.). Thus, longitudinal studies are particularly useful. In assessing Strand 5 outcomes, culturally responsive evaluation techniques help to maximize validity, since members of a community may identify their levels of participation in quite different ways from researchers who may be outside it. For example, in a study by Garibay (2006), researchers had to broaden their definitions of "parent involvement" to fit the norms of a community they were unfamiliar with.
Methods of Researching Strand 5 Outcomes

Because learner choice is such a key element in most informal learning environments, and the extent to which learners engage in science over time is a key element of learning to participate in science, data on who enrolls in a program, attends an event or offering, joins science clubs and related affinity groups, or uses websites or other forms of media or tools for science learning are important to track. Often, researchers collect demographic data (e.g., Diamond, 1999) in conjunction with attendance data. Collecting accurate data on participation, especially degrees of participation, is notoriously difficult in many informal settings, such as after-school programs and community-based organizations (Chaput, Little, and Weiss, 2004). To study participation at a finer scale, researchers interested in designed settings (museums, science centers, community gardens, and other community-based organizations) record the detailed movements of visitors through a public space or exhibit, showing their degree of engagement throughout the area as well as the relative attracting and holding powers of the individual designed elements (see Appendix B for a discussion of holding time). Although tracking studies have been done for nearly a century (Robinson, 1928; Melton, 1935), Serrell's (1998) meta-analysis served to standardize some of the methods and definitions, including a "stop" (planting the feet and attending to an exhibit for at least 2-3 seconds), a "sweep rate" (the speed with which visitors move through a region of exhibits), and a "percentage of diligent visitors" (the percentage of visitors who stop at more than half of the elements). It also suggests benchmarks of success for various types of exhibit format (dioramas, interactives, etc.). Some researchers have modified the traditional "timing and tracking" approach, creating an unobtrusive structured observation based on holistic measures. These measures recognize that although the amount of time spent in an exhibition is a good quantitative indicator of visitors' use of a gallery space or exhibit element, it often poorly reflects the quality of their experience with an exhibition. Therefore, to complement quantitative measures, researchers have developed a ranking scale with which they can assess the quality of interactions that visitors have in various sections of an exhibition or at specific exhibit components (Leinhardt and Knutson, 2004). The scale involves time to some degree but not solely. Participants' submissions to websites, through comment cards, and even via visitor guest books provide evidence that learners are willing and able to participate in a dialogue with the institution or people who generated the learning resource. Feedback mechanisms have become well established in museums and have been increasingly displayed openly rather than collected through a comment box or other means for staff to review privately.
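Serrell's definitions are concrete enough to compute directly from timing-and-tracking records. The sketch below is illustrative only: the `VisitorTrack` record layout is an assumption for the example, and the fixed 3-second stop threshold is one choice within the 2-3 second range the text attributes to Serrell.

```python
from dataclasses import dataclass
from typing import Dict, List

# Serrell's minimal "stop" is roughly 2-3 seconds of attention; 3.0 is assumed here.
STOP_THRESHOLD_SECONDS = 3.0

@dataclass
class VisitorTrack:
    """One visitor's timing-and-tracking record (hypothetical schema for this sketch)."""
    dwell_times: Dict[str, float]  # seconds of attention at each exhibit element
    total_minutes: float           # total time spent in the exhibition

def percent_diligent_visitors(tracks: List[VisitorTrack], n_elements: int) -> float:
    """Fraction of visitors who stop at more than half of the exhibition's elements."""
    diligent = sum(
        1 for t in tracks
        if sum(1 for s in t.dwell_times.values() if s >= STOP_THRESHOLD_SECONDS) > n_elements / 2
    )
    return diligent / len(tracks)

def sweep_rate(tracks: List[VisitorTrack], exhibition_sq_ft: float) -> float:
    """Speed of movement through the space: exhibition size divided by mean visit time."""
    mean_minutes = sum(t.total_minutes for t in tracks) / len(tracks)
    return exhibition_sq_ft / mean_minutes
```

Keeping the per-element dwell times, rather than only totals, is what allows both the diligent-visitor percentage and element-level holding power to be recovered from the same records.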
These methods have been assisted by the development of technological systems for automatically caching and displaying a select number of visitor responses, as well as wiki models of distributed editing. For example, the Association of Science-Technology Centers hosts ExhibitFiles, a community site for designers and developers to share their work; the Liberty Science Center has created Exhibit Commons, a website that invites people to submit contributions for display in the museum; and the Tech Museum of Innovation is using Second Life as an open source platform for exhibit design, with plans to replicate some of the best exhibits in its real-world museum. These means of collecting data may be useful for research as well as for institutional and practical reasons, so it is important to be clear when they are appropriately construed in a science learning framework. Showing up is important, and the scale of research of informal learning institutions speaks to their capacity, but making claims about participation in science is not the same as making claims about how many people passed through a particular setting. Issues of accessibility are important when assessing participation rates in informal environments. Participation may be reduced because activities or environments are inaccessible to some learners, physically or intellectually. Reich, Chin, and Kunz (2006) and Moussouri (2007) suggest ways to
build relationships with museum visitors with disabilities who can serve as testers or codevelopers, as well as techniques for conducting interviews with these audiences in particular, to determine participatory outcomes. Similarly, Garibay (2005) suggests ways to design assessment techniques to be culturally responsive to a target audience, even for a single activity. Ways of assessing participation in media-based activity vary. Web resource usage can be assessed by number of users, duration of use, pages viewed, path of exploration, and entry points from other sites (e.g., Rockman Et Al, 2007). Surveys are used to assess broadcast audiences for TV and radio. Ways to assess depth of participation or integration of experiences are especially important, and these methods are varied. One aspect of progression in an activity is personal ownership and creativity, that is, not just going through the motions of a predefined activity but creating something original in it. For example, Gration and Jones (2008) developed a coding scheme for innovation. Others have focused on evidence of creativity or self-initiated activity. To document participation across settings, events, media, and programs, Ellenbogen (2002) conducted case studies showing examples of families who use many resources in a highly integrated fashion. Some researchers have investigated extended engagement in science practices by studying home discussions or activities related to science. For example, Ellenbogen showed that frequent users of a science museum continued their discussions and activities in the home and other settings, engaging in integrated, multisetting learning. Other researchers have taken a prospective approach to studying anticipated actions.
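The web usage measures named above (number of users, duration of use, pages viewed, entry points from other sites) can all be rolled up from raw page-view logs. A minimal sketch, assuming a hypothetical `PageView` log schema that is not taken from any cited study:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PageView:
    """One logged page request (hypothetical log schema for this sketch)."""
    user_id: str
    timestamp: float  # seconds since some epoch
    page: str
    referrer: str     # external site the user arrived from, or "" if internal

def summarize_usage(views: List[PageView]) -> Dict[str, object]:
    """Aggregate raw page views into the usage measures named in the text."""
    by_user: Dict[str, List[PageView]] = {}
    for v in sorted(views, key=lambda v: v.timestamp):
        by_user.setdefault(v.user_id, []).append(v)
    # Entry point = referrer of each user's first recorded view, when external.
    entry_points = Counter(vs[0].referrer for vs in by_user.values() if vs[0].referrer)
    mean_duration = sum(vs[-1].timestamp - vs[0].timestamp
                        for vs in by_user.values()) / len(by_user)
    return {
        "users": len(by_user),
        "pages_viewed": len(views),
        "mean_duration_seconds": mean_duration,
        "entry_points": entry_points,
    }
```

Retaining each user's ordered view list (rather than only counts) also preserves the "path of exploration" the text mentions, should a finer-grained analysis be wanted later.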
Clipman (2005) has designed and tested a Visit Inspiration Checklist that asks visitors to anticipate what actions they might take following their visit, including further resources they might use, connections they might make, and activities they might undertake to extend their experience. Taking a longitudinal approach to data collection allows researchers to get a more complete picture of the role of these learning experiences in people's lives. Researchers have repeatedly shown that many of the conversations that begin in the museum continue once families are back at home (see Astor-Jack et al., 2007). Ethnographic case studies that involved a long-term relationship between the researcher and a set of families who visited museums frequently, allowing for repeated observations and interviews before, during, and after museum visits (Ellenbogen, 2002, 2003), have suggested that conversational connections between museum experiences and real-world contexts are frequent yet must be examined carefully, since the connections are not always obvious to those outside the family. Perhaps the most important and interesting work on participatory structures in informal environments is ethnographic, allowing for an analysis of particular discourse practices in relation to cultural norms and meanings that are enacted in the setting (Rogoff, 2003; McDermott and Varenne, 1995).
Strand 6: Identifying with the Scientific Enterprise

Scientific identity typically refers to a person's concept of herself as a potential scientist (Brickhouse, Lowery, and Schultz, 2000, 2001; Calabrese Barton, 2003). Research in this strand also pertains to the ways in which people experience and recognize their own agency in relation to activities associated with learning or doing science (Holland, Lachicotte, Skinner, and Cain, 1998; Hull and Greeno, 2006). Identity is often equated with a subjective sense of belonging: to a community, in a setting, or in an activity related to science. The changes in community affiliation and related behaviors that can signal changes in identity usually require extended time frames of involvement with a program or community (e.g., Beane and Pope, 2002; McCreedy, 2005). Brossard, Lewenstein, and Bonney (2005) showed that citizen scientists not only increased their knowledge but also were able to suggest revisions to scientists' protocols when they did not work. Identity changes often are reflected in the behaviors of others in the learners' lives, such as parents, caregivers, and the institutional staff involved. A sense of agency or belonging can be experienced retrospectively when reflecting on past events, it can be experienced in relation to current activities, and it can be projected into the future through imaginative acts regarding what one might become. To a greater or lesser degree, identity can be more a matter of embodied experience than of explicit labels for what someone can do or who one is. A child, for example, may engage fluently and comfortably with her family's gardening practices, yet not think of herself or be referred to by others as a gardener, a budding botanist, etc.
Another might gain qualitative understandings of Newtonian mechanics based on observations of everyday phenomena, and, as a consequence, engage in activities that build on this understanding, but not make explicit associations to various possible labels relating to her capabilities. Although researchers in the field generally agree that identity affects science participation and learning (National Research Council, 2007; Leinhardt and Knutson, 2004; Falk, 2006; Anderson, 2003), there are varied and disparate theoretical frameworks that address issues of identity. Some conceptions of identity emphasize personal beliefs and attitudes, for example, measured by the degree to which participants endorse such statements as "I have a good feeling toward science" or "I could be a good scientist" (Roth and Li, 2005; Weinburgh and Steele, 2000). Other conceptions of identity focus on the way that identity is created through talk and other features of moment-to-moment interactions that position people among the roles and statuses available in particular situations (Jacoby and Gonzales, 1991; Brown, Reveles, and Kelly, 2004; Hull and Greeno, 2006; Holland, Lachicotte, Skinner, and Cain, 1998; Holland and Lave, 2001; Rounds, 2006). This latter conception emphasizes that the type of person one can be in a setting (e.g., competent, skilled, creative, or lacking in these qualities) both depends on the way these types
are defined in social context and determines the possible identities someone can have. The ways that people interact with material resources (e.g., instruments, tools, notebooks, media) and other participants (e.g., through speaking, gesture, reading, writing) combine to assign individuals to the available identities (Hull and Greeno, 2006; Jacoby and Gonzales, 1991; Brown, Reveles, and Kelly, 2004). There seems to be a strong relationship between science-related identity and the kinds of activities people engage in, usually with others. Gutiérrez and Rogoff (2003), for example, emphasize the repertoires of practice (ways of participating in activities) that people come to know through participation in diverse communities, each with its own goals, needs, routines, and norms. These repertoires of practice serve as resources and help define who a person is, in terms of their social identity, in any given situation. Brown's research (2004) demonstrates the links between communication practices and the building of scientific identity, charting the complexities of negotiating between in-school and out-of-school practices and identities. Hull and Greeno (2006) describe identity changes for workers in a circuit board factory that co-occurred with the introduction of a new system of participation, symbolic representation, communication, management, and personal recognition at the site. This body of work illustrates the importance of considering the practical, experiential, and embodied aspects of scientific identity. Generally, the research on scientific identity emphasizes the opportunities that learners have to encounter and make use of the ideas, images, communities, resources, and pathways that can lead to progressively greater involvement in the practices of science.
Methods of Researching Strand 6 Outcomes

In many cases, research on scientific identity has relied on questionnaires and structured interviews regarding beliefs about oneself, one's experiences, and the supports for science learning that exist in one's school and community (Barron, 2006; Beane and Pope, 2002; Moore and Hill Foy, 1997; Schreiner and Sjoberg, 2004; Weinburgh and Steele, 2000). Longer-term studies focusing on changes in behavior or community affiliation have also been conducted using self-report measures based on questionnaires and structured interviews (Fadigan and Hammrich, 2004; Falk, 2008; Gupta and Siegel, 2008). In settings where long-term participation has led to evidence of changes in learners' identity, parents, caregivers, and the institutional staff have provided self-reports on how these changes were related to their own perceptions and behaviors (Barron, 2006; McCreedy, 2005; Falk, 2008). Studies of increasing levels of involvement and interest have included questionnaires, interviews, ethnographic methods, and analysis of learner artifacts (e.g., Barron, 2006; Bell et al., 2006; Brown, 2004; Nasir, 2002; Warren, Ballenger, Ogonowski, Rosebery, and Hudicourt-Barnes, 2001). Zoos and aquariums, which are
particularly interested in documenting behavior change related to conservation and the environment, typically question visitors about their intended behaviors, following up with phone calls or Internet-based interviews. The effect of science experience on career choice for children is a major Strand 6 outcome, but it is also very difficult to assess because the time frame involved is so long. Logistical difficulties include tracking individuals, securing long-term funding, and the many intervening factors that can alter the research plan (Allen et al., 2007). In most circumstances, it may be more feasible to look at the immediate choices that lead toward a potential science career, such as choice of school courses, after-school activities, reading material, games and hobbies, and the like. Some researchers have capitalized on extant datasets to conduct longitudinal analyses. In looking at career paths of youth first questioned in middle school and then followed into their adult lives, Tai, Liu, Maltese, and Fan (2006) document the importance of career expectations for young adolescents and suggest that early elementary experiences (before eighth grade) may be of importance. This research also supports the idea that the labels or plans people appropriate for themselves may be an important motivator for participation in activities associated with the label. Sachatello-Sawyer et al. (2002) suggest that being labeled a "museum lover" motivates attendance for adult program participants.

PERSPECTIVES, DIRECTIONS, AND CONCLUSIONS

The outcomes discussed in this chapter represent a broad view of the ways in which practitioners and researchers characterize and measure the effects of science learning experiences.
The six strands cover a wide range of approaches to studying and understanding individual learning, from those most focused on cognitive and conceptual change to those most focused on shifts in participation and identity. Although there is a diversity of thought in the informal science learning community about what outcomes are most important and what means of measurement are most appropriate, a rough and emerging consensus exists around some core assumptions about the nature of informal science learning outcomes.

Outcomes can include a broad range of behaviors. We have noted many of the key types of individual outcomes investigated. This kind of research could be designed to allow for varied personal learning trajectories and outcomes that are complex and holistic, rather than only those that are narrowly defined.

Outcomes can be unanticipated. Outcomes can be based on the goals and objectives of a program (and therefore closely tied to its design),
or they can be unplanned and unanticipated, developing contingently on the basis of what is most valuable to the participant. In informal settings, outcomes are often guided by the learners themselves. Research can target outcomes that emerge in these experiences, not only those that are defined a priori.

Outcomes can become evident at different points in time. Short-term outcome measures have long been used to assess the impact of informal learning experiences, but these experiences can also have enduring, long-term impacts that differ from the short-term ones.

Outcomes can occur at different scales. Outcomes defined on the level of individual participants answer the question: How is the individual influenced by the experience? Most of the outcomes discussed in this chapter and in the literature generally focus at this level. But it is also useful to ask: How is the entire social group in the environment influenced? For example, did group members learn about one another, reinforce group identity and history, or develop new strategies for collaborating? We can also define outcomes on the community scale: How does the activity, exhibition, or program influence the local community?

These assumptions regarding outcomes align with three high-level criteria that the evidence suggests are essential in the development of assessments appropriate for science learning in informal environments. First, the assessments must address not only cognitive outcomes, but also the range of intellectual, attitudinal, behavioral, social, and participatory capabilities that informal environments effectively promote (Jolly, Campbell, and Perlman, 2004; Hein, 1998; Schauble et al., 1995; Csikszentmihalyi and Hermanson, 1995).
Second, assessments should fit with the kind of participant experiences that make these environments attractive and engaging; that is, any assessment activities undertaken in informal settings should not undermine the features that make for effective learning there (Allen, 2002; Martin, 2004). Third, the assessments used must be valid, measuring what they purport to be measuring, that is, outcomes from those science learning experiences (National Research Council, 2001). Assessment must be valid both in terms of construct validity (it measures what it purports to measure) and in terms of ecological validity (it aligns with the opportunities for learning that are present in the learning environment) (Moss et al., in press). In light of the tendency to use conventional academic outcomes to study learning in informal settings, it is important for researchers and practitioners to carefully consider the ecological validity of such measures for informal settings. Measures must ensure that the same kinds of material, social, cognitive, and other features
of the activities designed to promote learning in an informal setting are part of the assessment, serving as cues for activating the capabilities and dispositions that participants have or might have learned. Before drawing conclusions about whether the informal experiences have led to particular outcomes, researchers and practitioners should ask themselves: Are the assessment activities similar in relevant ways to the learning activities in the environment? Are the assessments based on the same social norms as those that promote engagement in the learning activities? Overall, is it clear that learners in a setting have had ample opportunity to both learn and demonstrate desired outcomes? Without such clarity, it is difficult to make fair inferences about what has been learned or the effectiveness of the environment for promoting learning.

To a significant extent, the ability to answer these questions depends on how well the research community is able to describe the nature of participants' experience in particular types of informal learning environments, with an eye to eventually understanding what is consistent and systematic across these environments. An in-depth understanding of key features of the environments (e.g., what are the physical and social resources? what are the norms of behavior?), ways in which learning is framed or organized (e.g., what activities are presumed to lead to learning? how is learning supported? what does it mean to be knowledgeable in this setting?), and the capacities being built (e.g., what skills, knowledge, or concepts are learners engaging with?) can lead to critical insights regarding the particular contributions of informal experiences to science learning, thereby highlighting the outcomes one would most expect and want to see.
As important as it is to document the unique and valuable contributions of informal opportunities for learning, there is a tension in the field regarding the degree to which one can or should try to direct outcomes. On one hand, the field has an overarching commitment to valuing the great diversity of ways in which informal learning experiences can positively affect participants. Researchers and practitioners are receptive to acknowledging the many types of outcomes, anticipated or not, that emerge from the interplay of people and resources as they engage in science learning activities. This receptivity to contingencies, George Hein explains, is "a matter of ideology" (1995, p. 199):

By framing the questions as we do, we leave ourselves open for the broader responses, for noting unexpected behaviors, and we do not shut out the possibility of documenting learning that is distinct from the teaching intended. By leaving our list of issues deliberately vague and general, we do not exclude the possibility of learning something about the . . . experience that may be outside the framework of . . . expectations.

Hein's formulation suggests that informal environments are oriented toward providing learning experiences that are relevant to the interests and needs
of the people they serve. One can argue, then, that as institutions, informal environments for science learning are characterized by a flexibility and openness to changes in the communities, societies, and cultures of which they are a part. In order to do justice to both informal environments and those served by them, efforts to identify, measure, and document learning should be expansive enough to accommodate the full range of what and how they may help people learn.

At the same time, researchers and practitioners recognize the importance of building consensus in the field regarding standards for research methods and learning outcomes (Bitgood, Serrell, and Thompson, 1994; Loomis, 1989). Without a common framework specifying outcomes and approaches, it is difficult to show gains in learning that occur across localities or across time frames, and attempts to portray the contributions of the infrastructure for science learning that exists across varied institutions and activities will continue to be hindered. Efforts to create more rigorous, meaningful, and equitable opportunities for science learning depend on understanding what opportunities for science learning exist across the educational landscape, what the nature of this learning is in the variety of environments, how outcomes currently complement and build on one another, and how designs, processes, and practices for supporting learning can be improved in the future. Developing new ways to document learning outcomes that are both appropriate in informal environments and useful across the range of them would create greater opportunity to leverage their potency to improve science learning for all.

REFERENCES

Abd-El-Khalick, F., and Lederman, N.G. (2000). The influence of history of science courses on students' views of nature of science. Journal of Research in Science Teaching, 37 (10), 1057-1095.
Allen, S. (2002). Looking for learning in visitor talk: A methodological exploration. In G.
Leinhardt, K. Crowley, and K. Knutson (Eds.), Learning conversations in museums (pp. 259-303). Mahwah, NJ: Lawrence Erlbaum Associates.
Allen, S. (2004). Designs for learning: Studying science museum exhibits that do more than entertain. Science Education, 88 (Suppl. 1), S17-S33.
Allen, S., Gutwill, J., Perry, D.L., Garibay, C., Ellenbogen, K.M., Heimlich, J.E., Reich, C.A., and Klein, C. (2007). Research in museums: Coping with complexity. In J.H. Falk, L.D. Dierking, and S. Foutz (Eds.), In principle, in practice: Museums as learning institutions (pp. 229-245). Walnut Creek, CA: AltaMira Press.
American Association for the Advancement of Science. (1993). Benchmarks for science literacy. New York: Oxford University Press.
Anderson, D. (2003). Visitors' long-term memories of world expositions. Curator, 46 (4), 401-420.
Anderson, D., and Nashon, S. (2007). Predators of knowledge construction: Interpreting students' metacognition in an amusement park physics program. Science Education, 91 (2), 298-320.
Anderson, D., and Piscitelli, B. (2002). Parental recollections of childhood museum visits. Museum National, 10 (4), 26-27.
Anderson, D., and Shimizu, H. (2007). Factors shaping vividness of memory episodes: Visitors' long-term memories of the 1970 Japan world exposition. Memory, 15 (2), 177-191.
Anderson, D., Lucas, K.B., Ginns, I.S., and Dierking, L.D. (2000). Development of knowledge about electricity and magnetism during a visit to a science museum and related post-visit activities. Science Education, 84 (5), 658-679.
Astor-Jack, T., Whaley, K.K., Dierking, L.D., Perry, D., and Garibay, C. (2007). Understanding the complexities of socially mediated learning. In J.H. Falk, L.D. Dierking, and S. Foutz (Eds.), In principle, in practice: Museums as learning institutions. Walnut Creek, CA: AltaMira Press.
Azevedo, F.S. (2006). Personal excursions: Investigating the dynamics of student engagement. International Journal of Computers for Mathematical Learning, 11 (1), 57-98.
Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecology perspective. Human Development, 49 (4), 193-224.
Bartholomew, H., Osborne, J., and Ratcliffe, M. (2004). Teaching students ideas about science: Five dimensions of effective practice. Science Education, 88 (5), 655-682.
Beals, D.E. (1993). Explanatory talk in low-income families' mealtime. Preschoolers' questions and parents' explanations: Causal thinking in everyday parent-child activity. Hispanic Journal of Behavioral Sciences, 19 (1), 3-33.
Beane, D.B., and Pope, M.S. (2002). Leveling the playing field through object-based service learning. In S. Paris (Ed.), Perspectives on object-centered learning in museums (pp. 325-349). Mahwah, NJ: Lawrence Erlbaum Associates.
Beaumont, L. (2005). Summative evaluation of wild reef-sharks at Shedd. Report for the John G. Shedd Aquarium.
Available: http://www.informalscience.com/download/case_studies/report_133.doc [accessed October 2008].
Bell, P., and Linn, M.C. (2002). Beliefs about science: How does science instruction contribute? In B.K. Hofer and P.R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing. Mahwah, NJ: Lawrence Erlbaum Associates.
Bell, P., Bricker, L.A., Lee, T.F., Reeve, S., and Zimmerman, H.H. (2006). Understanding the cultural foundations of children's biological knowledge: Insights from everyday cognition research. In A. Barab, K.E. Hay, and D. Hickey (Eds.), 7th international conference of the learning sciences, ICLS 2006 (vol. 2, pp. 1029-1035). Mahwah, NJ: Lawrence Erlbaum Associates.
Bitgood, S., Serrell, B., and Thompson, D. (1994). The impact of informal education on visitors to museums. In V. Crane, H. Nicholson, M. Chen, and S. Bitgood (Eds.), Informal science learning: What the research says about television, science museums, and community-based projects (pp. 61-106). Dedham, MA: Research Communication.
Borun, M., Chambers, M., and Cleghorn, A. (1996). Families are learning in science museums. Curator, 39 (2), 123-138.
Borun, M., Massey, C., and Lutter, T. (1993). Naive knowledge and the design of science museum exhibits. Curator, 36 (3), 201-219.
Brickhouse, N.W., Lowery, P., and Schultz, K. (2000). What kind of a girl does science? The construction of school science identities. Journal of Research in Science Teaching, 37 (5), 441-458.
Brickhouse, N.W., Lowery, P., and Schultz, K. (2001). Embodying science: A feminist perspective on learning. Journal of Research in Science Teaching, 18 (3), 282-295.
Brossard, D., and Shanahan, J. (2003). Do they want to have their say? Media, agricultural biotechnology, and authoritarian views of democratic processes in science. Mass Communication and Society, 6 (3), 291-312.
Brossard, D., Lewenstein, B., and Bonney, R. (2005). Scientific knowledge and attitude change: The impact of a citizen science program. International Journal of Science Education, 27 (9), 1099-1121.
Brown, B. (2004). Discursive identity: Assimilation into the culture of science and its implications for minority students. Journal of Research in Science Teaching, 41 (8), 810-834.
Brown, B., Reveles, J., and Kelly, G. (2004). Scientific literacy and discursive identity: A theoretical framework for understanding science learning. Science Education, 89 (5), 779-802.
Calabrese Barton, A. (2003). Teaching science for social justice. New York: Teachers College Press.
Callanan, M.A., and Oakes, L. (1992). Preschoolers' questions and parents' explanations: Causal thinking in everyday activity. Cognitive Development, 7, 213-233.
Callanan, M.A., Shrager, J., and Moore, J. (1995). Parent-child collaborative explanations: Methods of identification and analysis. Journal of the Learning Sciences, 4 (1), 105-129.
Campbell, P. (2008, March). Evaluating youth and community programs: In the new ISE framework. In A. Friedman (Ed.), Framework for evaluating impacts of informal science education projects (pp. 69-75). Available: http://insci.org/docs/Eval_Framework.pdf [accessed October 2008].
Carey, S., and Smith, C. (1993). On understanding the nature of scientific knowledge.
Educational Psychologist, 28 (3), 235-251.
Chaput, S.S., Little, P.M.D., and Weiss, H. (2004). Understanding and measuring attendance in out-of-school time programs. Issues and Opportunities in Out-of-School Time Evaluation, 7, 1-6.
Clipman, J.M. (2005). Development of the museum affect scale and visit inspiration checklist. Paper presented at the 2005 Annual Meeting of the Visitor Studies Association, Philadelphia. Available: http://www.visitorstudiesarchives.org [accessed October 2008].
COSMOS Corporation. (1998). A report on the evaluation of the National Science Foundation's informal science education program. Washington, DC: National Science Foundation. Available: http://www.nsf.gov/pubs/1998/nsf9865/nsf9865.htm [accessed October 2008].
Crowley, K., and Jacobs, M. (2002). Islands of expertise and the development of family scientific literacy. In G. Leinhardt, K. Crowley, and K. Knutson (Eds.), Learning conversations in museums (pp. 333-356). Mahwah, NJ: Lawrence Erlbaum Associates.
Csikszentmihalyi, M., and Hermanson, K. (1995). Intrinsic motivation in museums: Why does one want to learn? In J.H. Falk and L.D. Dierking (Eds.), Public institutions for personal learning: Establishing a research agenda. Washington, DC: American Association of Museums.
Csikszentmihalyi, M., Rathunde, K., and Whalen, S. (1993). Talented teenagers: The roots of success and failure. New York: Cambridge University Press.
Dancu, T. (2006). Comparing three methods for measuring children's engagement with exhibits: Observations, caregiver interviews, and child interviews. Poster presented at the 2006 Annual Meeting of the Visitor Studies Association, Grand Rapids, MI.
Delandshere, G. (2002). Assessment as inquiry. Teachers College Record, 104 (7), 1461-1484.
Diamond, J. (1999). Practical evaluation guide: Tools for museums and other educational settings. Walnut Creek, CA: AltaMira Press.
Dierking, L.D., Adelman, L.M., Ogden, J., Lehnhardt, K., Miller, L., and Mellen, J.D. (2004). Using a behavior change model to document the impact of visits to Disney's Animal Kingdom: A study investigating intended conservation action. Curator, 47 (3), 322-343.
diSessa, A. (1988). Knowledge in pieces. In G. Forman and P. Pufall (Eds.), Constructivism in the computer age (pp. 49-70). Mahwah, NJ: Lawrence Erlbaum Associates.
Driver, R., Leach, J., Millar, R., and Scott, P. (1996). Young people's images of science. Buckingham, England: Open University Press.
Eccles, J.S., Wigfield, A., and Schiefele, U. (1998). Motivation to succeed. In N. Eisenberg (Ed.), Handbook of child psychology: Social, emotional, and personality development (5th ed., pp. 1017-1095). New York: Wiley.
Ekman, P., and Rosenberg, E. (Eds.). (2005). What the face reveals: Basic and applied studies of spontaneous expression using the facial action coding system. New York: Oxford University Press.
Ellenbogen, K.M. (2002). Museums in family life: An ethnographic case study.
In G. Leinhardt, K. Crowley, and K. Knutson (Eds.), Learning conversations in museums. Mahwah, NJ: Lawrence Erlbaum Associates.
Ellenbogen, K.M. (2003). From dioramas to the dinner table: An ethnographic case study of the role of science museums in family life. Dissertation Abstracts International, 64 (3), 846A. (University Microfilms No. AAT30-85758.)
Ellenbogen, K.M., Luke, J.J., and Dierking, L.D. (2004). Family learning research in museums: An emerging disciplinary matrix? Science Education, 88 (Suppl. 1), S48-S58.
Engle, R.A., and Conant, F.R. (2002). Guiding principles of fostering productive disciplinary engagement: Explaining an emergent argument. Cognition and Instruction, 20 (4), 399-483.
European Commission. (2001). Eurobarometer 55.2: Europeans, science and technology. Brussels, Belgium: Author.
Fadigan, K.A., and Hammrich, P.L. (2004). A longitudinal study of the educational and career trajectories of female participants of an urban informal science education program. Journal of Research in Science Teaching, 41 (8), 835-860.
Falk, J.H. (2006). The impact of visit motivation on learning: Using identity as a construct to understand the visitor experience. Curator, 49 (2), 151-166.
Falk, J.H. (2008). Calling all spiritual pilgrims: Identity in the museum experience. Museum (Jan./Feb.). Available: http://www.aam-us.org/pubs/mn/spiritual.cfm [accessed March 2009].
Falk, J.H., and Adelman, L.M. (2003). Investigating the impact of prior knowledge and interest on aquarium visitor learning. Journal of Research in Science Teaching, 40 (2), 163-176.
Falk, J.H., and Dierking, L.D. (2000). Learning from museums. Walnut Creek, CA: AltaMira Press.
Falk, J.H., and Storksdieck, M. (2005). Using the "contextual model of learning" to understand visitor learning from a science center exhibition. Science Education, 89 (5), 744-778.
Falk, J.H., Moussouri, T., and Coulson, D. (1998). The effect of visitors' agendas on museum learning. Curator, 41 (2), 107-120.
Falk, J.H., Reinhard, E.M., Vernon, C.L., Bronnenkant, K., Deans, N.L., and Heimlich, J.E. (2007). Why zoos and aquariums matter: Assessing the impact of a visit. Silver Spring, MD: Association of Zoos and Aquariums.
Falk, J.H., Scott, C., Dierking, L.D., Rennie, L.J., and Cohen-Jones, M.S. (2004). Interactives and visitor learning. Curator, 47 (2), 171-198.
Feher, E., and Rice, K. (1987). Pinholes and images: Children's conceptions of light and vision. I. Science Education, 71 (4), 629-639.
Feher, E., and Rice, K. (1988). Shadows and anti-images: Children's conceptions of light and vision. II. Science Education, 72 (5), 637-649.
Fender, J.G., and Crowley, K. (2007). How parent explanation changes what children learn from everyday scientific thinking. Journal of Applied Developmental Psychology, 28 (3), 189-210.
Gallenstein, N. (2005). Never too young for a concept map. Science and Children, 43 (1), 45-47.
Garibay, C. (2005, July). Visitor studies and underrepresented audiences. Paper presented at the 2005 Visitor Studies Conference, Philadelphia.
Garibay, C. (2006, January). Primero la Ciencia remedial evaluation. Unpublished manuscript, Chicago Botanic Garden.
Gration, M., and Jones, J. (2008, May/June). Learning from the process: Developmental evaluation within "agents of change." ASTC Dimensions, From Intent to Impact: Building a Culture of Evaluation. Available: http://www.astc.org/blog/2008/05/16/from-intent-to-impact-building-a-culture-of-evaluation/ [accessed April 2009].
Guichard, H. (1995). Designing tools to develop the conception of learners. International Journal of Science Education, 17 (2), 243-253.
Gupta, P., and Siegel, E. (2008). Science career ladder at the New York Hall of Science: Youth facilitators as agents of inquiry. In R.E. Yaeger and J.H. Falk (Eds.), Exemplary science in informal education settings: Standards-based success stories. Arlington, VA: National Science Teachers Association.
Gutiérrez, K., and Rogoff, B. (2003). Cultural ways of learning: Individual traits or repertoires of practice. Educational Researcher, 32 (5), 19-25.
Hammer, D., and Elby, A. (2003). Tapping epistemological resources for learning physics. Journal of the Learning Sciences, 12 (1), 53-90.
Heath, S.B. (1999). Dimensions of language development: Lessons from older children. In A.S. Masten (Ed.), Cultural processes in child development: The Minnesota symposium on child psychology (Vol. 29, pp. 59-75). Mahwah, NJ: Lawrence Erlbaum Associates.
Hein, G.E. (1995). Evaluating teaching and learning in museums. In E. Hooper-Greenhill (Ed.), Museums, media, message (pp. 189-203). New York: Routledge.
Hein, G.E. (1998). Learning in the museum. New York: Routledge.
Hidi, S., and Renninger, K.A. (2006). The four-phase model of interest development. Educational Psychologist, 41 (2), 111-127.
Holland, D., and Lave, J. (Eds.). (2001). History in person: Enduring struggles, contentious practice, intimate identities. Albuquerque, NM: School of American Research Press.
Holland, D., Lachicotte, W., Skinner, D., and Cain, C. (1998). Identity and agency in cultural worlds. Cambridge, MA: Harvard University Press.
Huba, M.E., and Freed, J. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Needham Heights, MA: Allyn and Bacon.
Hull, G.A., and Greeno, J.G. (2006). Identity and agency in nonschool and school worlds. In Z. Bekerman, N. Burbules, and D.S. Keller (Eds.), Learning in places: The informal education reader (pp. 77-97). New York: Peter Lang.
Humphrey, T., and Gutwill, J.P. (2005). Fostering active prolonged engagement: The art of creating APE exhibits. San Francisco: The Exploratorium.
Isen, A.M. (2004). Some perspectives on positive feelings and emotions: Positive affect facilitates thinking and problem solving. In A.S.R. Manstead, N. Frijda, and A. Fischer (Eds.), Feelings and emotions: The Amsterdam symposium (pp. 263-281). New York: Cambridge University Press.
Jackson, A., and Leahy, H.R. (2005). "Seeing it for real?" Authenticity, theater and learning in museums. Research in Drama Education, 10 (3), 303-325.
Jacoby, S., and Gonzales, P. (1991).
The constitution of expert-novice in scientific discourse. Issues in Applied Linguistics, 2 (2), 149-181.
Jolly, E.J., Campbell, P.B., and Perlman, L. (2004). Engagement, capacity, and continuity: A trilogy for student success. St. Paul: GE Foundation and Science Museum of Minnesota.
Jung, T., Makeig, S., Stensmo, M., and Sejnowski, T.J. (1997). Estimating alertness from the EEG power spectrum. Biomedical Engineering, 44 (1), 60-69.
Korn, R. (2003). Summative evaluation of "vanishing wildlife." Monterey, CA: Monterey Bay Aquarium. Available: http://www.informalscience.org/evaluations/report_45.pdf [accessed October 2008].
Korn, R. (2006). Summative evaluation for "search for life." Queens: New York Hall of Science. Available: http://www.informalscience.org/evaluation/show/66 [accessed October 2008].
Kort, B., Reilly, R., and Picard, R.W. (2001). An affective model of interplay between emotions and learning: Reengineering educational pedagogy: Building a learning companion. In Proceedings of the IEEE International Conference on Advanced Learning Technologies, Madison, WI.
Lave, J. (1996). Teaching, as learning, in practice. Mind, Culture, and Activity, 3 (3), 149-164.
Leinhardt, G., and Knutson, K. (2004). Listening in on museum conversations. Walnut Creek, CA: AltaMira Press.
Leinhardt, G., Tittle, C., and Knutson, K. (2002). Talking to oneself: Diaries of museum visits. In G. Leinhardt, K. Crowley, and K. Knutson (Eds.), Learning conversations in museums (pp. 103-133). Mahwah, NJ: Lawrence Erlbaum Associates.
Lipstein, R., and Renninger, K.A. (2006). "Putting things into words": The development of 12-15-year-old students' interest for writing. In P. Boscolo and S. Hidi (Eds.), Motivation and writing: Research and school practice (pp. 113-140). New York: Kluwer Academic/Plenum.
Loomis, R.J. (1989). The countenance of visitor studies in the 1980s. Visitor Studies, 1 (1), 12-24.
Lu, S., and Graesser, A.C. (in press). An eye tracking study on the roles of texts, pictures, labels, and arrows during the comprehension of illustrated texts on device mechanisms. Submitted to Cognitive Science.
Ma, J. (2006). Philosopher's corner. Unpublished report. Available: http://www.exploratorium.edu/partner/pdf/philCorner_rp_02.pdf [accessed October 2008].
Martin, L.M. (2004). An emerging research framework for studying informal learning and schools. Science Education, 88 (Suppl. 1), S71-S82.
McCreedy, D. (2005). Youth and science: Engaging adults as advocates. Curator, 48 (2), 158-176.
McDermott, R., and Varenne, H. (1995). Culture as disability. Anthropology and Education Quarterly, 26 (3), 324-348.
McNamara, P. (2005). Amazing feats of aging: A summative evaluation report. Portland: Oregon Museum of Science and Industry. Available: http://www.informalscience.org/evaluation/show/82 [accessed October 2008].
Meisner, R., vom Lehn, D., Heath, C., Burch, A., Gammon, B., and Reisman, M. (2007). Exhibiting performance: Co-participation in science centres and museums. International Journal of Science Education, 29 (12), 1531-1555.
Melton, A.W. (1935). Problems of installation in museums of art.
Washington, DC: American Association of Museums.
Miller, J.D. (2004). Public understanding of and attitudes toward scientific research: What we know and what we need to know. Public Understanding of Science, 13 (3), 273-294.
Moore, R.W., and Hill Foy, R.L. (1997). The scientific attitude inventory: A revision (SAI II). Journal of Research in Science Teaching, 34 (4), 327-336.
Moss, P.A., Girard, B., and Haniford, L. (2006). Validity in educational assessment. Review of Research in Education, 30, 109-162.
Moss, P.A., Pullin, D., Haertel, E.H., Gee, J.P., and Young, L. (Eds.). (in press). Assessment, equity, and opportunity to learn. New York: Cambridge University Press.
Mota, S., and Picard, R.W. (2003). Automated posture analysis for detecting learner's interest level. Paper prepared for the Workshop on Computer Vision and Pattern Recognition for Human-Computer Interaction, June, Madison, WI. Available: http://affect.media.mit.edu/pdfs/03.mota-picard.pdf [accessed March 2009].
Moussouri, T. (1997). The use of children's drawings as an evaluation tool in the museum. Museological Review, 4, 40-50.
Moussouri, T. (2007). Implications of the social model of disability for visitor research. Visitor Studies, 10 (1), 90-106.
Myers, O.E., Saunders, C.D., and Birjulin, A.A. (2004). Emotional dimensions of watching zoo animals: An experience sampling study building on insights from psychology. Curator, 47 (3), 299-321.
Nasir, N.S. (2002). Identity, goals, and learning: Mathematics in cultural practices. Mathematical Thinking and Learning, 4 (2-3), 213-247.
Nasir, N.S., Rosebery, A.S., Warren, B., and Lee, C.D. (2006). Learning as a cultural process: Achieving equity through diversity. In R.K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 489-504). New York: Cambridge University Press.
National Research Council. (1996). National science education standards. National Committee on Science Education Standards and Assessment. Washington, DC: National Academy Press.
National Research Council. (2000). How people learn: Brain, mind, experience, and school (expanded ed.). Committee on Developments in the Science of Learning. J.D. Bransford, A.L. Brown, and R.R. Cocking (Eds.). Washington, DC: National Academy Press.
National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Committee on the Foundations of Assessment. J.W. Pellegrino, N. Chudowsky, and R. Glaser (Eds.). Washington, DC: National Academy Press.
National Research Council. (2002). Scientific research in education. Committee on Scientific Principles for Education Research. R.J. Shavelson and L. Towne (Eds.). Washington, DC: National Academy Press.
National Research Council. (2007). Taking science to school: Learning and teaching science in grades K-8. Committee on Science Learning, Kindergarten Through Eighth Grade. R.A. Duschl, H.A. Schweingruber, and A.W. Shouse (Eds.). Washington, DC: The National Academies Press.
National Science Board. (2002). Science and engineering indicators 2002 (NSB-02-1). Arlington, VA: National Science Foundation. Available: http://www.nsf.
gov/statistics/seind02/pdfstart.htm [accessed October 2008]. Neumann, A. (2006). Professing passion: Emotion in the scholarship of professors in research universities. American Educational Research Journal, 43 (3), 381-424. OâNeill, M.C., and Dufresne-Tasse, C. (1997). Looking in everyday life/Gazing in museums. Museum Management and Curatorship, 16 (2), 131-142. Osborne, J., Collins, S., Ratcliffe, M., Millar, R., and Duschl, R. (2003). What âideas- about-scienceâ should be taught in school science? A Delphi study of the expert community. Journal of Research in Science Teaching, 40 (7), 692-720. Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal emotions. New York: Oxford University Press. Penner, D., Giles, N.D., Lehrer, R., and Schauble, L. (1997). Building functional models: Designing an elbow. Journal of Research in Science Teaching, 34 (2), 125-143. Perry, D. L. (1993). Measuring learning with the knowledge hierarchy. Visitor Studies: Theory, Research and Practice: Collected Papers from the 1993 Visitor Studies Conference, 6, 73-77. Plutchik, R. (1961). Studies of emotion in the light of a new theory. Psychological reports, 8, 170.
Assessment 87
Randol, S.M. (2005). The nature of inquiry in science centers: Describing and assessing inquiry at exhibits. Unpublished doctoral dissertation, University of California, Berkeley.
Raphling, B., and Serrell, B. (1993). Capturing affective learning. Current Trends in Audience Research and Evaluation, 7, 57-62.
Reich, C., Chin, E., and Kunz, E. (2006). Museums as forum: Engaging science center visitors in dialogue with scientists and one another. Informal Learning Review, 79, 1-8.
Renninger, K.A. (2000). Individual interest and its implications for understanding intrinsic motivation. In C. Sansone and J.M. Harackiewicz (Eds.), Intrinsic motivation: Controversies and new directions (pp. 373-404). San Diego: Academic Press.
Renninger, K.A. (2003). Effort and interest. In J. Guthrie (Ed.), The encyclopedia of education (2nd ed., pp. 704-707). New York: Macmillan.
Renninger, K.A., and Hidi, S. (2002). Interest and achievement: Developmental issues raised by a case study. In A. Wigfield and J. Eccles (Eds.), Development of achievement motivation (pp. 173-195). New York: Academic Press.
Renninger, K.A., and Wozniak, R.H. (1985). Effect of interests on attentional shift, recognition, and recall in young children. Developmental Psychology, 21 (4), 624-631.
Renninger, K.A., Hidi, S., and Krapp, A. (1992). The role of interest in learning and development. Mahwah, NJ: Lawrence Erlbaum Associates.
Renninger, K.A., Sansone, C., and Smith, J.L. (2004). Love of learning. In C. Peterson and M.E.P. Seligman (Eds.), Character strengths and virtues: A handbook and classification (pp. 161-179). New York: Oxford University Press.
Robinson, E.S. (1928). The behavior of the museum visitor. New Series, No. 5. Washington, DC: American Association of Museums.
Rockman Et Al. (1996). Evaluation of Bill Nye the Science Guy: Television series and outreach. San Francisco: Author. Available: http://www.rockman.com/projects/124.kcts.billNye/BN96.pdf [accessed October 2008].
Rockman Et Al. (2007). Media-based learning science in informal environments. Background paper for the Learning Science in Informal Environments Committee of the National Research Council. Available: http://www7.nationalacademies.org/bose/Rockman_et%20al_Commissioned_Paper.pdf [accessed October 2008].
Rogoff, B. (2003). The cultural nature of human development. New York: Oxford University Press.
Rosenberg, S., Hammer, D., and Phelan, J. (2006). Multiple epistemological coherences in an eighth-grade discussion of the rock cycle. Journal of the Learning Sciences, 15 (2), 261-292.
Roth, E.J., and Li, E. (2005, April). Mapping the boundaries of science identity in ISME's first year. Paper presented at the annual meeting of the American Educational Research Association, Montreal.
Rounds, J. (2006). Doing identity work in museums. Curator, 49 (2), 133-150.
Russell, J.A., and Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11, 273-294.
Sachatello-Sawyer, B., Fellenz, R.A., Burton, H., Gittings-Carlson, L., Lewis-Mahony, J., and Woolbaugh, W. (2002). Adult museum programs: Designing meaningful experiences. American Association for State and Local History Book Series. Blue Ridge Summit, PA: AltaMira Press.
Sandoval, W.A. (2005). Understanding students' practical epistemologies and their influence on learning through inquiry. Science Education, 89 (4), 634-656.
Schauble, L., Glaser, R., Duschl, R., Schulze, S., and John, J. (1995). Students' understanding of the objectives and procedures of experimentation in the science classroom. Journal of the Learning Sciences, 4 (2), 131-166.
Schreiner, C., and Sjøberg, S. (2004). Sowing the seeds of ROSE: Background, rationale, questionnaire development and data collection for ROSE (relevance of science education), a comparative study of students' views of science and science education. Department of Teacher Education and School Development, University of Oslo.
Schwartz, D.L., Bransford, J.D., and Sears, D. (2005). Efficiency and innovation in transfer. In J.P. Mestre (Ed.), Transfer of learning from a modern multidisciplinary perspective (pp. 1-51). Greenwich, CT: Information Age.
Schwartz, R.S., and Lederman, N.G. (2002). "It's the nature of the beast": The influence of knowledge and intentions on learning and teaching nature of science. Journal of Research in Science Teaching, 39 (3), 205-236.
Serrell, B. (1998). Paying attention: Visitors and museum exhibitions. Washington, DC: American Association of Museums.
Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29 (7), 4-14.
Shute, V.J. (2008). Focus on formative feedback. Review of Educational Research, 78 (1), 153-189.
Smith, C.L., Maclin, D., Houghton, C., and Hennessey, M.G. (2000). Sixth-grade students' epistemologies of science: The impact of school science experiences on epistemological development. Cognition and Instruction, 18, 349-422.
Songer, N.B., and Linn, M.C. (1991). How do students' views of science influence knowledge integration? Journal of Research in Science Teaching, 28 (9), 761-784.
Spock, M. (2000). On beyond now: Strategies for assessing the long-term impact of museum experiences. Panel discussion at the American Association of Museums Conference, Baltimore.
St. John, M., and Perry, D.L. (1993). Rethink role, science museums urged. ASTC Newsletter, 21 (5), 1, 6-7.
Steele, C.M. (1997). A threat in the air: How stereotypes shape the intellectual identities and performance of women and African Americans. American Psychologist, 52 (6), 613-629.
Stevens, R. (2007). Capturing ideas in digital things: The Traces digital annotation medium. In R. Goldman, B. Barron, and R. Pea (Eds.), Video research in the learning sciences. Cambridge: Cambridge University Press.
Stevens, R., and Hall, R.L. (1997). Seeing the "tornado": How "video traces" mediate visitor understandings of natural spectacles in a science museum. Science Education, 81 (6), 735-748.
Stevens, R., and Toro-Martell, S. (2003). Leaving a trace: Supporting museum visitor interpretation and interaction with digital media annotation systems. Journal of Museum Education, 28 (2), 25-31.
Tai, R.H., Liu, C.Q., Maltese, A.V., and Fan, X. (2006). Planning early for careers in science. Science, 312 (5777), 1143-1144.
Taylor, R. (1994). The influence of a visit on attitude and behavior toward nature conservation. Visitor Studies, 6 (1), 163-171.
Thompson, S., and Bonney, R. (2007, March). Evaluating the impact of participation in an on-line citizen science project: A mixed-methods approach. In J. Trant and D. Bearman (Eds.), Museums and the web 2007: Proceedings. Toronto: Archives and Museum Informatics. Available: http://www.archimuse.com/mw2007/papers/thompson/thompson.html [accessed October 2008].
Travers, R.M.W. (1978). Children's interests. Kalamazoo: Michigan State University, College of Education.
Van Luven, P., and Miller, C. (1993). Concepts in context: Conceptual frameworks, evaluation and exhibition development. Visitor Studies, 5 (1), 116-124.
vom Lehn, D., Heath, C., and Hindmarsh, J. (2001). Exhibiting interaction: Conduct and collaboration in museums and galleries. Symbolic Interaction, 24 (2), 189-216.
vom Lehn, D., Heath, C., and Hindmarsh, J. (2002). Video-based field studies in museums and galleries. Visitor Studies, 5 (3), 15-23.
Vosniadou, S., and Brewer, W.F. (1992). Mental models of the earth: A study of conceptual change in childhood. Cognitive Psychology, 24 (4), 535-585.
Warren, B., Ballenger, C., Ogonowski, M., Rosebery, A., and Hudicourt-Barnes, J. (2001). Rethinking diversity in learning science: The logic of everyday sense-making. Journal of Research in Science Teaching, 38, 529-552.
Weinburgh, M.H., and Steele, D. (2000). The modified attitude toward science inventory: Developing an instrument to be used with fifth grade urban students. Journal of Women and Minorities in Science and Engineering, 6 (1), 87-94.
Wilson, M. (Ed.). (2004). Towards coherence between classroom assessment and accountability. Chicago: University of Chicago Press.