The body of research and writing on science literacy is immense and scattered across several scholarly fields. It has also been the subject of several comprehensive reviews (e.g., Miller, 1983, 2004; DeBoer, 2000; Laugksch, 2000; Roberts, 2007; Pardo and Calvo, 2004). In this chapter, we explore the many definitions of both science literacy1 and health literacy, as well as how the concepts have been measured. In order to put definitions of science literacy in context, we begin by examining some of the common justifications for promoting science literacy because definitions of the term are informed by ideas and assumptions about its value. We then describe how definitions of science literacy have changed over time. Building on this foundation, we then identify a set of aspects that appear to be common across many different definitions in order to provide some clarity about how the term may be both used and understood. We conclude by describing the history of the measurement of science literacy—an enterprise that has remained fairly removed from the conceptual evolution of the term—explaining how the pervasive reliance on narrow measurements of science knowledge constrains understanding of science literacy.
We also discuss the definitions of health literacy, as well as how the concept has been measured. Because science literacy is the primary focus of the committee’s charge, we devote most of our attention to this topic, addressing health literacy in separate sections intended to show how the two ideas are—and are
not—connected. Overall, in this chapter we seek to provide the historical and conceptual context necessary to understand the key arguments in the field and put the following chapters in context.
Four broad rationales have been proposed as to why science literacy is important and necessary: the economic rationale, the personal rationale, the democratic rationale, and the cultural rationale. In this section, we examine each of these rationales in order to provide a context for how the desired outcomes of science literacy inform understanding of the term itself. In addition, we consider the need for science literacy in new media environments (see Box 2-1).
The Economic Rationale
The economic rationale for science literacy is closely related to the impetus for educating the general population in science. For instance, a committee set up in the United Kingdom after World War I to investigate the state of science education argued: “A nation thoroughly trained in scientific method and stirred with enthusiasm for penetrating and understanding the secrets of nature, would no doubt reap a rich material harvest of comfort and prosperity” (Committee to Enquire into the Position of Natural Science in the Educational System of Great Britain, 1918, p. 7). In one form or another, this argument has been a feature of the discussion about the role of science education in society for the past 100 years (see, e.g., Dainton, 1968; European Commission, 2004; Lord Sainsbury of Turville, 2007; National Academy of Sciences, National
Academy of Engineering, and Institute of Medicine, 2007, 2010; Rutherford and Ahlgren, 1989; The Royal Society, 2014). One of the most recent articulations is offered by Hanushek and Woessmann, two prominent economists of education who draw from their extensive analysis of nations’ gross domestic product and performance on international tests to argue that the knowledge capital of nations is “powerfully related to long-run growth rates” (Hanushek and Woessmann, 2016, p. 64).
The essential premise of this utilitarian argument is that advanced economies require a scientifically and technologically skilled population, both to fill jobs in science- and technology-related professions, such as computer science and engineering, and to fill the many jobs that require some knowledge of science in today’s society, such as nursing, physiotherapy, and construction. Although many authors treat professional training and science literacy as separate goals (e.g., Osborne and Dillon, 2008), proponents of the economic rationale argue that science literacy contributes to professional and economic success across a very wide range of contexts. Science literacy, from this perspective, is a valued outcome because it strengthens economies and economic competitiveness, leading to lower unemployment and a higher standard of living. With respect to employment claims, however, the mechanisms through which science education contributes to economic growth are contested. For example, countering widespread arguments about the need for more science, technology, engineering, and mathematics (STEM) professionals, data suggest that most STEM fields experience no shortage at all when compared with other professions, with computer science and engineering being notable exceptions (Lowell and Salzman, 2007; Salzman, 2013; Weissman, 2013; Xie and Killewald, 2012).
The Personal Rationale
The personal rationale is that science literacy helps people respond to issues and challenges that emerge in their personal and community contexts. According to this rationale, people are confronted with a range of decisions, such as those about health, their consumption of materials and energy, and their lifestyle, in which an understanding of science (or an ability to interact with science) might help them to take informed actions and lead richer, healthier lives (OECD, 2012a). For instance, many conversations with health professionals require some understanding of the body, the structure and function of its many organs and systems, and even the nature of risk. Similarly, decisions and choices about energy may be informed by some understanding of the concept and the consequences of one choice in comparison with another.
The Democratic Rationale
The democratic rationale rests on the claim that a democracy only functions, or at least functions better, when its citizens are informed participants in civic decision making. Proponents of this rationale argue that many of the major problems facing humanity—such as the prevention of disease, the production of “clean” energy, the supply of potable water, and climate change—should be understood and addressed at least in part through scientific and technological advances. Only science literate citizens, proponents of this argument claim, are adequately prepared to participate in civic decision making around these challenges. According to a prominent report on education from the European Commission (1995, p. 28):
Democracy functions by majority decision on major issues which, because of their complexity, require an increasing amount of background knowledge. For example, environmental and ethical issues cannot be the subject of informed debate unless young people possess certain scientific awareness. At the moment, decisions in this area are all too often based on subjective and emotional criteria, the majority lacking the general knowledge to make an informed choice. Clearly this does not mean turning everyone into a scientific expert, but enabling them to fulfill an enlightened role in making choices which affect their environment and to understand in broad terms the social implications of debates between experts.
The democratic rationale revolves around what political and economic theorists call “the commons”: goods and resources that are not privately owned. Such goods and resources include the air, oceans, national parks, sanitation, water, public libraries, health infrastructure, and even accumulated scientific knowledge. In a democracy, managing public goods requires active civic engagement to sustain these resources and ensure their equitable distribution and public access. By engaging in such acts as deliberation, persuasion, and the donation of time and money, members of the public participate both in decisions about the use of scientific knowledge (e.g., ways of minimizing air pollution) and decisions about the allocation of resources to the production of scientific knowledge (e.g., supporting funding of stem cell research) (see Rudolph and Horibe, 2015).
The Cultural Rationale
The distinguishing feature of modern Western societies is science and technology, which together are among the most significant determinants of our culture. In order to decode that culture and enrich our participation in it (which includes protest and rejection), an appreciation and understanding of science is desirable.
This rationale is different from those above in that it invokes no extrinsic or utilitarian justification. From this perspective, the sciences are important cultural activities that offer a powerful way of understanding the world and should therefore be part of what it means to be liberally educated (Bereiter, 2002; Committee on the Objectives of a General Education in a Free Society, 1945; Hirsch, 1987; Hirst, 1965; Hirst and Peters, 1970).
Proponents of the cultural rationale point out that science and technology have transformed people’s view of the world: from a flat Earth to a spherical one, in which day and night are caused by a spinning Earth rather than a moving Sun, in which people look like their parents because every cell carries a chemically coded message about how to reproduce itself, and so on. Although this argument is deeply felt by many scientists and science educators, it is perhaps the least common of the four rationales and is often obscured by more utilitarian arguments.
The rationales described above provide context for how the term science literacy is defined. As Norris and colleagues (2014) note, definitions of both science literacy and health literacy invoke a valued direction or desired goal. For instance, the OECD’s Programme for International Student Assessment (PISA) report defines science literacy (OECD, 2012a) as the “ability to engage with science-related issues” and undertake “reasoned discourse about science and technology.” Such outcomes are not simply an issue of knowing more—rather the outcome is defined by what an individual might be able to do. Likewise, Shen’s definition of science literacy (which, as we will discuss in following sections, is the rhetorical basis of much of the measurement of science literacy) is not simply knowledge, but rather “the kind of knowledge which can be used to solve practical problems . . . such as health and survival” and a facility that “would bring common sense to bear upon such issues and thus participate more fully in the democratic process of an increasingly technological society” (Shen, 1975b, p. 48). Shen is emphasizing both the personal and democratic rationales for science literacy here, defining the term in the context of how this knowledge will be of benefit. In this section, we explore how shifting ideas about the value of science literacy have informed how the term has been defined.
Definitions of Science Literacy
The term “science literacy” has pervaded much of the public discourse about science education and public understanding since 1958 when it appears to have been coined twice, independently, by Hurd (1958) and McCurdy
(1958), as noted by Laugksch (2000). The phrase was coined as a means of expressing the disposition and knowledge needed to engage with science—both in an individual’s personal life and in the context of civic issues raised by the use of science and technology and by the production of new knowledge. Then, as now, there was mounting concern about the growth of science knowledge2 and the need for the public to engage with the political and moral dilemmas posed by scientific and technological advances. McCurdy (then president of the Shell Oil Corporation) argued that someone who was science literate would be able to “participate in human and civic affairs.” In practice, the term science literacy was used to make an educational case for teaching science to the “90% of all working people” who were not “potential scientists,” and who, it was argued, should experience a different kind of science education to enable them to achieve such a goal (Klopfer, 1969, p. 87).
Only 8 years later, the term had become so pervasive that Pella and colleagues (1966) in the Scientific Literacy Center at the University of Wisconsin–Madison identified six distinct types of understanding that were said to be essential to science literacy: the basic concepts in science; the nature of science; the ethics that control scientists in their work; the interrelationships of science and society; the interrelationships of science and the humanities; and the differences between science and technology. Definitions continued to proliferate and become more elaborate over time: 10 years after Pella and colleagues described their six types of understanding, Gabel (1976) constructed a matrix using Pella’s categories (now expanded to 8) on one dimension with 9 cognitive and affective objectives on another dimension for a total of 72 separate goals. As Roberts (2007, p. 737) points out: “Thus did scientific literacy become an umbrella concept with a sufficiently broad, composite meaning that it meant both everything, and nothing specific, about science education and the competency it sought to describe.”
One of the early definitions that has become influential, at least within the field of measurement of science literacy, is the definition offered by Shen (1975b, pp. 46-47), who differentiated three types of science literacy:
- Practical: “the kind of knowledge which can be used to solve practical problems . . . such as health and survival.”
- Civic: “to enable the citizen to become more aware of science and science related issues so that he and his [sic] representatives would bring common sense to bear upon such issues and thus participate more fully in the democratic process of an increasingly technological society.”
- Cultural: A motivation or “desire to know something about science as a major human achievement.”
Despite the widespread enthusiasm for science literacy, writ large, and the prominence of a few widely cited definitions, none of the fields concerned with science literacy have managed to coalesce around a common conception of what is meant by the term. Examining the concept 40 years after its inception, DeBoer (2000) identified no fewer than nine overlapping but distinct uses of the term (see Appendix A for more details), and ultimately argued that there was a lack of any universally shared understanding of science literacy other than as “a broad and functional understanding of science for general education purposes and not a preparation for specific or technical careers” (DeBoer, 2000, p. 594).
In the field of education, at least, the lack of consensus surrounding science literacy has not stopped it from occupying a prominent place in policy discourse. From the 1980s onward, science literacy was increasingly presented as a central goal of primary and secondary science education. For example, Science for All Americans, a prominent reform document published by the American Association for the Advancement of Science (1989, p. xvii) argued:
The science-literate person is one who is aware that science, mathematics and technology are interdependent human enterprises with strengths and limitations; understands key concepts and principles of science; is familiar with the natural world and recognizes both its diversity and unity; and uses scientific knowledge and scientific ways of thinking for individual and social purposes.
Ten years later, the UK policy report Beyond 2000: Science Education for the Future argued that “the primary and explicit aim of the 5-16 science curriculum should be to provide a course which can enhance ‘scientific literacy’” enabling students to, among other things, “express an opinion on important social and ethical issues with which they will increasingly be confronted” (Millar and Osborne, 1998, p. 2009). And although the recently published A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (National Research Council, 2012) avoids the term science literacy, it nevertheless suggests that by grade 12 students should be able to undertake a very similar set of aims—the difference being that these are more specifically defined. Thus, for instance, while students should be able to “read media reports of science or technology in a critical manner so as to identify their strengths and weaknesses” they should also be able to “explain how claims to knowledge are judged by the scientific community today and articulate the merits and limitations of peer review and the need for independent replication of critical investigations” (National Research Council, 2012, p. 73).3
Many of these outcomes overlap with the definition offered by PISA, which treats science literacy as a competency—that is, “the ability to engage with
science-related issues, and with the ideas of science, as a reflective citizen”—in that sense, a concept defined very much by the outcomes of being scientifically literate (Koeppen et al., 2008). According to the OECD (2013, p. 7), a scientifically literate person “is willing to engage in reasoned discourse about science and technology which requires the competencies to:
- Explain phenomena scientifically: Recognise, offer, and evaluate explanations for a range of natural and technological phenomena.
- Evaluate and design scientific enquiry: Describe and appraise scientific investigations and propose ways of addressing questions scientifically.
- Interpret data and evidence scientifically: Analyse and evaluate data, claims, and arguments in a variety of representations and draw appropriate scientific conclusions.
Interestingly, this definition, unlike many others, specifies that the knowledge required to undertake these acts includes not only content knowledge from the various sciences, but also knowledge about how scientists do their work and knowledge about how to make sense of science. While earlier definitions of science literacy sometimes focused on a simplified vision of scientific epistemology referred to as “the scientific method” (Rudolph, 2005; Windschitl, Thompson, and Braaten, 2008), these more recent documents evoke the iterative and social nature of scientific work, emphasizing practices like argumentation and model building in addition to the formulation and testing of hypotheses. After conducting a systematic search of the literature on science literacy, Norris et al. (2014) identified 74 articles containing a distinct definition of science literacy that they then sorted into three categories based on the goals and values inherent in them:
- States of knowing to be obtained—the nature and form of knowledge required.
- Capacities to be developed—the form of actions and competencies a scientifically literate individual should be capable of undertaking.
- Personal traits to be acquired, such as a positive attitude toward science and technology.
The definitions varied in the degree to which they emphasized each of these three goals, and, overall, there is no common agreement about the nature and definition of science literacy.
For some scholars, the key elements of science literacy have been neither knowledge nor capacities but, rather, a particular set of dispositions and habits of mind. This category is broad, including such sweeping ideals as open-mindedness, as well as more specific inclinations, such as a commitment to evidence (Norris et al., 2014; Siegel, 1988). For example, the OECD PISA definition of
science literacy includes an interest in science and technology, environmental awareness, and valuing scientific approaches to inquiry. The third part of the definition was seen as important because “scientific approaches to enquiry have been highly successful at generating new knowledge” (OECD, 2013, p. 37). Some researchers, such as Shamos (1995), have argued that it is far more reasonable to expect people to develop an appreciation for scientific inquiry, along with a sense of how and when scientific ways of gathering and analyzing evidence have proved particularly successful, than it is to expect them to master a wide range of scientific facts and principles. According to Shamos, science literacy schemes such as the Benchmarks for Science Literacy (American Association for the Advancement of Science, 1993, p. 151) are “doomed to fail, for at no time in the entire history of U.S. public school education has even this much knowledge of science been expected, or realised, of high school graduates.”
Valuing scientific approaches to inquiry, however, does not mean that an individual has to be positively disposed toward all aspects of science or even use such methods themselves. As Rogers warned as early as 1948, one should not assume that mere contact with science will make people think critically. The critical disposition that is the hallmark of most scientists when approaching their own science is acquired through long years of practice, and, with one or two notable exceptions (Goldacre, 2006; Lehrer, 2010), it is too often absent from both the communication of science and school science education (Henderson et al., 2015). As a society, people are good at communicating what they know but less good at communicating how they know it, in particular the central role of critique in establishing claims to knowledge (Popper, 1963; National Research Council, 2012).
The methodological challenge of including dispositions within science literacy is that previous research has often examined whether science literacy predicts certain attitudes or dispositions. From this perspective, including dispositions in a definition of science literacy borders on the tautological, as something cannot be both a necessary element of science literacy and a possible outcome of having or using science literacy. Despite this conundrum, the committee elects to include dispositions as a possible aspect of science literacy because they arise so frequently in the research literature. We summarize the various aspects of definitions of science literacy in Box 2-2.
Reading across the most prominent and influential definitions of science literacy, the committee identified elements that are common to many, if not all, definitions. The most basic of these ideas is that science literacy has value to the people who possess it, whether it solves civic and personal problems or makes the world a richer and more fascinating place, and that it should be understood in light of that value (Norris et al., 2014). Beyond this generally shared value, we identified a set of seven commonly proposed aspects of individual science literacy (summarized in Box 2-2). Some scholars would exclude one or more
of these aspects from their own definitions, and almost all people who study science literacy emphasize some of these aspects more than others.
Fourez (1997) argues for a somewhat different definition of science literacy. First, he introduces a technological component, arguing that science literacy is inextricable from technology (or technological) literacy. Second, he frames the goals of science literacy differently than is done in most other definitions, arguing that it comprises three central aims: individual autonomy, communication with others, and managing and resolving issues and challenges posed by science and technology. Fourez agrees that a basic knowledge of science and technology provides a certain degree of autonomy, but he points out that the question of what knowledge might be needed is key and must be conceptualized in light of the lives and needs of nonscientists. The knowledge that people need, he argued, is that which empowers them to communicate with others about their life situations, increasing their potential to act (Fourez, 1997, p. 906):
. . . their knowledge gives them a certain autonomy4 (the possibility of negotiating decisions without undue dependency with respect to others, while confronted with natural or social pressures; a certain capacity to communicate (finding ways of getting one’s message across); and some practical ways of coping with specific situations and negotiating over outcomes).
Fourez’s argument about the inextricability of science and technology literacy is an important one that deserves discussion. As various scholars have observed, the social problems and challenges that are associated with science in the public mind are often tied to particular technologies that science has made possible (Kleinman et al., 2005). Technological issues are likely to raise social, economic, ethical, and cultural challenges, and understanding and responding to these challenges requires knowledge of both science and technology. For instance, understanding the possibility that Apple and other phone manufacturers may decide to implement a form of encryption on phones requires some basic knowledge of what is meant by encryption, but it also raises a number of social and ethical issues about whether and when it is legitimate for any one manufacturer to do so.
A more complex example is provided by new bioengineering technologies, such as the very recently developed gene-editing technology CRISPR/Cas9, which has already raised ethical and regulatory concerns: see, for example, the statement about this technology from the National Institutes of Health (NIH).5 Clearly, there is some face validity to the claim that science literacy must also include technology literacy. Yet the distinction between science literacy and technology literacy is not well defined and “science literacy” is often used as an
umbrella term encapsulating both (see, e.g., National Research Council, 1996). Given time constraints and lacking a mandate to explore the nature of technology literacy, the committee chose to continue this common practice.
Defining Health Literacy
Most definitions of health literacy have focused on the capabilities individuals need to access and understand health information so that they can act on it. For example, in 1998, the World Health Organization defined health literacy as “the cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand, and use information in ways which promote and maintain good health” (World Health Organization, 1998, p. 10). Shortly thereafter, the American Medical Association (1999, p. 553) stated: “Patients with adequate health literacy can read, understand, and act on health care information.” Five years later, the Institute of Medicine (2004) published a consensus study on health literacy and focused on the capabilities needed for individuals to make appropriate health decisions.
Eight years later, Sørensen and colleagues (2012) conducted a content analysis of 17 health literacy definitions, observing that the components of the definitions appear to cluster around six primary concepts: (1) competence, skills, and abilities; (2) actions; (3) information and resources; (4) objective—what health literacy should enable someone or something to do; (5) context—the setting in which health literacy might be needed; and (6) time—the period within which health literacy is needed or developed. Based on this analysis, the authors propose the following “comprehensive” definition for health literacy (Sørensen et al., 2012, p. 3):
Health literacy is linked to literacy and entails people’s knowledge, motivation and competences to access, understand, appraise, and apply health information in order to make judgments and take decisions in everyday life concerning healthcare, disease prevention and health promotion to maintain or improve quality of life during the life course.
The authors expand on this definition to propose a conceptual model that encompasses both the “antecedents” (e.g., age, education, socioeconomic status, culture, societal systems) and “consequences” (e.g., risks to patient safety, poorer health outcomes, health costs) of health literacy.
Recently, there has been increasing attention to the social and physical context in which individuals engage in health activities. As Rudd et al. (2012, p. 26) argue, a more comprehensive definition of health literacy must “include both the abilities of individuals and the characteristics of professionals and institutions that support or may inhibit individual or community action.” Unlike earlier definitions that focus almost exclusively on personal decision making and action, this definition also incorporates a capacity for individuals to engage in health-related civic matters. Koh and Rudd (2015, p. 1226) note that the “arc
of health literacy bends toward population health” and point to an approach to the concept that includes consideration of social organizations and systems as well as individual capacity.
A recent perspective written by members of the National Academy of Medicine’s Roundtable on Health Literacy argues that the old consensus on health literacy is being challenged in interesting and productive ways and that the field “needs to come to a new consensus on the components of a definition of health literacy” (Pleasant et al., 2016, p. 1). They note that health literacy is multidimensional and that it operates in a wide variety of settings and media. According to the authors of this report, components of a new, more comprehensive definition should include four elements: (1) system demands and complexities as well as individual skills and abilities; (2) measurable components, processes, and outcomes; (3) potential for an analysis of change; and (4) a clearer and more empirically sound linkage between informed decisions and action. In order to have a better understanding of how to improve health status among populations, investigators must have available to them measurement tools that are based on a sound multidimensional definition of health literacy (Pleasant et al., 2016).
We note that the third element, “potential for an analysis of change,” means that the definition itself must be open to the ways in which it will inevitably evolve. Pleasant et al. (2016, p. 4) wrote:
[T]he field of health literacy has come to realize that health literacy is malleable and can change for each person, health professional, or health system for a wide variety of internal and external reasons. A definition of health literacy must become open to that change. Doing so will support and allow researchers to begin to explore how and why change in health literacy occurs.
Including this component in the definition compels the field to regularly consider the ways in which health care needs change over time.
As we have noted in our discussion of the rationales for science literacy, definitions of science literacy invoke a desired goal and are therefore framed by which rationale or rationales (i.e., economic, personal, democratic, or cultural) the definer is prioritizing at the time (Norris et al., 2014). In the case of health literacy, however, the desired goal implicit in the definitions cited here is the promotion and maintenance of good health for individuals, communities, and societies (World Health Organization, 1998). Although these definitions have immediate implications primarily for personal (and community and social) well-being, the promotion and maintenance of good health is also a necessary precursor to participation in economic, democratic, and cultural systems.
A comparison of the research on science literacy and the research on health literacy reveals some overlap. The capacity for civic engagement, which has long been a concern for scholars of science literacy, is emerging as a potential component of health literacy. In contrast, science literacy has only recently started to focus in concrete ways on empirical links to decisions and action—a characteristic emphasis of research and writing on health literacy. Both fields are paying increasing attention to social systems and the way they constrain and enable literate action. In summary, although the two constructs have evolved separately, there is some evidence that the researchers and practitioners who deal with science and health are struggling with many of the same challenges.
In this section we consider the development of these measurements and how the measurements have not evolved at the same pace as the definitions. As a result, the field faces a concept that cannot yet be fully assessed.
The dominant approach to conceptualizing and measuring science literacy in population surveys has arisen out of work by Jon D. Miller and Kenneth Prewitt in the United States (see Miller, 1983, 1998, 2004) alongside collaborators in Great Britain (see Durant et al., 1989). Underlying these efforts appears to have been widespread concern among policy makers and the scientific community that nonscientists were becoming skeptical about the benefits of science and that such skepticism might result in cuts to science funding that would harm the scientific progress that many argue underpins both American and European economic development (Bauer et al., 2007). The results of the U.S. portion of this work have formed the core of a chapter of a biennial report called Science and Engineering Indicators (hereafter, Indicators) that the National Science Board provides to Congress and the Executive Branch. Scholars have also used the raw data collected for Indicators (which is made publicly available) for peer-reviewed research (e.g., Gauchat, 2012; Losh, 2010), and other countries have used many of the Indicators’ questions for their own national surveys (e.g., Bauer et al., 2012a; National Science Board, 2016).
Miller (2004) has written that the current approach to assessing scientific inquiry in surveys began when he and Kenneth Prewitt rewrote a question used by the National Association of Science Writers in 1957 (Davis, 1958) for a 1980 report to the National Science Foundation (NSF) and analyses that appeared in a later journal article (Miller, 1983).6 This question, which continues to be used, involves asking survey respondents to say whether they feel they have a clear
understanding of what it means to study something scientifically.7 If the respondent says “yes,” that respondent is then asked to describe this understanding in his or her own words. These responses are coded using standard procedures. Two additional questions were also added in later years to further assess an understanding of what has been called the “scientific approach” (Miller, 1983) and, later, the “nature of scientific inquiry” (Miller, 1998). The first was added in the 1988 Indicators survey and seeks to assess a basic understanding of probability using a multiple-choice question (National Science Board, 1989).
The 1995 survey for Indicators then added a two-part question aimed at assessing knowledge of scientific inquiry that had been piloted in a 1992 study for NIH (National Science Board, 1996). This new question first asked respondents a closed-ended question about the best way to test a drug and followed it with an open-ended question about why they thought their method was best.8
In addition to knowledge of scientific inquiry, the NSF with Miller also added a battery of true/false and multiple-choice questions aimed at assessing knowledge of basic science concepts to the Indicators survey in 1988 (National Science Board, 1989). These “Oxford Scale” science knowledge questions were developed in collaboration with researchers in the United Kingdom (see Durant et al., 1989; Evans and Durant, 1995). In another project, the questions were used as part of a multicountry European survey (Bauer et al., 1994). The science concept questions focused on stable, established areas of knowledge that the survey developers believed would continue to be relevant over time; 11 of the original questions continue to be used (National Science Board, 2016). Other countries have also adopted many of these items in various venues (see National Science Board, 2016, Table 7-3), including multiple European surveys between 1989 and 2005 (Bauer et al., 2012a). Box 2-3 lists the process knowledge questions and Box 2-4 lists the factual questions currently in use by Indicators.
Conceptually, Miller (2004, p. 273) has generally argued that a scientifically literate citizen is someone who has both “(1) a basic vocabulary of scientific terms and constructs; and (2) a general understanding of the nature of scientific inquiry.” He writes that the focus on scientific constructs emerged out of a focus in standardized testing in the late 1960s and 1970s, while the focus on the nature of inquiry emerged from efforts around the same time to operationalize the idea of a scientific attitude as described early in the 20th century by John Dewey (1934) and in research related to high school in Wisconsin (Davis, 1935; Miller, 1983). In his early work, Miller (1983) argued that someone who
is science literate should further know “both general information about the impact of science on the individual and society and more concrete policy information on specific technological issues” (Miller, 1983, p. 35), but this additional dimension has not been a focus of Indicators or of substantial work in other countries (but see Bauer et al., 2000). Also, drawing on Shen (1975b), the focus on constructs and inquiry is meant to apply primarily to “civic” science literacy, which Miller describes as the type of science knowledge that might be needed by a citizen to take part in public life, including following news in media outlets such as The New York Times (Miller, 2004, p. 274).

8Prior to 1988, construct knowledge had sometimes been measured for Indicators using self-reports in which respondents were asked to indicate their level of understanding of such issues as radiation, or by asking respondents to list benefits and risks associated with specific technologies (Miller, 1983).
Measurement Validity of the Standard Measures
Methodologically, it is important to recognize that the specific questions used to measure knowledge about scientific inquiry and concepts have always been understood by their developers to represent measures of underlying “latent” constructs. The implication is that an entirely different subset of questions could have been selected and would provide similar results, and any such subset could serve as a proxy for the underlying construct. To the extent that this is true, it is not necessary to ask every possible question to capture the underlying construct. All scale development in social science research builds from this idea, which is roughly analogous to surveying a sample of the population instead of the entire population (see DeVellis, 2003).
Efforts to create longer measures (e.g., the 110-question measure proposed by Laugksch and Spargo, 1996) or measures that better capture specific concepts—such as a wider range of scientific knowledge deemed important for the public, particularly the methods of inquiry or the procedures for validating knowledge claims—miss the point. As Sturgis and Allum (2006, p. 333) argue:
Confusing the contents of the measurement instrument with the attitude or trait underlying responses to it is a common mistake among critics of quantitative approaches to [public understanding of science]. But, as Philip Converse has remarked, it does not take much imagination to realise that knowledge of minor facts … are diagnostic of more profound differences in the amount of contextual information citizens bring to their judgments.
An important related issue is that the development of measures for survey use—when the number of questions that can be used is limited by the amount of time one can realistically ask someone to spend completing a survey—requires a higher level of abstraction than the type of measures that might be used in formal education settings. Open-ended questions are also problematic because it is both expensive and time intensive to reliably code the responses. As a result, instruments such as the “nature of science” batteries developed by Lederman (2007) for use with students and teachers are not suitable for survey research in public settings.
One important change to the items that Miller popularized occurred in the 2010 version of Indicators, when the National Science Board decided to reduce the battery of questions used to track factual knowledge from 11 to 9 items, removing the questions on evolution and the big bang. The decision to remove these items was based on research suggesting that they were effectively assessing religiosity, rather than factual science knowledge, among the U.S. population (for a discussion, see National Science Board, 2012, p. 7-20). The proportion of U.S. respondents giving a correct response to these questions was much higher when the questions were given alternate wording that did not require respondents to personally endorse evolution: respondents were asked to respond “true” or “false” to “According to the theory of evolution, human beings, as we know them today, developed from earlier species of animals,” rather than to “human beings, as we know them today, developed from earlier species of animals” (National Science Board, 2016). The questions continue to be asked, but they are not included in the composite scale. It is important to note that the validity problem associated with the “evolution” and “big bang” items is culturally dependent: these items are not always problematic when included in factual knowledge scales in other countries. We also note that, in line with Miller’s conceptualization of science literacy (and as discussed above), Indicators also includes measures of attitudes toward science, as well as interest in science (see Chapters 3 and 5 for further discussion).
More recently, Kahan (2015, in press) has suggested that the current Indicators’ items—including both the factual knowledge questions and the process questions—could likely be combined into a single measure, but that the available questions are too easy (see also Pardo and Calvo, 2004). He argues that although the standard measures do a reasonable job of differentiating individuals with low levels of science knowledge from those with medium levels, they seem to be less useful in differentiating people with medium knowledge from people with high knowledge. Kahan (in press) has thus proposed a new measure that includes some of the more difficult NSF items and adds one item from a list of questions used by the Pew Research Center (see, e.g., Funk and Rainie, 2015), as well as several questions from a short form of a well-established numeracy scale (see, e.g., Weller et al., 2013).
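The difficulty problem can be made concrete with a toy item response model. The sketch below is hypothetical and is not Kahan's actual analysis: it assumes a simple logistic model in which the probability of a correct answer depends on the gap between a respondent's ability and an item's difficulty, and all numeric values are invented for illustration.

```python
import math

def p_correct(theta, difficulty):
    """Probability of a correct answer under a simple logistic response model."""
    return 1 / (1 + math.exp(-(theta - difficulty)))

# Illustrative ability levels (in standard-deviation-like units) and item difficulties.
abilities = {"low": -1.0, "medium": 0.0, "high": 1.5}
items = {"easy item": -2.0, "hard item": 1.0}

gaps = {}
for item, d in items.items():
    probs = {label: p_correct(t, d) for label, t in abilities.items()}
    # How well does this item separate medium-ability from high-ability respondents?
    gaps[item] = probs["high"] - probs["medium"]
    print(item + ": " + ", ".join(f"{k}={v:.2f}" for k, v in probs.items())
          + f"  (high vs. medium gap: {gaps[item]:.2f})")
```

Under these invented numbers, the easy item is answered correctly by most respondents at every ability level, so the difference in correct-response rates between medium- and high-ability respondents is small; the hard item produces a much larger gap. This is why a scale built entirely from easy items can sort the low end of the distribution but compresses differences at the top.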
Other researchers have also put forward general measures meant to tap specific aspects of science knowledge or science literacy. Most of these, however, have yet to receive substantial additional use by researchers other than those who created them. Among the most prominent efforts is the work conducted for the Pew Research Center by Funk and Rainie (2015), who attempted to construct their own measure of science knowledge; no validation work on this effort appears to have been published to date. In addition, arguing that science literate individuals need to be able to understand what is present in normal civic discourse, Brossard and Shanahan (2006) created a measure that directly assesses knowledge of the types of scientific terms used most often in the news media.
Recent examples of efforts to assess additional dimensions of scientific understanding include work on a scientific reasoning scale that attempts to capture understanding of key concepts associated with how science works (Drummond and Fischhoff, 2015), as well as a measure to assess the degree to which respondents understand the uncertainty of scientific evidence (Retzbach et al., 2015). (See chapters below for our examination of the relationship between survey measures of scientific knowledge and desired outcomes such as attitudes toward science or particular sorts of individual action.)
In addition to general science knowledge, it is also easy to imagine an infinite range of measures aimed at capturing knowledge of specific scientific areas or domains. However, whereas the public health community continues to create a broad range of measures aimed at capturing knowledge about specific
health conditions (Boston University, 2016), only limited research of this sort has been done on other aspects of health or science. As Sturgis and Allum (2006) discuss, one of the few frequently used measures is a set of 10 true/false questions aimed at assessing people’s knowledge of genetics and biotechnology. This measure appears to have been initially developed for the Eurobarometer (European Commission, 1997; Gaskell et al., 2003), but the questions have also been used in other countries, including the United States (e.g., McComas et al., 2014; Priest et al., 2003). There have also been less expansive efforts to assess knowledge about nanotechnology using six questions (Lee et al., 2005) or just two questions (National Science Board, 2012). More recently, efforts have gone into trying to assess knowledge about climate change (e.g., Hart et al., 2015; Kahan, 2015) and energy (e.g., Cacciatore et al., 2012a; Funk and Rainie, 2015). Efforts to create more general environmental literacy measures are also in progress (Shephard et al., 2014; Zwickle et al., 2014), though there is not yet a standard measure.
Measuring Health Literacy
Survey-based instruments for measuring health literacy have proliferated over the past decade. At least 112 instruments have been developed, suggesting that there is no consensus on which measure to use and no “gold standard.” The Health Literacy Toolshed9 includes information on these instruments, which focus on a broad range of health contexts and specific health conditions and which measure a variety of competencies. These include pronunciation (20 measures); communication, including listening (5 measures) and speaking (2 measures); numeracy (55 measures); information seeking (39 measures); and skills related to the application and function of health information (19 measures).
Health literacy measures are used in a variety of ways. Clinicians may use them to assess a patient’s health literacy level prior to or at the beginning of a health care visit. Researchers may seek to improve health literacy directly, measuring it before or after implementing an intervention, or they may focus on examining the impact of an intervention on behavior, using health literacy as an independent or control variable.
A recent review of health literacy measures notes that existing measures vary in the “dimensions that they measure and the level of psychometric rigor to exhibit various aspects of validity” (Haun et al., 2014, p. 327). The authors conclude that there is still no “single rigorously validated health literacy measure that addresses the full range of dimensions” (p. 327) that characterizes health literacy. As health literacy definitions evolve to include the demands of health care systems, there is a concomitant need for measurement tools that assess not only individual capabilities, but also the demands of health materials, the communication skills of health care professionals, and the expectations and assumptions of health care environments (Rudd et al., 2012).
According to a systematic review by the Agency for Healthcare Research and Quality (AHRQ) (Berkman et al., 2004), most studies of adult health literacy used three instruments: the REALM (Rapid Estimate of Adult Literacy in Medicine) (Davis et al., 1993), the TOFHLA (Test of Functional Health Literacy in Adults) (Parker et al., 1995), and the WRAT (Wide Range Achievement Test) (Jastak and Wilkinson, 1984). However, rather than measuring health literacy directly, these instruments were actually measuring aspects of foundational literacy.
The first national assessment of adult literacy to include a specific focus on health was the National Assessment of Adult Literacy (NAAL) in 2003. NAAL measured prose, document, and quantitative literacy in relation to health-related tasks, including clinical preventive health issues and navigation of the health system. The European Health Literacy Project administered a health literacy questionnaire in eight European nations in 2011 (Sørensen et al., 2015). The questionnaire examined self-reported difficulties in accessing, understanding, appraising, and applying information in tasks related to health care decisions, disease prevention, and health promotion. Additional items addressed health behaviors, health status, health service use, and health promotion.
Early in the committee process, members repeatedly questioned the common understanding that science literacy is, or should be seen as, solely a property of individuals—something that only individual people develop, possess, and use. The consensus that emerged from these discussions was that research on individual-level science literacy provides invaluable insight, but that such research, on its own, offers an incomplete account of the nature, development, distribution, and impacts of science literacy within and across societies. Instead, the committee agreed that science literacy can usefully be studied at different levels of social organization; that research on science literacy at the level of individuals should be complemented by efforts to examine how science literacy emerges in communities of people working together; and that both the nature and effects of science literacy cannot be extricated from the social systems within which they are embedded.
This committee’s consensus perspective should not be interpreted as a repudiation of the research that examines science literacy among individuals—by far the majority of research that explicitly focuses on science literacy—as this report details (see, in particular, Chapter 5). Given the committee’s consensus, it is still possible to see individuals as developing, possessing, or using science literacy. But it also becomes possible to see individuals as nested within
communities, contributing their knowledge and skills to collective actions that appear to be science literate, even when any given individual has very limited knowledge of science. In this light, communities not only enable science literacy at the individual level; under certain circumstances, they can be thought of as possessing science literacy themselves. Finally, the committee agreed that it is essential to consider how an individual’s or community’s position in society affects when and how it is important for them to be science literate and what forms of science literacy will enable them to achieve their personal and civic goals.
Because the committee sees the smaller units of analysis as being nested within the larger ones (individuals within communities that are within the social structures of their societies), we have taken the unusual step of beginning with the largest level—the bird’s eye view afforded by comparisons across societies and across groups and structures in society. From here, we shift our attention to science literacy at the level of communities, and then we turn our attention to individuals. This report structure means that most of the research from the field of science literacy (the research that may be most familiar to our scholarly readers) appears toward the end of the report (in Chapter 5), while the preceding chapters draw in scholarship from fields where science literacy is a less common frame. One of the primary research challenges that lies ahead—a challenge that is articulated but not addressed in this report—is understanding how these different levels of analysis can fruitfully be brought together.
The committee’s review of the history of the definitions of science literacy reveals a shifting landscape in which science knowledge has emerged as only one component of a larger and more nuanced construct. Health literacy, too, has evolved, in ways that suggest new potential for synergy between research on health literacy and science literacy.
CONCLUSION 1 The committee identified many aspects of science literacy, each of which operates differently in different contexts. These aspects include (but may not be limited to): (1) the understanding of scientific practices (e.g., formulation and testing of hypotheses, probability/risk, and causation versus correlation); (2) content knowledge (e.g., knowledge of basic facts, concepts, and vocabulary); and (3) understanding of science as a social process (e.g., the criteria for the assignment of expertise, the role of peer review, the accumulation of accepted findings, the existence of venues for discussion and critique, and the nature of funding and conflicts of interest).
CONCLUSION 2 Historically, the predominant conception of science literacy has focused on individual competence.
CONCLUSION 3 Foundational literacy (the ability to process information—oral and written, verbal and graphic—in ways that enable one to construct meaning) is a necessary but not sufficient condition for the development of science literacy.
CONCLUSION 4 Concerns about the relationship of health literacy to health outcomes have led to a reconceptualization of health literacy as a property not just of the individual but also of the system, with attention to how the literacy demands placed on individuals by that system might be mitigated.
Science literacy looks quite different depending on what is being demanded and from whom. As the committee illustrates in this report, science literacy can no longer be narrowly identified with the abilities (or limitations) of an individual. Using a broader conceptualization would expand the ability to analyze science literacy at the community and system or societal levels.
Given this expanding understanding of science literacy, it is critical to be able to accurately assess claims about the level and the consequences of science literacy for individuals, communities, and society. If science literacy is more than just the extent of an individual’s science knowledge, is the best means of enhancing it a focus on individuals? This report addresses this question in context by investigating the tensions between the shifting definitions of science literacy and the way it has been measured over time and by considering how those metrics have been used to understand whether (or not) science literacy matters for individuals, communities, and society.