Although the public has a generally positive attitude toward science and scientists, specific contentious issues with a science component often become controversial. As noted in Chapter 1, some of that public controversy stems from the fact that the science itself is inconclusive, and some from a disconnect between what science shows and either long-held common-sense perceptions or deeply held moral, ethical, or social values. Often the moral, ethical, or social implications of using science to develop or deploy a technology or to make a particular decision can be more contentious than the scientific findings themselves (Sarewitz, 2015). Public debate about the issues—among the scientific community, policy makers, and citizens—can help uncover common ground among people holding diverse sets of values. Although this is not always the case, clear information from science can enable people to make more informed choices. Healthy debate also can strengthen the science, challenging its claims and leading to a push for better forms of evidence.
When public controversies with a science component (science-related controversies) result from uncertainty—such as that which arises from either an inconclusive set of scientific findings or disagreements within the scientific community about how to interpret the results of science or how the results should be communicated—clear and accurate messages about the
state of the science can be distorted and difficult to discern.1 As discussed in Chapter 2, communicating science is almost always a complex task in part because scientific information and its implications are understood, perceived, and interpreted differently by different individuals, social groups, communities, and decision-making bodies. This phenomenon is not unique to science, but is important because it makes the process of communicating science difficult. In communicating about science-related controversies, such factors as conflicting values, competing economic and personal interests, and group or organizational loyalties can become central to a person’s individual decision or to a decision on public policy.
Many parties—including corporations, advocacy and nongovernmental organizations, government agencies, and scientists themselves—typically are involved in debates about science-related controversies. The decisions made on these issues often involve corporate policies, laws and regulations, and international agreements. The high stakes of those decisions can pit the competing interests and political control of the various players against one another (Lupia, 2013; Nelkin, 1992; Nisbet, 2014). Organized interests other than science can play a large role in communicating about science related to contentious issues, and potentially influence people’s judgments about science and the relevant scientific information.
To communicate science effectively under these conditions, an understanding of the factors discussed in Chapter 2 is insufficient. It is important to understand how and why science becomes part of public controversy and the forces that affect how people encounter, interpret, and use scientific information in these circumstances.
Controversies involving science exist, like all controversies, in particular historical, geographic, and social contexts. The political commitments, culture, history, and religion of an audience will affect its perceptions of science in general and of the scientific information related to an issue in particular. National cultures, for example, have varying effects on how people interpret and react to science (Bhattacharjee, 2010; Miller et al., 1997; Scheufele et al., 2009). This is why a scientific issue can be controversial in the United States but not in Europe, as is the case with climate change, or the reverse, as is the case with the introduction of genetically modified organisms (GMOs) in food. This phenomenon is not confined to nation-states. Organizations and civic institutions (i.e., nongovernmental groups that may include community organizations and professional, religious, political, consumer, activist, or charity organizations) involved in science-related controversies are not merely collections of individuals but have properties and dynamics of their own that, as noted in Chapter 2, affect how their members perceive information and make decisions.

1 A lack of clarity about how to interpret scientific findings also can be due to problems within the scientific community that may include publication bias, misuse of statistics, issues of replication, stating conclusions beyond the data presented, and using causal language when not justified by the study design (see, e.g., Boutron et al., 2014; Gelman and Loken, 2013; Ioannidis, 2005).
The diverse nature and origins of science-related controversies defy simple classification and offer few neat comparisons. Much of what can be understood about these controversies comes from the historical record, case studies, ethnography, and other descriptive work. Still, despite their variety and their roots in historical and cultural circumstances, science-related controversies often share three features:
- Conflicts over the beliefs, values, and interests of individuals and organizations, rather than simply a need for scientific knowledge, are central to the debate.
- The public perceives uncertainty either in the science or in its implications, or as a result of communicators making different and sometimes contradictory statements in the public sphere.
- The voices of organized interests and influential individuals are amplified in public discourse, making it difficult for the state of the scientific evidence to become clearly known.
These three features present major challenges to communicating science effectively under conditions of controversy and are discussed in turn in the sections below. In each case, the discussion draws on examples from the research literature and points both to implications for communicators and to directions for future research.
Most public controversy, whether or not related to science, arises from the conflicting concerns of individuals and organizations. Controversy can be rooted in differing beliefs and values; personal, political, social, and economic interests; fears; and moral and ethical considerations—all of which are central to decisions and typically subject to public debate. When scientific evidence is relevant to a contentious issue, informing the public about it is critical. It is important to realize, however, that even if the science is clear, people may already understand it but still disagree with its implications. Some people, for instance, may recognize the benefit of a
new technology but believe it is not worth its risks or judge its use to be ethically questionable.
Because a controversy can involve high-stakes commitments and interests, significant effort can be expended to rally the public. In a number of documented instances, organized interests outside of science have sought to protect their interests by promoting polarization on a controversy (alignment of members of the public along inflexible and diametrically opposed opinions) (e.g., Brownell and Warner, 2009; McCright, 2000; McCright and Dunlap, 2003; Michaels, 2006, 2008; Michaels and Monforton, 2005; Oreskes and Conway, 2011). Like other citizens, scientists and their organizations also have economic, professional, and personal interests that they may defend and promote, a fact that may influence the focus of their communication efforts.
As discussed in Chapter 2, when science and emerging technologies challenge people’s beliefs and threaten deeply held values, their attitudes toward science and scientific information can be affected (Blank and Shaw, 2015; Lupia, 2013; McCright et al., 2016).2 Across different science-related controversies, from climate change to GMOs to nanotechnology to genetic testing, people on opposite sides of the political spectrum who are demonstrably knowledgeable about the science may have opposite perceptions of or expressed support for the science (Cacciatore et al., 2012; Hart and Nisbet, 2012; Ho et al., 2008; Hornsey et al., 2016; Nisbet, 2014) and less trust in the scientists who conduct it (Pew Research Center, 2016b). The relationship between partisanship and perception or support also may be affected by people’s patterns of media use (Cacciatore et al., 2012; Ho et al., 2008). In addition, information intended to persuade people to adopt particular practices or behaviors can have the opposite effect, and the effects of such messages can differ across groups (Asensio and Delmas, 2015; Byrne and Hart, 2009; Dietz, 2013a; Gromet et al., 2013; Nyhan et al., 2014). Gromet and colleagues (2013) found, for example, that the effectiveness of various frames for advocating energy efficiency varied depending on a person’s political ideology (frames are discussed in Chapter 2). Examples from health (e.g., the human papillomavirus [HPV] vaccine and mammography) illustrate that an issue can be politicized when it becomes linked with political actors, partisan policies, or politically framed news about the issue, and that confidence in doctors and support for health policies and programs can be affected as a result (Fowler and Gollust, 2015).

2 An extensive body of research addresses how to conceptualize values, beliefs, attitudes, and norms; how to measure them; and the interrelationships among them (Dietz, 2015; Oskamp and Schultz, 2005; Schultz et al., 2007; Steg and de Groot, 2012). Much of this literature has implications for understanding the effects of different approaches to science communication. For brevity, the phrase “values, beliefs, and interests” is used here to encompass the fuller set of social-psychological factors that influence understanding of science, acceptance of scientific messages, and the use of scientific information in decision making.
In some science-related controversies, religious beliefs and values play a more central role than political ideology (National Research Council, 2008; Pew Research Center, 2015b). When the science concerns the origin of the human species or the universe, for example, people may feel that their religious views are being challenged, and their expressed views may reflect their faith, consistent with their interpretation of the Bible, rather than their understanding of science (National Research Council, 2008; see also, e.g., Berkman and Plutzer, 2010). Citizens who choose their faith commitment over scientific accounts are not necessarily denying the science per se and may be well aware of the relevant scientific views (National Science Board, 2016b). In a 2012 survey, for example, half of respondents were asked to answer true or false to the statement, “Human beings, as we know them today, developed from earlier species of animals.” Among these respondents, 48 percent answered “true.” The other half of the respondents were asked to answer true or false to the statement, “According to the theory of evolution, human beings, as we know them today, developed from earlier species of animals.” Among these respondents, fully 74 percent answered “true” (National Science Board, 2014, 2016b; see also, e.g., Berkman and Plutzer, 2010).
Various explanations have been offered for the polarization often surrounding science-related controversies. One such explanation, discussed in Chapter 2, is motivated reasoning: people tend not to adopt explanations that conflict with their long-held views or values. A related explanation is that cultural biases—specifically, preferences for equality versus authority and for individualism versus community—influence people’s risk perceptions and related beliefs (e.g., Kahan et al., 2009). According to this interpretation, people preserve their identities as members of social groups that adhere to certain cultural values to the extent that they will not adopt positions counter to those they believe are held by members of their group. It may be, however, that much of the American public has more moderate allegiances and views and thus is not subject to this effect of group values. Further, a number of studies suggest that individuals’ values predict their risk perceptions (Dietz et al., 2007; Slimak and Dietz, 2006; Whitfield et al., 2009). This research needs to be expanded and further integrated to determine its utility in predicting people’s responses to different approaches to communicating science.
Research has identified several strategies that can be used to mitigate the effects of competing beliefs, values, and interests on science communication. These strategies include tailoring messages from science for understanding and persuasion and engaging the public.
Tailoring scientific messages for different audiences is one approach to avoiding a direct challenge to strongly held beliefs while still offering accurate information. People tend to be more open-minded about information presented in a way that appears to be consistent with their values (Corner et al., 2012; Kahan et al., 2010; Lord et al., 1979; Maibach et al., 2010; McCright et al., 2016; Munro and Ditto, 1997). Using this approach can help build trust and credibility, but communicators of science need to avoid conflating scientific understandings with their own values (for further discussion of this issue, see Dietz [2013a]). They cannot assume that their information conveys a moral imperative or presume that their own values are universal.
Tailoring strategies have in some cases drawn on research in social marketing and audience segmentation (Bostrom et al., 2013) to persuade people to change their attitudes or perceptions or to take action for the public good based on established scientific evidence (e.g., improving vaccination rates for the sake of public health as well as of individual children). The practice of dividing a large potential audience into subgroups and tailoring messages differently for each subgroup is termed “audience segmentation.” Although most efforts to communicate science using these methods have taken place in the field of health communication (Slater, 1996), information about people’s values and beliefs has in some cases been used to craft messages so as to motivate people to adapt to climate change or adopt views consistent with scientific consensus. However, research in this area is just emerging (Hine et al., 2016; Maibach et al., 2011; Moser, 2014). Other research has focused on tailoring messages to communicate information about the environment (e.g., Spartz et al., 2015a; Witzling et al., 2016).
Research on audience segmentation methods needs to be replicated and extended for researchers to understand how much of an effect science communication can have, for whom, and in what contexts. A related issue is how tailored messages designed to persuade people to adopt scientifically supported positions might affect their perceptions of scientists and scientific information.
One approach to communication in the context of science-related controversy, discussed in Chapter 2, is to engage the public in formal processes for participating in decisions on such issues. Public participation is a formal process for engaging the public that is often adopted by elected officials, government agencies, and other public- or private-sector organizations to increase the public’s involvement in assessment, planning, decision making, management, monitoring, and evaluation (National Research Council, 2008). Research on public participation has examined the conditions and practices that can make it a successful approach for adjudicating differing views. Research on this topic varies in scope, with some areas studied far better than others. Studies by the National Academies have examined procedures for public participation in the assessment of risk and in decision making related to technology and the environment for more than two decades (National Research Council, 1996, 2008). In a review of the literature covering roughly 1,000 studies related to environmental issues, a National Academies report offers principles of effective public engagement and concludes:
When done well, public participation improves the quality and legitimacy of a decision and builds the capacity of all involved to engage in the policy process. It can lead to better results in terms of environmental quality and other social objectives. It also can enhance trust and understanding among parties. Done well, processes that foster trust and that address the concerns of affected stakeholders can be effective in diminishing controversy around science in the public sphere. (National Research Council, 2012, p. 226)
To be most effective, public engagement needs to be undertaken as early as possible in a public debate, and those with a stake in the issue need to be engaged over many rounds of back-and-forth communication with each other. Often the first step is for scientists and interested and affected parties to work together to identify topics of concern and assess the degree to which research can clarify those concerns. Repeated deliberation over time builds trust among diverse participants—an approach that is much more successful than inviting participation after a conflict has emerged and intensified. In some such cases, participation processes have reestablished trust, but communication remains more difficult than when public participation is initiated early on. Other potential pitfalls in the public participation approach include perceived political manipulation of the process, a lack of fairness in the relative amount of attention given to different participants, and stakeholders who work only toward trivial or undesirable results (National Research Council, 2008). Further, some claims may be perceived as majority views simply because they are made loudly and frequently
(Binder et al., 2011). This effect has been shown to influence policy decisions (Binder et al., 2012), and science communicators may need to take steps to clarify or counter it. The gender mix of participants also can influence deliberation, although careful design of the process can counter these effects (Karpowitz et al., 2012; Mendelberg et al., 2014). In general, for public engagement to achieve its potential, care must be taken to design a process that is attentive to the character of the issues at hand and that takes into account the strengths and weaknesses of individual thinking and group interactions (for further discussion of these factors, see Lupia, 2002; Lupia et al., 2012; National Research Council, 2008).
Systematic reviews of research on public participation focus mainly on environmental assessment and decision making. Additional synthesis and research are needed to identify the elements of structures and processes for communicating science effectively in public forums across a range of social issues (e.g., biomedical research, health policy, gene editing, education policy) and types of controversies. Further, given that best practices in public engagement suggest that it take place early on, research is needed to examine to what extent and in what ways communicating science in formal public participation processes can be effective once an issue has become contentious and the science related to the issue controversial.
As noted in Chapter 1, some science-related controversies arise because the science around a topic is or is perceived by many to be uncertain or unclear. This uncertainty can stem either from uncertainty about the science itself, when no scientific consensus exists, or from people’s mistaken impressions about the degree of certainty within the scientific community (see Chapter 2 for a discussion of sources of scientific uncertainty). Motivated reasoning (also discussed in Chapter 2) comes into play frequently when people are exposed to ambiguous statements and then interpret the ambiguity to support their own long-held views (e.g., Budescu et al., 2009, 2014; Dieckmann et al., in press).
As noted earlier, some controversies may reflect at least in part disagreement within the scientific community about where the weight of evidence lies. Among these controversies are disputes about the causes and impacts of obesity, the relative merits of technologies for responding to climate change (such as carbon capture and storage and solar radiation management), the social and health impacts of vaping, and the academic
consequences of introducing more market-like policies (such as vouchers or charter schools) in education.
Controversies involving scientific uncertainty also can hinge on whether the science is adequate to determine cause and effect or to predict future risks or benefits. Examples of these kinds of controversy are disputes over the connection between diet and chronic disease; the debate over the risk of radiation exposure from cell phones; and disagreements over whether the benefits of electricity production from nuclear power outweigh the risks (Visschers and Siegrist, 2013). Many such disagreements remain within the scientific community or at most spread to a broader but still relatively small and attentive audience of interest groups, specialists, or policy networks. In other cases, however, particularly those involving issues with wide societal implications or issues, such as food and nutrition, that affect personal interests, more of the public begins to encounter and pay attention to the relevant science and invoke it when participating in public debate. Examples include controversies involving energy and environmental policy, stem cell therapy, and gene editing technology. In such cases, a debate once confined to scientists and a few onlookers comes to include more diverse actors and influencers, such as public officials, commercial interests, the media, social media commentators, celebrities, and various organized interest groups. These participants often state strong opinions about whether science is adequately equipped to assess benefits and risks and make forecasts with respect to an issue.
Uncertainty within science also can occur because a field of study is relatively new, and much remains unknown. Emerging evidence can vary in its quality, or it can be limited by the methods, samples, or contexts involved in its collection or by the amount of evidence that has accumulated and been replicated. More often than not, in the absence of clear consensus, the public and scientists themselves must determine what to believe and what choices to make given the state of the evidence as they understand and interpret it. For example, scientific disagreements about the potential of particular substances to cause cancer can fuel public controversy over exposure to those possible carcinogens (Löfstedt, 2003).
Another form of uncertainty-driven controversy can emerge from disagreement about the ethics, uses, or estimated impacts of an emerging technology when certain consequences must be forecast, given the inherent lack of certainty that forecasting entails. Similarly, uncertainty can result when scientists are asked to translate general knowledge about a subject into recommendations for a particular community or population. Scientists who work from a widely accepted and rich knowledge base about a toxic substance, for example, may nonetheless lack evidence about the risks that substance poses when a particular amount of it has contaminated groundwater in a specific community for which no prior data exist (Rosa,
1998; Rosa et al., 2013). In such cases, the extrapolation required almost inevitably increases uncertainty and necessitates careful assessment; it also means that members of the local public may have knowledge that, while not scientific, can be essential to applying scientific knowledge accurately to the local context (Dietz, 2013b; Wynne, 1989). It is interesting to note that in cases of prolonged controversy, members of the public can develop considerable expertise in the relevant science (e.g., Brown, 1992; Epstein, 1995; Kaplan, 2000; Kinchy et al., 2014).
Controversies that invoke or exaggerate scientific uncertainty often focus on risk or risk–benefit assessments. Risk decisions by their very nature center on uncertainty. How likely is an event to occur? Are the benefits involved likely to outweigh the harms? Many of the most contentious policy decisions concern managing environmental and human health risks.
Risk has been formally defined as the product of the likelihood and severity of future harm (Brewer et al., 2007; Kaplan and Garrick, 1981). People, though, use multiple means of understanding risk, some simple and some sophisticated, and all influenced by emotions and complex considerations such as familiarity, uncertainty, dread, catastrophic potential, controllability, equity, and risk to future generations (Finucane et al., 2000; Keller et al., 2006; Slovic, 1987, 2000; Slovic et al., 2004). Although individuals make judgments about risk, there are larger societal risks related to science and technology that emerge from and benefit from broader societal debate (National Research Council, 1996). Gender and race also play a role in how people perceive risk (e.g., environmental health risks), for reasons that are not entirely clear (Flynn et al., 1994; Satterfield et al., 2004). Controversies over risk often trigger competitions in which each group with a stake in the issue tries to persuade the public to share its perception of a particular risk. Some ways of communicating risk in the context of controversy have been studied (Fischhoff, 1995). Indeed, much of the early work in the field of risk perception and communication addressed controversial environmental risks, including the siting of nuclear power plants and waste facilities, chemical hazards, and other technological risks (Slovic, 2016).
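The formal definition cited above—risk as the product of the likelihood and the severity of future harm—can be illustrated with a minimal sketch. The function name and the two hypothetical hazards below are assumptions for illustration only, not from the report or the cited literature; they show how two hazards with very different profiles can carry the same formal risk even though, as the risk-perception research notes, people often judge them very differently.

```python
def expected_risk(likelihood: float, severity: float) -> float:
    """Formal risk: probability of harm multiplied by its severity.

    `likelihood` is a probability in [0, 1]; `severity` is harm magnitude
    on whatever scale the analyst has chosen (units are arbitrary here).
    """
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability in [0, 1]")
    return likelihood * severity


# Hypothetical numbers for illustration: a common, low-severity hazard
# versus a rare, high-severity one.
frequent_minor = expected_risk(likelihood=0.10, severity=10.0)
rare_severe = expected_risk(likelihood=0.001, severity=1000.0)

# Both carry the same formal risk (1.0 in these arbitrary units), yet the
# dread, catastrophic potential, and unfamiliarity of the rare hazard
# typically make it loom far larger in public perception.
assert abs(frequent_minor - rare_severe) < 1e-9
```

The point of the sketch is the gap it exposes: the formal calculation collapses the two hazards into one number, while the perception factors listed above (familiarity, dread, controllability, equity) do not appear in the formula at all.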
Many of the strategies studied have focused on communicating and managing risk under conditions of uncertainty (Fischhoff, 2012; Fischhoff and Davis, 2014; Renn, 2008; Rosa et al., 2013), in part because uncertainty is inherent in scientific inquiry and because it is this uncertainty that is often exploited during controversies. As with other aspects of the science of science communication, however, much of the advice available to practitioners regarding risk communication under conditions of uncertainty or in the context of a crisis or controversy is based on case studies, personal experience, and face-valid principles lacking rigorous empirical testing. In addition, the available empirical research typically is limited by its focus on specific risk controversies or crises, frequently within the bounds of a
particular geography or period of time, rather than on cross-cutting or comparative issues. Moreover, research on presenting uncertainty often is based on hypothetical scenarios (see, e.g., LeClerc and Joslyn, 2012). More work is needed both to consolidate and to validate what is known in this area, and research is needed on the most effective ways to present risks of varying degrees of certainty.
In some instances, the relevant science is well established and agreed upon by the majority of the scientific community, but some uncertainty still exists. In such cases, the level of scientific agreement can be misunderstood or misrepresented in public discourse. Examples include current understanding from science about the human contribution to climate change, the health benefits of vaccines, and the validity of evolution. Some misunderstanding can arise from poor communication or from any of the challenges to understanding scientific information discussed in Chapter 2. The way the media portray the weight of evidence also can misrepresent the degree of scientific consensus on an issue, as when opposing views are presented equally (i.e., “false balance reporting”) regardless of the prevalence of a view or the extent to which it is supported by evidence (Bennett, 2016; Dixon and Clarke, 2013).
Analyses of past and ongoing science-related controversies indicate that organizations sometimes create doubt about what science says so as to protect their economic interests or ideological preferences (Brownell and Warner, 2009; McCright, 2000; McCright and Dunlap, 2003, 2010; Michaels, 2006, 2008; Michaels and Monforton, 2005; Oreskes and Conway, 2011). Both questioning the validity of research and exaggerating the extent of disagreement among scientists can create uncertainty and thus impede people’s understanding of what the scientific community is communicating (Brownell and Warner, 2009; Freudenburg et al., 2008). This strategy was used by the tobacco industry to obscure the harmful effects of smoking from the public, but interested parties have used similar approaches to exaggerate supposed uncertainty about climate change and the food industry’s contributions to obesity (Brownell and Warner, 2009; Dunlap and McCright, 2010). Organizations and individuals whose interests are served by sowing doubt about findings from science may feed particular media outlets and policy makers who often are sequestered from other messages, leading to media “echo chambers” in which only their inaccurate assessments of uncertainty are heard (Dunlap and McCright, 2010).
Correcting misperceptions of scientific consensus on issues such as climate change and vaccination can be a first step toward influencing the attitudes that in turn shape decisions. Although it is true, as described previously, that many people resist information that appears to threaten their beliefs, it is also true that an accurate sense of scientific consensus can have an impact on people’s policy preferences. When people learn that most scientists agree about climate change, for example, they are more likely to believe that global warming is occurring and to express support for policies aimed at mitigating it (Ding et al., 2011).
In combating inaccurate claims of uncertainty, it may be useful to communicate repeatedly the extent of expert agreement on the science concerning a contentious issue. Such repeated communications can occur in many places, involve diverse people, and take various forms—conversations, use of social media, presentations, advertising, communication campaigns, and media interviews (see van der Linden et al., 2015). As discussed earlier, however, communicating scientific consensus may also deepen divisions in views about science, such as when the communications involve perceived attacks on values or the groups that hold them (Kahan, 2015). More research is needed to determine ways of communicating expert consensus that can help achieve understanding.
Other research suggests that there can be benefits to being explicit about the uncertainty involved in scientific understanding and fully transparent about how scientific conclusions are reached and how uncertainty is reduced over time (Druckman, 2015; Jamieson and Hardy, 2014). Explaining how conclusions have been reached may build credibility, as well as create greater public interest in an unfolding story or mystery about scientific investigation and discovery (Druckman, 2015). More research is needed on the efficacy of consensus messaging, ways of talking about uncertainty, and the conditions under which these communications are likely to be effective.
In some domains, including sexually transmitted infections (Downs et al., 2004), vaccination (Downs et al., 2008), climate change (McCuin et al., 2014), and environmental hazards (Lazrus et al., 2016; Morss et al., 2015), effective communication has been developed through research with the intended audiences aimed at understanding how to address their concerns and misunderstandings. Although people tend to retain their common-sense mental models of how the world works (Niebert and Gropengießer, 2014; Vosniadou and Brewer, 1992), controlled laboratory experiments show that people can revise their thinking in response to information that explains underlying causal processes (Ranney et al., 2012). This approach has been used to improve understanding of how people think and make decisions about a variety of contentious science issues. More work is needed, however, to determine how such approaches to understanding audiences could be implemented on a large scale.
Science communication may take into account the beliefs and values of varied audiences and communicate well about the weight of evidence but still go unheard in discussions about controversies. High stakes, conflicting interests, uncertainty, and concerns about risk and the consequences can expand the number and diversity of people who attempt to communicate about science, as well as the number of parties who use the science. Public officials may call on science experts to help craft solutions to policy issues of concern to the public. Lawyers and courts may seek evidence pertinent to important judicial cases that garner publicity. Members of the media may look to noteworthy findings from science in an effort to craft stories that will speak to the needs or interests of their audiences. And those with stakes in the outcome on either side of a debate (partisan politicians, corporations, other mobilized advocacy or interest groups) may use science that supports their claims.
Because science-related controversies often involve or invoke policy responses, they raise the likelihood that the key actors will be organized interests (e.g., corporations, partisans, organized religion, advocacy groups, government agencies, universities) subject to various pressures, incentives, and institutional constraints. These elements may matter as much to audiences as an accurate understanding of the relevant science, or even more. In the context of controversies, scientists and the enterprise of science may be seen (sometimes correctly, sometimes not) as interested parties with their own motivations or organizational interests at stake.
Scientists are not immune from personal, academic, and societal pressures. Although it can be difficult for the scientific enterprise to self-correct, science as an institution possesses norms and practices that restrain scientists and offer means for policing and sanctioning those who violate its standards. In contrast, as discussed earlier, those who are not bound by scientific norms have at times intentionally mischaracterized scientific information to serve their financial or political interests (Dunlap and Jacques, 2013; Farrell, 2016; McCright, 2000; McCright and Dunlap, 2003, 2010; Michaels, 2006, 2008; Michaels and Monforton, 2005; Oreskes and Conway, 2011).
Almost all participants in a controversy will seek to frame their issues to their advantage (for a detailed discussion of framing, see Chapter 2). Such framing influences the amount of attention an issue receives, which arguments or considerations are seen as legitimate, and which individuals and groups have standing to express their opinion or participate in decisions (Nisbet and Huge, 2006). As attention to an issue grows and conflict increases, new participants may attempt to bring different frames to bear in an attempt to recast the conflict in ways that serve their goals. Food and environmental activists, for example, have promoted the term “frankenfood” to suggest unnatural dangers associated with food biotechnology, thus framing the issue in terms of unknown risks and unintended consequences rather than the industry-promoted focus on reducing world hunger. Likewise, climate change activists have adopted the term “big oil,” a headline-friendly phrase that triggers associations with corporate accountability and wrongdoing. The echo of the phrase “big tobacco” also associates the issue of climate change with an earlier generation’s discovery of the dangers of smoking. Once a frame has influenced people’s views, those views can be difficult to change. One study, for example, found that people who had never heard of carbon capture and storage could be influenced by uninformative arguments either for or against the technology, and that those manipulated feelings persisted even after these people had read carefully balanced communications designed to educate them (Bruine de Bruin and Wong-Parodi, 2014). More needs to be known about how audiences make sense of the competing frames they encounter from multiple sources.
Many science communicators, especially those who are scientists, typically feel an urgent need to correct information that is inconsistent with the weight of scientific evidence. As noted above, however, doing so is difficult under most circumstances (Cook and Lewandowsky, 2011; Lewandowsky et al., 2012). In addition, well-intentioned, intuitive efforts to debunk misinformation can have the unintended effect of reinforcing false beliefs, especially among the more educated (Cook and Lewandowsky, 2011; Nyhan et al., 2013, 2014; Skurnik et al., 2005). Correcting false beliefs is also more difficult when the incorrect information is consistent with how people already think about the information or issue. In fact, when people are challenged in their beliefs, they may react by dismissing the credibility of the messengers who provide the corrections (Lewandowsky et al., 2012).
In addition, repetition of false information, such as that which may come from faulty science not generally accepted by the scientific community (i.e., "junk science"), can reinforce belief in that information even if it is followed by a correction. Providing too much or overly complex information to correct a simply worded myth is also likely to be ineffective. Debunking efforts may be more effective with the undecided majority of people than with the firmly entrenched minority. More study is needed to determine for whom and under what conditions current understandings about debunking hold.
One approach to avoiding the risks of debunking is to focus on messengers instead of their messages. In some cases, inducing skepticism about or distrust of a communication can help combat the effects of misinformation (Bolsen and Druckman, 2015). One such strategy, alluded to in Chapter 2, involves “prebunking” or “inoculating” audiences against the intentional efforts of individuals or organizations to mislead the public. This can be done by warning people that they may be exposed to misinformation and explaining why misleading information is being promoted (Bolsen and Druckman, 2015; Cook, 2016; Lewandowsky et al., 2012), but more needs to be known about when and for whom this strategy can be effective.
When the source of misinformation is within the science community itself, even well-publicized retractions and corrections can have little effect (Mnookin, 2012). One study suggests three factors that increase the effectiveness of retractions: warnings at the time of initial exposure to the misinformation; repetition of retractions; and corrections that tell coherent, plausible alternative stories, explaining the source and motivation behind the misinformation (Lewandowsky et al., 2012).
As described in Chapter 2, the successes and failures of large public health campaigns offer a number of lessons. They suggest that gaining sufficient exposure, engaging with audiences early on, and then applying a variety of communication approaches over an extended period of time can help ensure that the perspectives of science are heard among the amplified voices that may characterize science-related controversies.
Another way to bring accurate scientific information to the public is to work with opinion leaders—politicians, business leaders, community figures, journalists, celebrities, and others with a proven capacity to influence people's views. Marketers, advertisers, and campaign strategists have for decades targeted opinion leaders as effective promoters of positions, candidates, or products. Until recently, however, opinion leaders received little attention from researchers and practitioners in science communication. Yet research from the fields that have targeted opinion leaders suggests effective approaches that could be adapted for enlisting such leaders in science communication (Nisbet and Kotcher, 2009). At the same time, engaging opinion leaders may carry some risk. Such leaders come from the realms of politics, media, and celebrity, all of which generally have less public credibility than scientists. Whether practices associated with industry and politicians could damage the credibility of scientists is unknown (Lang and Hallman, 2005; Pew Research Center, 2015a).
Opinion leaders who are not prominent individuals but are nonetheless influential in their social circles can be identified informally through observation or surveys (Frank et al., 2012; Nisbet and Kotcher, 2009; Roser-Renouf et al., 2014). Research suggests this to be a promising way to help achieve the goals of science communication. The actions people report being willing to take to reduce climate change, for example, are most likely to be influenced by a person close to them, such as a significant other or close friend (Leiserowitz et al., 2013). Similarly, at least one-third of Americans would sign a petition, attend a meeting, or support a candidate who shared their views on climate change if asked by someone they “like and respect” (Leiserowitz et al., 2014). Examples outside of science communication also indicate that community members can play important roles as opinion leaders (Dalrymple et al., 2013; Howell et al., 2014).
Of course, scientists themselves can serve as trusted opinion leaders, sparking conversations and the sharing of information among coworkers, friends, neighbors, and acquaintances both in their everyday interactions and in social media. One study found that, beyond the social influence aspects of conversation, discussing science with trusted acquaintances led to a richer understanding of climate change and greater use of this knowledge in making decisions or offering an opinion (Eveland and Cooper, 2013). Likewise, a survey that tracked the discussion patterns of Americans for 2 years found that people’s attention to science-related news coverage was associated with having more frequent conversations about science, which in turn was associated with an increase in overall concern about climate change. This heightened concern was related to a subsequent increase in attention to news coverage of science, as well as to more frequent science-related conversations and even greater levels of concern about climate change over time (Binder, 2010).
Further research is needed to understand how different types of opinion leaders may affect people’s understanding of and tendency to use information from science in their decision making.