Communicating Results, Interpretations, and Uses of Biomonitoring Data to Nonscientists
Recent history has seen tensions arise over the monitoring of human tissues to assess exposure to environmental chemicals, and those tensions underscore the importance of communication about biomonitoring. For example, concern over whether biomarker data would prompt new mothers to abandon beneficial breast-feeding for fear of contaminating their children helped to scuttle proposed biomonitoring legislation in California. Biomonitoring makes environmental exposure personal (Chapter 1), raising concerns about materials that seem out of place in the human body, such as perchlorate in breast milk and flame retardants in fetal cord blood. There is also great anxiety over “erroneous” use of biomonitoring data to reach premature conclusions about health effects or about contaminant sources and exposure reduction. Another example of communication issues within “fractious debates” (Chapter 4) concerned the release by the Centers for Disease Control and Prevention (CDC) of its 2005 report on biomonitoring data. Outside groups trumpeted two allegedly competing implications: “The nation is awash in toxics.” “Look at the progress made in reducing exposures.” Anxiety among laypeople can be heightened by frequent reporting of biomonitoring data that are not fully explainable with current scientific knowledge.
Communication is essential for proper interpretation and use of biomonitoring data. Earlier in this report, we emphasized the intricate and mutual involvement of analysis, management, and communication of environmental biomonitoring (Chapter 1); the contentious social and political context with diverse constituencies for biomonitoring information; and the need to incorporate communication-evaluation planning, consideration of
partnerships, and constituency assessment into study design (Chapter 4). This chapter focuses on issues entailed in reporting results of biomonitoring studies and in discussing their interpretation and use. If study design included partnership with one or more constituencies, continued partnership on implementation of evaluation and of reporting results is prudent. Partnerships could be undertaken even without prior partnership in the study planning, but evaluation is likely to determine that communication would have been even more effective if partnership had begun earlier. Constituency assessment should have been completed by the time results are reported, although a significant lag time since the planning stage might necessitate updating this assessment to ensure there have been no critical changes before communication of results begins. Public perceptions about uncertainty, exposure, and other biomonitoring-relevant topics discussed in this chapter might inform constituency assessment as described in Chapter 4. The remainder of Chapter 6 assumes that appropriate evaluation planning and implementation, partnership consideration and implementation, and constituency assessment have been done, and therefore this chapter focuses on reporting of results, interpretation, and use.
Without effective communication, particularly between biomonitoring researchers and nonscientists and among nonscientists themselves, proper interpretation and use of biomonitoring data will occur only with difficulty, conflict, anxiety, and waste of time and money. The challenges for biomonitoring reflect those common to communication of risk assessment (not to mention risk management) as identified by the field of risk communication. There are failures by information generators to characterize interpretation of data fully and fairly or to attend to constituents’ information needs and concerns; by information reporters to convey information complexities and caveats fully while avoiding simple “sound bites”; and by information recipients to be prepared (for example, with knowledge and attention) to deliberate adequately on the information’s meaning for risk management.
Because the literature on biomonitoring-specific communication is extremely scarce, this chapter addresses risk communication issues most relevant to biomonitoring.
LIMITS OF THIS CHAPTER’S DISCUSSION
First, we focus here on communication with nonscientists, partly because that is where the challenges often are the most difficult1 and partly
because that is where the sketchy evidence on communication issues is centered. We also see scientists, as well as institutional policy-makers and communicators, as among the prime audiences for this chapter’s advice on communication. As noted in Chapter 4, potential discussants of environmental biomonitoring are more diverse than just “laypeople” and “experts,” and there is great diversity within each constituency cited in Chapter 4 in the nature and degree of beliefs relevant to biomonitoring. Communication is best described as at least a two-way, if not a multiple-voice all-talking-at-once, conversation in which scientists are not the sole generators of biomonitoring data (see Chapter 2), let alone the only ones able to interpret and use the data.
Second, the committee does not equate nonscientists with the general public, although much of the current scientific literature on lay beliefs and attitudes relevant to biomonitoring focuses on the latter. We treat nonscientists as a broader category because that describes reality: even in universities, some biomonitoring communicators and constituents are laypeople, and that is even more the case in government agencies, business firms, foundations, activist groups, and among politicians, as well as in “the general public.”2 The need to distinguish biomonitoring communication for informing citizens from that for informing other constituencies is unclear.3 No doubt, similarities can be taken too far, such as in giving an organization a clinical consultation or in assuming that officials of an agency or firm are unreceptive to quantitative data or to explicit consideration of tradeoffs between variously uncertain health benefits and similarly uncertain exposure-reduction costs. The literature on reporting scientific results to lay decision-makers in government and other institutions (e.g., Brown 1985; NRC 1989; Balch and Sutton 1995; Stern and Fineberg 1996; PCCRARM 1997; Andrews 1998; Thompson and Bloom 2000) can be useful. Potential diversity is the reason to support research on how biomonitoring-related concepts might differ among discussants. However, until such research is conducted, it is reasonable to treat biomonitoring knowledge of, and communication with, nonscientists as if there were no differences within that group other than those that a competent and early constituency assessment (Chapter 4) would take into account in determining how to design an effective biomonitoring study. We do not minimize the challenges of communicating with different constituencies, but we believe that at this stage of the art far more is to be gained by stressing similarities among biomonitoring communications of all types.
Third, the scientific literature on risk perception and communication reviewed in the rest of this chapter is based entirely on external-exposure monitoring and other nonbiomonitoring aspects of environmental issues. To the committee’s knowledge, no studies have directly explored biomonitoring beliefs or communication. We assume that the cited literature can be extrapolated to biomonitoring, but without evidence that it is a safe assumption. The risk communication literature shows that generalization from experience or other research topics can backfire if not verified with empirical testing (for example, Morgan and Lave 1990). That is why the communication-research agenda recommended by the committee is critical: it will fill a serious gap in our knowledge that has been left by the non-biomonitoring priorities of researchers and research funders—and a gap probably not fillable without explicit funding by biomonitoring sponsors.
Fourth, although good communication is critical for interpretation and use of biomonitoring data, this dictum should not blind anyone to the limits of what communication about biomonitoring can accomplish, given the volatile social and political context cited in the introduction to Chapter 4. Communication will not eliminate all value conflicts, will not obscure or reduce all imbalances of power between parties contending about what constitutes good science or appropriate risk management, and will not even get everyone to agree on interpretation of “facts” even if they agree on the facts themselves. Some gaps in knowledge, responses to uncertainty, power, and values are too large to bridge simply with what one says and how one says it, rather than (for example) with what one does.4 In making these remarks, we do not wish to encourage the view that communication about biomonitoring would be ineffective or inefficient. On the contrary, communication and systematic evaluation of communication techniques have been given inadequate attention in environmental management (Chapter 4). Neither overenthusiasm of supporters, as reflected in unmet (and undeliverable) promises, nor cynicism or apathy should undermine implementation of biomonitoring communication.

4It is not our mandate to discuss noncommunication means of resolving environmental-management challenges (e.g., NRC 1997; 2004), but a few examples can be useful. The joint fact-finding and analytic-deliberative processes cited in Chapter 4 can narrow factual or values disputes. Stern (1991) suggested that “learning through conflict” could be “a realistic strategy for risk communication” if bolstered by a supportive infrastructure (such as incentives for risk analysts and communicators to resist employer and other pressures, independent evaluation of risk messages, watchdog groups, institutional debates more open to citizen participation, and wider distribution of resources for risk communication). Finally, direct risk-reduction efforts by organizations and individuals (such as emission controls; favoring nonpersistent, nontoxic inputs to production; and avoidance of possible sources) can minimize some communication challenges if decision-makers believe that such steps are appropriate.
In the next section of this chapter, we discuss how “principles” of risk communication provide a good starting point, but details of a communication strategy must be case-specific and tested empirically before implementation. Examples from the literature on lay response to uncertainty and trust in risk-managing institutions demonstrate that point. Then the chapter argues that a proper balance must be achieved between communications seeking to avoid false positives (such as the inference that detection of a biomarker signals inevitable adverse health effects) and false negatives (such as the belief that nondetections or low concentrations relative to a reference range indicate no health problems). The core of the chapter discusses how different groups of biomarkers and thus different kinds and amounts of relevant information can affect interpretation and use. The chapter concludes with practical and research recommendations to enhance the infrastructure for effective communication about biomonitoring.
PRINCIPLES OF RISK COMMUNICATION
Our aim is to inform research and practice on environmental-biomonitoring communication, not to provide a primer on risk communication in general (a few, widely varied, examples of primers include ATSDR 2001; Hance et al. 1988; NRC 1989; Pflugh et al. 1994; Stern and Fineberg 1996). However, a brief background can both inform potential communicators who are new to this topic and put biomonitoring-relevant discussions into context. The extensive practical literature on risk communication can be drawn on for more detailed instruction as needed.
Much attention has been garnered by “principles” of communication that professionals are advised to follow. Well-known examples are seven principles articulated for the Environmental Protection Agency (EPA): accept and involve the public as a legitimate partner (see our Chapter 4); plan carefully and evaluate efforts (Chapter 4); listen to the public’s specific concerns (Chapter 4); be honest, frank, and open; coordinate and collaborate with other credible sources; meet the needs of the media; and speak clearly and with compassion (Covello and Allen 1988). Those and related principles have face validity and often practical utility despite their apparent obviousness and abstractness. For example, treating your constituents as though they are ignorant, hysterical, self-interested, or ideologically
driven is no more likely to be effective than if you were treated that way by someone who wanted you to comprehend and agree with his or her message. Thus, one of the first commandments of effective communication is never to assume what any party knows or feels without empirical testing. Another is to show respect for each other, regardless of what you think you know about the other’s beliefs.
Such principles emerge from practitioners’ deliberation on personal experiences, complemented occasionally by systematic observation or experimentation. As with the Golden Rule and its equivalents in other cultures (“Do unto others …”), it can be surprisingly difficult to recognize shortfalls in one’s performance of principles regarding respect, honesty, clarity, and the like, let alone to modify one’s behavior to put them into practice. So repetition of such principles in communication guides and careful attention to them by would-be communicators are by no means superfluous.
However, “rules for risk communication are not enough” (Rowan 1994). There are two critical notions in biomonitoring communication: the need for empirical testing of even the assumptions of the expert or experienced communicator (Morgan and Lave 1990) and attention to situational details that broad principles alone cannot provide and published principles may not even cover. For example, if your goal is to communicate biomonitoring findings to a constituency, what do you know about its members’ beliefs, attitudes, behavioral intentions, behaviors, and policy preferences with regard to this topic, in both mean responses and their variability? How are they similar to or different from other constituents on these measures, including those who will hear your conversation without being deliberately included? How do your background and current environment, and those of your institution, limit what you could say or even imagine saying? How aware are you of such personal and institutional limits? How might constituents’ or your own limits or flexibility affect communication success? Those and other contextual factors affect whether and how mutual understanding, agreement, and action on biomonitoring data occur; and people charged with such communication must learn the answers to these and related questions. The tension between “principles” and effective communication practices is illustrated in discussion of uncertainty (and variability) and trust.
Uncertainty and Variability
As with other data used to evaluate health risks, uncertainty will characterize interpretation and use of biomonitoring results for years to come, although little is known about how nonscientists deal with technical uncertainties. In general, in their daily lives, people avoid wherever possible
uncertainty about bad outcomes from activities that have small or uncertain benefit; control over outcomes is preferred to the lack of control that uncertainty implies (Edwards and Weary 1998). However, both citizens and policy-makers make decisions in the face of uncertainty (e.g., Lopes 1983); decisions are often rationalized, if not driven, by that uncertainty.
It is common for scientists and officials to believe that they are far less uncertainty-averse with respect to environmental risks than is “the public” (Lopes 1983; Carpenter 1995; Einsiedel and Thorne 1999); for many environmental-health scientists, uncertainty is a professional “given.” Many scientists and officials apparently deem citizens unable to conceptualize risk-management uncertainties (Frewer et al. 2003)—a view not shared by this committee. In fact, what little evidence we have suggests that a globally uncertainty-averse public is a myth; responses vary widely across the population (e.g., Furnham and Ribchester 1995). Johnson and Slovic (1998) found that 35% of a college-student sample preferred to know whether a situation was safe or unsafe rather than to get a risk probability or range of risk estimates. Frewer et al. (2002) found that only 13% of their UK sample preferred no information about risk until all uncertainty had been eliminated. They also found a public demand for information on food-risk uncertainty as soon as the uncertainty was identified and a greater public acceptance of uncertainty about the science than of uncertainty due to government’s ignorance of the nature or extent of a problem. Those authors concluded that communication should focus on “what is being done to reduce the uncertainty.” Miles and Frewer (2003) speculated that communication of uncertainty in risk estimates about a hazard exposure over which people feel they have little individual control might make the hazard seem “out of control” by institutions too, but their study design did not allow a direct test of that hypothesis.
Overall, most risk-communication guides urge open and transparent discussion of uncertainty (e.g., NRC 1989; Hance et al. 1988; ATSDR 2001; also see literature reviews in Johnson and Slovic 1995, 1998).5 Occasionally, the guides go into slightly more detail. For example, Hance et al. (1988) suggest “be specific about what you are doing to find the answers,” “consider involving the public in resolving the uncertainty,” “give people as much individual control as possible over an uncertain situation,” “stress the caution built into standard-setting and risk assessment,” “if people are demanding absolute certainty, pay attention to values and other concerns, not just the science,” and “acknowledge the policy disagreements that arise from uncertainty.” Despite their and others’ discussions of what this advice might mean, however, such principles carry practitioners only so far.
Sketchy but provocative suggestions are beginning to emerge from empirical studies of uncertainty in risk communication. Frewer et al. (1998) provided persuasive information about genetic engineering in food production to British citizens who had positive or negative attitudes toward the technology. Half saw a statement of uncertainty; half did not. The statement said “we are reasonably certain that there are minimal risks …, we cannot be 100 percent certain. This is true of any scientific process. However, the information provided has been derived from the best scientific information available” (Frewer et al. 1998). The admission of uncertainty increased acceptance and reduced rejection of genetic engineering of human DNA, animals, and plants. People with prior negative views were particularly likely to “find the information more informative if information about uncertainty is included” (Frewer et al. 1998).
Carpenter (1995) noted that “the client/recipient” might prefer “unambiguous predictions and advice” now to candor about uncertainties, but environmental professionals’ credibility will disappear if “events … show them to be substantially wrong.” However, White and Eiser (in press) suggested that trust experiments show that if, in the face of uncertainty, professionals make a “mistake … of the right kind [such as a precautionary rather than risk-taking action on the public’s behalf, it] could actually make them seem more trustworthy to lay observers because a) it shows they are open and honest and b) people accept that even experts make mistakes sometimes.” Thus, the results of communicating about uncertainty depend on the context.
That conclusion is complemented by studies (Johnson and Slovic 1995, 1998; Johnson 2003a, 2004a) that examined how people reacted to uncertainty as expressed in a range of estimates of risk (for example, from 1 in 10,000,000 to 1 in 100,000). As reported by Johnson (2003a, 2004a), the proportions of college-student and working-class industry-neighbor samples that found the producer of such a risk range to be honest and competent ranged from 23% to 49%. Ratings for dishonest and incompetent were 12-27%, honest but incompetent 9-17%, and competent but dishonest 10-20%; 7-18% did not know. The honest-competent inference clearly dominated even without any signal of what (if anything) would be done in light of the uncertainty in risk estimates, although there was no majority view. Adding a precautionary signal (such as intent to reduce exposure) might increase the proportion that found official discussion or representation of uncertainties in biomonitoring cases to be both honest and competent, just as a clear signal of inaction might sharply increase negative responses. Systematic testing of those and other uncertainty hypotheses is needed because we do not yet know what factors (such as perceived benefits of hazardous activities) might affect such relations. Empirical examination of principles of risk communication related to uncertainty is in its infancy.
Carpenter (1995) specified four questions that communication should try to answer: What do we know, with what accuracy, and how confident are we about our data? What don’t we know, and why are we uncertain? What could we know, if we had more time, money, and talent? What should we know to act in the face of uncertainty? The first two questions are ones that CDC uses when it reports results of site-specific biomonitoring studies (J. Pirkle, CDC, personal commun., May 16, 2005). Both are valuable, but attention should also be devoted, in communication research and practice, to the latter two questions.
Communicating about variability might be less challenging than communicating about uncertainty, although equally important. Anecdotal information suggests that people tend to be aware of or to recognize quickly the concept of variability in susceptibility and exposure, so communicating about variability might be easier than discussing probability and other unfamiliar concepts. Furthermore, uncertainty can be reduced to some degree if sufficient and proper effort is devoted to that end, and failure to undertake uncertainty reduction might undermine trust, whereas variability is immutable (Chapter 4). However, no research has explored those hypotheses, and other aspects of biomonitoring variability (for example, in excretion rates) important for interpretation and use of biomarker data are probably less familiar to laypeople.
Trust

On its face, the topic of trust is more abstract than uncertainty in application to biomonitoring. However, experience and correlational studies suggest that trust in institutions is a critical factor in judgments of how risky something is, and it is likely, in the contentious atmosphere surrounding biomonitoring, that trust will also affect whether nonscientists see having biomarker concentrations in one’s body tissues as risky. For example, later in this chapter we point to evidence of skepticism about the protectiveness of benchmarks based on external-exposure monitoring and suggest that it might apply to biomonitoring benchmarks, too.
Interpretations of experience and initial research on “trust asymmetry” in the risk literature (Slovic 1993) suggest that trust is easy to lose and hard to gain. The thrust of the “principles” literature, however, puts practitioners in a bind. They must perform flawlessly to avoid ultimate failure, so it seems, but they have no guidance on building or maintaining trust much more specific than “plan carefully” (plan what?), “listen” (how and to whom?), and the like. More recent research suggests that any asymmetry in gaining and losing trust can depend on the risk object (such as nuclear power vs pharmaceutical industries); studies differ in whether good or bad news has stronger effects on judged risk. Whether trust is
asymmetric depends on such factors as the constituency’s attitudes (for example, trusting groups resist bad news, and skeptical ones resist good news) and on whether the good or bad “news” concerns risk-management policies or concrete events (Cvetkovich et al. 2002; White et al. 2003; White and Eiser 2005).
Studies also are beginning to suggest that demonstrating that one shares salient values with one’s constituencies—such as preferring to take the risk of creating false alarms (false positives) rather than misses (false negatives)—can build trust (e.g., Earle and Cvetkovich 1995; Cvetkovich and Winter 2003; Siegrist et al. 2003; White and Eiser, in press). For example, a comparison of the same risk at different times (for example, this year vs last year) was suggested to be among the best ways to put risks into context. It was shown empirically that the public ranked it first among 14 comparisons (Roth et al. 1990; Johnson 2003b). However, the message tested also included elements of risk reduction (“Despite the extremely low health risks to the community from emissions … at our plant, we are still looking for ways to lower these levels further. These are some of the plans we have under way to accomplish this….”) and information-sharing not strictly part of the temporal comparison. With those removed from the text, it dropped to a middle-to-low rank (Johnson 2004b). In other words, it was the promise to keep searching for ways to lower the risk further and to keep the concerned community informed about plant operations that fostered positive reactions, not the risk comparison itself. Those studies differ in the conditions that make value-sharing helpful. For example, some scholars argue that effective demonstration that one shares the constituency’s salient values is most important when people are unfamiliar with a hazard (often the case with environmental chemicals). Others suggest that the critical factor is how constituents judge the balance of risks and benefits to themselves; if they see few personal benefits (also commonly the case when environmental toxicants are being considered), a precautionary stance by risk managers becomes more desired. The field is not yet developed enough to provide guaranteed recipes for trust-building. 
Such recipes may be impossible to provide, given variability in social contexts, and might be undesirable for ethical and democratic reasons. But the studies point the way toward moving beyond general principles to more-detailed advice. Clearly, biomonitoring efforts will vary widely in both need and ability to match the full scope of suggestions for promoting trust, such as the analytic-deliberative processes discussed in Chapter 4 (Stern and Fineberg 1996). But study funders and managers would benefit from considering whether and how their efforts would be enhanced by pursuing trust-enhancing techniques and by empirically testing and expanding relevant communication principles.
TRADING OFF AVOIDANCE OF FALSE POSITIVES AND FALSE NEGATIVES IN COMMUNICATION
Communication challenges are often intimately entwined with risk-management challenges (Chapter 1), and biomonitoring is no exception. Scientists use statistical and other criteria to err on the side of accepting false negatives (rejecting a hypothesis that turns out to be true) because they see false positives (accepting a hypothesis that turns out to be false) as the outcome more dangerous to science’s advance and credibility.6 In a parallel sense, there is a strong emphasis in current institutional messages about biomonitoring, as well as in concerns about incautious expansion of biomonitoring, on avoiding message recipients’ false-positive interpretations (Becker 2005; Duggan 2005; Osterloh 2005; Robison 2005; Schober 2005). For example, government agencies and industry groups have argued that one should warn against inferences that health effects would follow from observed biomarker concentrations when (as for most biomarkers) the effects are not certain. Similarly, messages should not imply without evidence that a specific activity is the source of observed body burdens or that particular actions will reduce exposures to environmental chemicals. The assumption in those arguments is that most claims about health effects or sources will turn out to be false positives, so officials do not want other people (such as “the general public”) to conclude prematurely that a health effect could occur or a source be responsible. Avoiding the creation of such false positives and possible large negative outcomes is a legitimate risk-management aim that biomonitoring communicators should respect.
However, there are flaws in an unreflective emphasis on avoiding creation of false-positive inferences as a result of biomonitoring communications. First, it fails to discriminate between good and bad reasons for fearing that messages will evoke false-positive conclusions. Erroneous assumptions about the psychological, economic, or political fallout of declaring a biomarker concentration as evidence of a public-health threat can lead to misallocation of societal resources. For example, it is an enduring myth among many policy-makers that “panic” is the default response of “the public” to natural, social, and technological hazards, whether hurricanes, terrorism, or “pollutants” in people’s bodies. In some cases, individual or collective human responses may be inappropriate, but
people rarely exhibit mass hysteria (Wenger 1987). Second, ensuring that messages do not create false-negative interpretations by message recipients also can be an important risk-management goal. Unrecognized threats can be as undesirable as overlooked opportunities, and apathy as dangerous as fear; some claims about health effects or sources will turn out to be true, and we cannot tell prospectively which is which in the face of the uncertainties to which most biomarkers are subject. Obviously, tradeoffs are necessary because false positives (such as inferring health effects from biomarker findings when the effects are nonexistent) and false negatives (such as assuming that biomarker concentrations cause no health effects when in fact they do) cannot be minimized simultaneously except in rare circumstances. Third, for communication specifically, a failure during communication planning to recognize the flaws of a strategy aimed only at avoiding false-positive responses might make that strategy backfire. Countervailing facts, dissenting opinions, and divergent values will tend to emerge, however carefully the communicating organization tries to obscure them (Hance et al. 1988), and the outcome of their emergence may be loss of credibility for the organization and its message (Slovic 1993).
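The statistical tradeoff invoked here can be sketched numerically. The following is a hypothetical illustration (the distributions and thresholds are invented, not drawn from any biomonitoring dataset) of why, at fixed measurement precision, the two error rates move in opposite directions as the decision threshold shifts:

```python
# Hypothetical sketch of the false-positive/false-negative tradeoff.
# The distributions and thresholds are invented for illustration and do
# not come from any biomonitoring study.
from statistics import NormalDist

unexposed = NormalDist(mu=0.0, sigma=1.0)  # biomarker signal with no true exposure
exposed = NormalDist(mu=1.5, sigma=1.0)    # biomarker signal with true exposure

thresholds = [0.5, 1.0, 1.5, 2.0, 2.5]
# Flagging an unexposed person is a false positive; missing an exposed one
# is a false negative.
false_positive_rates = [1.0 - unexposed.cdf(t) for t in thresholds]
false_negative_rates = [exposed.cdf(t) for t in thresholds]

for t, fp, fn in zip(thresholds, false_positive_rates, false_negative_rates):
    print(f"threshold={t:.1f}  P(false positive)={fp:.3f}  P(false negative)={fn:.3f}")
```

Tightening the threshold suppresses false positives only by inflating false negatives; shrinking both at once requires a more precise measurement (smaller sigma) or a larger true difference, which is why the two error types cannot be minimized simultaneously except in rare circumstances.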
A balanced and forthright communication about what is known, or can be reasonably (or unreasonably) inferred, from biomonitoring data would be prudent (Hance et al. 1988; NRC 1989; Pflugh et al. 1994; Carpenter 1995; Stern and Fineberg 1996; ATSDR 2001). Acknowledging tradeoffs between the dangers of communication that could foster false positives and the danger of communication that could foster false negatives, and explaining why the tradeoff embodied by a specific message was chosen, could reduce constituencies’ concerns even if some continue to oppose the tradeoff. Joint decision-making about a tradeoff (Chapter 4), in which biomonitoring researchers partner with their constituencies, is another useful strategy. Putting potential counterarguments into one’s initial messages and pointing out their flaws can inoculate constituencies against later criticism (Johnson 2002b). Similarly, acknowledging the wide range of possible interpretations and discussing their relative strengths and weaknesses can undercut the effect of overemphases by contending constituencies on particular interpretations (for example, the disparate responses to CDC biomonitoring reports by stakeholders cited in the introduction to this chapter).
Our goal here is not to dictate acceptable tradeoffs to decision-makers but to make explicit the problems that an excessive emphasis on avoiding creation of alleged false positives might pose. The appropriate balance between having constituents avoid drawing false-positive inferences and avoid drawing false-negative inferences will vary with the aims of a biomonitoring study and the threats of concern to the relevant decision-makers (such as researchers, sponsors, and subjects). Our intent here is that the
decision on balance be an explicit one, whatever it is, rather than be the outcome of untested stereotypes or absent-mindedness.
DISCUSSING RESULTS BELOW THE LIMIT OF DETECTION FOR BIOMARKERS
One of several examples in this chapter of the “balance” issue is the study that detects no biomarkers. Nondetection could be ideal for everyone except (perhaps) biomonitoring researchers: all else being equal, no one wishes to find evidence of human exposure to environmental chemicals. Study subjects and wider populations that share their potential external exposures could be told that their exposure is no worse, and perhaps better, than that of the reference-range population.
However, the communication challenge of results below the limit of detection is not quite so easily resolved. Each environmental-monitoring technique, including those of biomonitoring, has a limit in the amount of a chemical that it can reliably and validly measure in a given matrix. Below that limit, it is impossible to tell how much of the substance, if any, is in the sample. Experience with or modification of the technique or invention of a new one can lower the detection limit eventually, but in the short run it is fixed. That is one reason, when multiple biomonitoring methods are available, that “the method chosen can have an appreciable effect on the results and their interpretation” (Helsel 1990, cited by Bates et al. 2005).
A result below the limit of detection is not an indicator of nonexposure, and this needs to be conveyed clearly to lay constituents of biomonitoring. Similarly, it is not necessarily true in all cases that concentrations below the detection limit will not cause health problems. In the case of external measures of exposure (such as concentrations in drinking water), public-health standards for carcinogens are commonly set above the value that experts believe to be protective of health (whether that is EPA’s zero maximum-contaminant-level goal or the target one-in-a-million risk for New Jersey’s drinking-water standards). Because measurements of the health-protective concentration are not reliable, the standard is set at the detection level (or at the level that treatment technology can reliably or cost-effectively reach, if that is higher). Regulators hope that the standard can be set at the health-protective level when technology improves. It would not be surprising if biomonitoring reference benchmarks (Chapter 5) for some chemicals were below reliably measurable levels, and this raises questions about whether results below the limit of detection indicate lack of potential health problems. Study communication about results below the limit of detection should take those issues into account by explaining both the reassuring news of relatively low exposure and the possibility (when applicable) that the health risk might not be zero.
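The practical consequence of censored (below-LOD) results can be illustrated with a small calculation. The Python sketch below uses hypothetical concentrations and compares two common substitution conventions for nondetects, zero and LOD/√2 (the latter is often used in national biomonitoring reports); the point is that the choice of convention alone shifts summary statistics, so a mean computed over data with many nondetects is partly an artifact of the analyst’s assumption.

```python
import math
from statistics import mean

# Hypothetical serum concentrations (ng/mL); None marks a result below the LOD.
LOD = 0.5
raw = [1.2, None, 0.9, None, 2.4, None, 0.7]

def substitute(values, fill):
    """Replace censored (below-LOD) results with a chosen fill value."""
    return [v if v is not None else fill for v in values]

# Two common conventions for nondetects: zero, and LOD divided by sqrt(2).
as_zero = substitute(raw, 0.0)
as_lod_sqrt2 = substitute(raw, LOD / math.sqrt(2))

print(f"mean with zeros:     {mean(as_zero):.3f}")
print(f"mean with LOD/sqrt2: {mean(as_lod_sqrt2):.3f}")
```

Because neither fill value is the true concentration, summary statistics computed this way carry an irreducible uncertainty that communication about below-LOD results should acknowledge.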
COMMUNICATING HEALTH INTERPRETATIONS OF DETECTED BIOMARKERS
We discuss here issues raised by an inference of health effects when available evidence on health effects of biomonitored substances varies widely in quality, quantity, and applicability to the subject population (see Table 3-1). Depending on the constituency and situation being addressed, investigators’ assumptions about whether and how epidemiologic or toxicologic data support the purpose of the biomonitoring project may need to be communicated. The first topic is the mere observation in study subjects of group II and IV biomarkers, which can be reliably measured in humans but lack biologic-effect or dose-response data. The second topic is comparison of observed biomarker concentrations with reference ranges, which is also likely to involve group II and IV biomarkers. The third topic is comparison with health benchmarks, which ideally will involve human data (groups V, VII, and sometimes VI) but in some cases (for example, biomarker-informed risk assessment) might entail animal data (groups III and sometimes VI). Finally, we discuss clinical practice, which might involve any biomarker group.
Biomarker Presence Implies Neither Health Effects Nor Their Absence
Public Beliefs About Exposure and Health Effects
The sketchy evidence on lay views about relationships between exposure and health effects suggests that there should be some concern about erroneous health inferences that nonscientists might draw from reports that biomarkers have been detected in human tissues. Nonscientists seem to read a wide range of interpretations and content (including content not explicitly included) into small texts about exposure (MacGregor et al. 1999). For example, “when a [mock] newspaper report about a chemical includes the phrase ‘has been found to cause cancer,’ the reader may infer that since only an important and serious finding would warrant publication, typical exposures to the chemical must be widespread, pose a significant risk, and should be a matter of some concern” (MacGregor et al. 1999). The same study found that people have widely varied but usually deterministic beliefs about links between cause, exposure, and effect. At least for carcinogens, which many laypeople do not see as having threshold levels for health risk, “the concept of exposure gains its meaning from both the nature of exposure [such as length of contact] and the perceived seriousness of its consequences” (MacGregor et al. 1999). CDC’s Third National Report on Human Exposure to Environmental Chemicals (2005) cautioned that “concentrations of the chemical are more important determinants of the relation
to disease, when established in appropriate research studies, than the detection or presence of a chemical.” However, 36% of a Portland, Oregon, public sample agreed (contrary to that caution) that “for pesticides, it’s not how much of the chemical you are exposed to that should worry you, but whether or not you are exposed to it at all” (Kraus et al. 1992; McCallum and Santos 1994).
But biomonitoring data offer potential communication advantages over environmental-monitoring data that should not be overlooked. Sexton et al. (2004) note that they yield “unequivocal evidence that both exposure and uptake have taken place.” As a result, state officials said “that human exposure [biomarker] data are often the most valid and persuasive evidence available to demonstrate whether, and to what extent, exposure has occurred or changed over time. In highly charged situations, where community trust has eroded, such data may be the only evidence acceptable to area residents” (GAO 2000). Thus, the “unequivocal evidence [of] both exposure and uptake” provided by exposure biomarkers offers some rare certainty in a biomonitoring field rife with uncertainties. When communicators report “what is known” about biomonitoring results, as suggested earlier in this chapter, this is an important point to make, and it is equally pertinent whether biomarkers are detected or not. However, such an emphasis will need to avoid feeding any automatic assumption of “exposure = health effects” among biomonitoring constituents; as noted later in this section, how to avoid that inference effectively is not yet known and warrants research.
The concerns expressed by some experts and constituents about breast-feeding in the case of the California biomonitoring legislation exemplify the fear that nonscientists will assume that increased (or any) biomarker concentrations in human tissue indicate potential health effects. The extent of the assumption is unclear, although available evidence suggests that it occurs among at least a substantial minority of the public. “If you are exposed to a carcinogen, then you are likely to get cancer” garnered agreement from 36% of the Portland group; 17% said that they didn’t know (Kraus et al. 1992). A similar statement received agreement from 62% (3% “don’t know”) of a national survey of Canadians (Krewski et al. 1995), 43% (13% “don’t know”) of a college student population (MacGregor et al. 1999), and 26% (14% “don’t know”) of an opportunity sample of New Jersey respondents (B. Johnson, New Jersey Department of Environmental Protection, personal communication about Johnson 2002b data). It seems unlikely that that reaction will be less common when the indicator of exposure is biomarker detection. To the extent that such a lay view is at odds with the scientific view that exposure data alone do not indicate health effects, it presents an important challenge.
Communicating About Biomarker Presence
We do not know how to convey the biomarker-presence-does-not-indicate-health-effects message effectively. The anecdotal evidence that large surveillance-study results (such as the National Health and Nutrition Examination Survey, NHANES) no longer excite great public attention except where people identify a possible local source might reflect the efficacy of CDC’s and others’ efforts to convey this message. However, apparent quiescence might just as well reflect lack of salience, poor measures of public concern and action, a (temporary) fatalism because people see no effective means to prevent or reduce such exposures, or a lack of serious effort by institutions to take such actions or inform the public about them. If current no-health-effects messages are not effective, one might do better to apply a “mental-model” approach to developing messages (Morgan et al. 2002). That approach, in this case, entails identifying the beliefs (accurate and important, accurate and trivial, misconceptions, biases, and so on) that constituencies hold about the causal process by which exposure (external or internal) leads to health effects. It would entail intensive interviews that begin with nondirective questions (for example, “Tell me about environmental chemicals in human blood, urine, or breast milk”) followed by questions informed by scientific views of the causal process.7 Later large-scale surveys reveal the relative prevalence of particular biomonitoring views in the population. Messages are then designed to identify incorrect views, explain why they are not correct (Rowan 1994), and provide “correct” views (which, depending on the topic, could be definitive statements or merely clarifications of scientific uncertainties and disagreements and the reasons for them). 
In referring to ‘correct’ views, the committee considers that experts and laypeople can be educated by properly designed “mental models” research, as in exploring expert disagreements (see footnote 7) or in learning about lay beliefs and concerns, such as about exposure pathways (for example, more than one risk assessment has overlooked residential garden vegetables). The committee does not intend this concept to imply only one-way communication and learning about the general public’s views, but supports examining the views of experts, risk managers, and the general public.
For example, many people seem to hold a one-hit “mental model” of carcinogenesis; 58% of Canadians in a national survey disagreed that “the body usually repairs the damage caused by exposure to radiation so that cancer does not occur,” whereas only 31% agreed (Krewski et al. 1995). However, beliefs about what exposure means vary widely, so the proportion of people who felt that exposure to a carcinogen had “definitely” occurred was over 90% for daily smoking of one pack of cigarettes, over 40% for 10 minutes in a smoke-filled room, and 34% for smoking a single cigarette (MacGregor et al. 1999). Explaining the multiple-hit model could help people to understand that a single “exposure” to a carcinogen will not in and of itself cause cancer. Analogies (such as the fact that one exposure to someone with the flu does not inevitably mean getting the flu oneself) also might help if both communicators and their constituencies see the analogies as legitimate comparisons (e.g., Covello et al. 1988; Roth et al. 1990; Johnson 2003b, 2004b). Use of qualifiers (such as the idea of exposure to “an extremely small amount” or instances of small exposures, such as smoking only one cigarette or pumping one’s own gasoline just once) may be needed for most people to agree on exposure magnitudes. The term exposure alone may not be enough to convey the concept (MacGregor et al. 1999).
Expert Disputes About Health Implications
Any such messages must account for expert disagreements on exposure-health linkages if such disputes could affect the credibility or lay understanding of biomonitoring results. The mental-model approach to risk communication was developed on the presumption that experts’ equivalent models would constitute a “gold standard.” Expert consensus and good reason for high confidence occur in many fields, perhaps including some aspects of biomonitoring. Yet expert certainty and consensus are neither universal nor infallible. Although the advocates of the mental-model method acknowledge the need to grapple with expert disagreements (Morgan et al. 2002), we need more effort in this regard (Johnson 2002a). The public does not appreciate disputes among scientists over risks and tends to attribute them to incompetence or self-interest (for example of experts’ employers) rather than to limitations of available evidence (Johnson and Slovic 1995, 1998; Johnson 2003a). Studies of ways to communicate conflicting experts’ views are few (e.g., Renn et al. 1991 on the group Delphi approach), and we do not yet know what will both reduce distrust and increase knowledge. But even implicitly portraying expert opinion on disputed topics as united is likely to have poor results for practical communication and ethical goals. Constituents’ discovery of expert disputes is highly likely, and the perceived coverup will undermine future relations.
Health Effects Cannot Be Ruled Out
The message that biomarker data alone do not indicate health problems is necessary but incomplete and should not stand on its own. Without reasonably definitive data demonstrating the absence or likely absence of health effects due to observed magnitudes of exposure (such as well-done epidemiologic studies of the same or a similar population or reliable benchmarks), such a statement could be correct in denotation but false in connotation. In other words, it would imply that health effects have been ruled out when in fact they had not been sought, that there was no current method for observing or predicting such effects at these levels, or that available data were equivocal. Absence of evidence of effects is not identical with evidence of absence of effects—a distinction that must be clear to constituents. Otherwise, there is a large practical communication and ethical risk attached to simply saying that the presence of chemicals in human tissue does not imply health effects. In many cases, biomonitoring uncertainties will mean that the appropriate scientific conclusion is “high biomarker levels are not necessarily bad, and low levels are not necessarily good.” Empirical research will be needed to determine how to convey that conclusion appropriately, which might require supporting information (for example, on exposure-reduction options) to avoid undue concern or apathy among constituents.
Some people might be confused by that message. For others, it might evoke both comfort and anxiety, perhaps to the extent that anxiety swamps reassurance (Otway and Wynne 1989). Still others will find it difficult to hear the presence-does-not-imply-health-effects message; they would be vigilant about group II and IV biomarker-related health effects even without the caveat about not ruling out such effects. However, the varied responses cannot justify omitting an accurate and useful interpretation.
We have already presented ideas on how to discuss uncertain data and how willingness to discuss uncertainties (including those in health effects) could promote trust. As noted in Chapter 4 on ethical grounds, providing information on steps that people can take to reduce their exposures to a chemical, regardless of uncertainties about health effects, allows them to take action if they so choose. In addition, risk-perception research shows that providing a sense of individual control over potential threats by giving people options for individual action is often a good way to reduce perceived risk as well (e.g., Hance et al. 1988; Slovic 1993). Sometimes, the mere fact that such information is available can reassure people enough to forestall such personal action. Provision of personal-action suggestions, however, may not avoid demands for institutional precautionary action. Other techniques may help to reduce rejection of information that seems threatening. Thinking about one’s own mortality—perhaps fostered in some people by
information that they or people similar to them have chemicals in their bodies—can make people defensive about their values and identities and thus resistant to countervailing information (Jonas et al. 2003). Asking people to deliberate on important values (for instance, asking them to rank several values and then write a little essay on a time in their life that exemplified their most important value) reduced their resistance to applying messages about health-protection behaviors to themselves (e.g., Sherman et al. 2000). Translation of “values affirmation” and other denial-reducing techniques for practical use in public-health communication is worth pursuing.
Miscellaneous Issues for Group II and IV Biomarkers
Two other health issues related to the mere reported presence of biomarkers of environmental chemicals deserve mention here; they involve mixtures and differences in exposures and susceptibilities between populations. First, as detection limits decrease and the number of biomonitored chemicals increases, reports of multiple chemicals in “bodies in general,” if not “in my body,” might arouse concern without regard to concentrations, types, or sources. We do not know to what degree laypeople have mental models that include additive or synergistic adverse health effects of multiple chemicals, but anecdotally it seems unlikely that their mental models include antagonistic or health-enhancing effects of mixtures. Experts have not gathered enough data on effects of mixtures (biomarker-related or not) to have stable, consensual mental models of their own on this topic. CDC does not report personal body burdens of environmental chemicals—partly, it seems, because of concern about precisely this potential perception, partly because the hundreds of chemicals that it now tests for are not tested in every subject. Thus, we are at the mercy of anecdotes and probably incorrect inferences.
If CDC reported the individual body-burden data that it now has (see Chapter 5 recommendation), even with caveats this could help to address communication as well as technical needs. Assuming that expert knowledge about mixtures will be lacking for some time to come, messages need to emphasize how big a problem (if any) such interactions are likely to pose and what efforts are being made (even if not by the specific study at hand) to get more definitive answers. Laypeople will grudgingly accept messages about uncertainty (presumably including effects of mixtures) if they can be assured that something is being done to reduce the uncertainty. Supporting messages would address ever-lower detection limits and expansion of the scope of biomarker surveillance, as with analogies (for example, changing from grosser-weave nets or sieves to finer-weave ones to explain that most, if not all, of the “new” things were probably already there—also see Chapter 5
on historical-use information—but that earlier technology could not capture the smaller “fish” or “particles”). Data are also needed on how laypeople understand or respond to notions of detection limits or the scope of surveillance efforts and on their views on the effects of exposures to chemical mixtures on health.
The second issue for talking about biomarker concentrations is different exposures and susceptibilities across populations (such as those defined by sex, ethnicity, or income); “what is considered ‘healthy’ in some individuals might indicate a health risk in others” (Schulte and Talaska 1995). Experience has shown that the American public is rather familiar with the concept of variability in exposures and susceptibility (see earlier discussion)—for example, people will ask, in effect, whether health standards for drinking water take into account the greater consumption per body weight of infants and children. The limits of expert knowledge about the nature and degree of such variability motivated our recommendation for collection of data on socioeconomic status as part of biomonitoring studies (Chapter 4). Communication will need to take account of what is known so that messages (such as the biomarker-presence-does-not-mean-health-effects message) are sent only to populations for which they are valid. Yet, in general, there is no reason at this point to presume that communication with women, ethnic minorities, or other populations, which might have different exposures or susceptibilities from those of the general population, should differ from communication with any other constituents. Respect, attention to constituent concerns and questions, and careful crafting of messages to address constituent beliefs work with any constituency, including those whose particular exposures or susceptibilities should be kept in mind in interpreting biomarkers’ health implications.
Comparisons with Reference Ranges
The next step up from the no-necessary-health-effects message in communicating about biomarker detection is to compare detected concentrations with reference ranges. This also occurs for group II and IV biomarkers but depends on the added availability of studies in reference populations. Strictly speaking, such comparisons concern relative exposures rather than health effects (see Chapter 5); however, in the absence of relevant health information, it appears to be a common strategy to use them to imply the likely absence of health effects. If subjects in a given study exhibit concentrations lower than, or at least no higher than, a reference range, by implication (according to this approach) they should not exhibit unusual health effects relative to the reference population. CDC found that people seem to appreciate learning that “your value [or the mean value for the demographic group to which you belong] falls below what 95 out of 100 people
have” or “you have values three times those of the general population but [more or less than] this occupationally exposed group” (J. Pirkle, CDC, personal commun., May 16, 2005).
Despite the reported appreciation, there are dangers in transmitting such messages. For example, Sexton et al. (2004) asserted that “if average levels among the cohort are similar to those of the general public, then the group’s exposure is unlikely to cause unique problems.” Yet similarity of averages (and of which kind?) in two groups does not necessarily denote similar distributions of exposures. Nor does an inference that problems among study subjects are unlikely to be “unique” mean that no problems should be expected in response to that exposure in either group. “You’re no worse off than anyone else” is not a conclusion that will or should reassure everyone.
Biomonitoring publications variously define a reference population as having “no,” “only minimal,” “some,” “nonoccupational,” or “typical” exposure to the target substance (e.g., Pirkle et al. 1995; Schulte and Talaska 1995; GAO 2000). Depending on the substances and populations involved, the resulting reference ranges could differ widely and thus promote quite different inferences about results in the study population. In particular, the implication that below- or within-range concentrations of tested toxicants in a study sample are free of adverse health outcomes is more plausible the more demonstrable it is that the reference population lacks substantial exposure. Thus, it is important for the nature of population sampling and exposure data (if any) to be made clear in communication.
The NHANES sample is generally drawn to reflect the national population as a whole, with external exposures neither measured nor used as part of the sampling frame. That makes it somewhat problematic as the basis of reference ranges to which other samples might be compared. Without exposure measures or reasonable dose-response data or other health benchmarks, it is difficult to conclude that people at the higher end of the NHANES range are “out of trouble” with regard to potential health effects and thus that only study subjects above that range merit concern. CDC has decided that this problem can be avoided by using the 95th percentile as the basis of comparison. In a population as large as that of the United States, that means that more than 5% of 297 million people (as of the middle of 2005), or nearly 15 million people, would have to be exposed to a contaminant source in order to be “in trouble” within the NHANES reference range (J. Pirkle, CDC, personal commun., May 16, 2005). If the sources of a given chemical are concentrated, this is and should be reassuring with respect to relative exposure of biomonitoring subjects who are below the reference range’s 95th percentile. However, Chapters 4 and 5 discuss several problems with an exclusive focus on the 95th or high percentiles of reference populations, as well as with reference ranges in general, that should be accounted for in communication on this topic; for example,
reporting the low end of the distribution can be informative for communicating exposure differences, risk-reduction potential, and such factors as excretion-rate capacity that might affect biologically effective doses.
In short, biomonitoring communicators should not extrapolate from relative exposure to the absolute likelihood of adverse health effects. Comparison of study with reference-population biomarker concentrations cannot be informative about such likelihood, for reasons discussed at length in Chapter 5. At best, such comparisons can inform only about relative likelihood, and even then caution is warranted.
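The relative-exposure comparison discussed in this section can be made concrete with a brief sketch. The Python below uses hypothetical reference-population values; the 95th-percentile cutoff and the classification wording are illustrative, not a recommended reporting format, and the comment in the code repeats the caveat above that such comparisons speak only to relative exposure, not to the likelihood of health effects.

```python
from statistics import quantiles

# Hypothetical reference-population biomarker results (ug/L), e.g. from a survey.
reference = [0.2, 0.4, 0.5, 0.7, 0.8, 1.0, 1.1, 1.3, 1.6, 2.0,
             2.3, 2.7, 3.1, 3.6, 4.2, 4.9, 5.8, 7.0, 8.5, 11.0]

# 95th percentile of the reference range (100 quantile cut points; index 94 = 95%).
p95 = quantiles(reference, n=100)[94]

def relative_exposure(value, cutoff):
    """Classify a subject's result relative to the reference 95th percentile.

    This says something about relative exposure only; it is not informative
    about the absolute likelihood of adverse health effects.
    """
    return ("above reference 95th percentile" if value > cutoff
            else "within reference range")

for subject in (1.5, 9.9, 14.0):
    print(subject, "->", relative_exposure(subject, p95))
```

Note that the computed cutoff depends on the sampling of the reference population and on the interpolation method used, which is one more reason to present such comparisons with the caveats described above.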
Comparisons with Benchmarks
This section discusses communication issues raised by group V-VII biomarkers (Chapter 3), in which some indicator of concentrations at which health effects do or do not appear could be used as a benchmark for the observed biomarker concentrations.
As discussed in Chapter 5, benchmarks based on relevant populations, health end points, and internal doses (or plausible external doses) can be beneficial to study subjects and other concerned publics in evaluating individual and group biomonitoring results. For example, benchmarks could help to dampen health concerns that might otherwise be unduly high because of default lay beliefs about links between chemical body burdens and health outcomes. Conversely, exceedances of such benchmarks can be a signal for more attention and perhaps exposure reduction and other protective actions.
However, the earlier discussion highlighted technical weaknesses of the use of some benchmarks (such as undue extrapolation from occupational benchmarks to more general populations) that raise communication concerns and would need to be explained if the benchmarks were used. For example, CDC has mentioned Biological Exposure Indices (BEIs) for substances for which they are available, warning that these occupational values are “not appropriate” for the general population and are provided “for comparison, not to imply that the BEI is a safety level for general population exposure” (CDC 2005). That is a subtle distinction that may escape many constituencies. People tend to presume that they are being given information (whether in a conversation or in a government report) because it is useful (see also the discussion above of lay reactions to mock news stories about exposures). Many are likely to infer that they could be given such a comparison only because the BEI says something about the likelihood of health effects or about their need to worry. For a few constituents (such as those with physical conditions and health histories similar to those of workers), the BEI comparison might be worthwhile. For others, the communication costs of using a benchmark of uncertain relevance are likely
to outweigh the benefits of providing the BEI as perhaps the only available benchmark for health effects.
The question of whether to treat a benchmark value derived from human or animal data on the relation between a biomarker and a biologic response as a “bright line” (for example, between “safe” and “unsafe”) will remain contentious. In clinical and environmental communication practice, some physicians and officials feel no compunction about using reference values as just such bright lines. Some nonscientists appreciate such conclusiveness and do not wish to hear about uncertainties and other warnings (see above uncertainty discussion). However, treatment of a benchmark as sharply demarcating good and bad conditions is technically false, and doubt about official statements increases if they stress safety rather than danger (e.g., Weinstein 1986; Kraus et al. 1992; Siegrist and Cvetkovich 2001; Johnson 2003c; White et al. 2003). There is a difference between use of a benchmark for regulatory or litigation purposes, in which the bright-line approach is warranted, and its use for purposes without a structural context that requires definitive conclusions. For example, for external doses it is a legal violation when a utility’s water exceeds a drinking-water standard, as determined by mandated measurement protocols. Being ambivalent or ambiguous about what constitutes proper and improper conditions here defeats the purpose of efficient and equitable enforcement of the rule. However, for purposes of explaining potential health consequences, the standard is less clear. It may not have been defined entirely by avoidance of health outcomes (as when there are problems of detection limits or technologic or economic feasibility), and there are differences in standard-setting between carcinogenic and noncarcinogenic contaminants. Thus, even in the regulatory realm it can be misleading to use a benchmark to distinguish between “safe” and “unsafe” exposures for purposes of communication, however appropriate or prudent this use might be for legal purposes.
The bright-line approach to biomonitoring communication is likely to be even more problematic than extrapolation from animal data, the use and calculation of thresholds for noncarcinogens, and the avoidance of thresholds for carcinogens characteristic of benchmark-setting based on external doses (particularly because for some time to come biomarker benchmarks will come from external-dose assessments). Use of “acceptable risk” as an official interpretation of below-benchmark exposure can outrage people who object to having someone else decide what is “acceptable” for them. This is different from government deciding at what concentration no further action by government is needed. A related problem is that people interpret the phrase very low probability so variously that it may be as likely to confuse or alarm as to reassure (MacGregor et al. 1999). Empirical testing of methods to convey these concepts without falling into such communication traps is needed. For above-benchmark exposure, use of messages (as appropriate) that expert uncertainty about health effects diminishes as exposure gets further above the benchmark and that a carcinogen proved in animals may not be proved in humans may help to forestall undue expression of the bright-line syndrome among biomonitoring constituents.
Use of benchmarks is complicated by our ignorance of how lay constituencies perceive them (Johnson and Chess 2003). Among working-class residents near New Jersey factories, post-treatment concentrations of a contaminant at 95% and at 50% of the health standard prompted equally high concern. Sketchy findings from other populations suggest that most people are reassured by external-dose exposures (concentrations of substances in ambient air or drinking water) below those allowed by standards, but a substantial minority are concerned even by below-standard exposures. In contrast, although most people see above-standard exposures as at least potentially harmful to health, a minority is not concerned about such exposures, at least not if they are only slightly above the standard. Research (B. Johnson, New Jersey Department of Environmental Protection, personal commun., 2005) with more highly educated New Jersey residents confirmed that a substantial proportion exhibited maximum concern below the standard (for example, 24% at half the standard, 39% at 95% of the standard, and 48% at the standard). That study also revealed an unexpected additional perspective: an optimal range of concentrations brackets the standard, with both higher and lower values being of more concern, and these study participants wanted to know the values defining the optimal range rather than just hear a single number. Measures (direct and indirect, qualitative and quantitative) of the frequency of this “optimal” view were inconsistent, and none of the studies involved random national samples. However, for our purposes, the main point is the diversity of public views on benchmarks and their divergence in many cases from the views of scientists and officials, underlining the need for the committee’s research recommendations at the end of this chapter.
Reasons for the disparate views have not been explored systematically. They might include distrust in government and other institutions in general, concern about uncertainty in measurement of environmental exposure or in calculation of dose-response relationships, experience in seeing standards revised downward (but never upward), belief that standards incorporate such “illegitimate” considerations as cost and detection limits as well as health effects, and suspicion that expert disputes over standards or risk estimates reflect incompetence or employer self-interest rather than limits of scientific knowledge (Johnson and Chess 2003; Johnson and Slovic 1998; Johnson 2003a). The optimal-range belief might be based on an analogy with medical blood-test results or with nutritional experience (such as minimal daily requirements of vitamins combined with toxicity at higher doses);
the secondary (nonregulatory) standard for pH in drinking water is an optimal range, although few people would know this.
Whatever the reasons for those views, biomonitoring messages must address potential constituent concerns and offer honest explanations as alternatives to the more skeptical ones some members of biomonitoring constituencies may produce on their own. Distrust might be handled by referring people to information sources that are seen as independent and honest. Another trust-building technique is to give people more direct control over information or protection (as in personal radiation dosimeters distributed to neighbors of Japanese nuclear power plants or the personal exposure-reduction advice discussed below) when feasible. Part of public concern about below-benchmark exposures in particular might stem from belief that institutions do not share the precautionary, risk-averse values of their constituencies with regard to the largely involuntary hazards that laypeople see environmental chemicals as representing. The institutions might consider ways to represent their precautionary stance honestly (for example, in exposure-reduction efforts, as discussed below) as a means to make biomarker benchmarks more credible, in addition to such communication alternatives as educating people about natural and other nonindustrial sources of environmental chemicals.
Recent papers on European attitudes toward real or hypothetical precautionary measures regarding health risks posed by mobile-telecommunication handsets and towers or base stations (Timotijevic and Barnett 2006; Wiedemann and Schütz 2005) suggest that such measures might not reassure concerned publics and might even raise perceived risk by serving as cues that the risks are real. Although anecdotal experience and trust research (e.g., White and Eiser, in press) suggest that this will not be a universal reaction to precautionary approaches to environmental biomarker findings, the European results imply two warnings. First, constituent reactions to proposed precautions, as well as constituent suggestions for appropriately reassuring precautions, should be explored before precautions are announced or implemented. Second, constituent views of precautions are likely to vary both in kind (for instance, “things must be really bad if they’re actually doing something” vs “this is the protection we expect from officials”) and among action types or substances, so communication on this topic must be prepared for diversity.
Caution should be used in relying on benchmarks as a means to put biomonitoring findings into a health context. Van Damme and Casteleyn (2003) made the following comment about occupational health, but it applies as well to nonworker situations:
Health protection cannot always be ensured by simply complying with one or a few biological limit values. Health status of an individual worker
is the result of the integrated effect of many variables, one of which is the exposure to toxic substances in the workplace. A reductionistic approach will fail to offer appropriate protection.
The challenge is complicated by the possibility that there will be a set of potential “comparison values” for a given substance or that a benchmark may be useful for some applications but inappropriate for others (see Chapter 5). Explanation of other potential benchmarks and why they are not suitable in a given case may be warranted; in some cases (such as local or national surveillance studies), partnership with other entities (Chapter 4) may help to secure agreement on which benchmarks to apply and thereby reduce disputes over interpretation of results.
Interpreting Biomarker Findings in the Context of Clinical Data
The best opportunities for communicating health implications of biomonitoring data arise when an unequivocal internal dose-response relationship has been established for humans (by methods discussed in Chapter 5) or when a clinician has data on a person’s health that can be used for context-setting. The first case applies primarily to group VII biomarkers (and, with caveats, to some group VI examples); the second case extends to group V biomarkers.
It is challenging to extrapolate general surveillance data, particularly when health-effects data are limited, to individual risk estimates without the genetic, external-exposure, lifestyle, and other data that a clinician could use to adjust population risk estimates for individual cases. As a result, previous comments about communication difficulties with respect to health implications apply far more to surveillance studies than to the interaction of a personal or occupational physician with an individual patient or worker on whom the physician also has extensive nonbiomarker information. For example, clinicians might in rare cases determine that biologic exposure indices (BEIs) are appropriate comparison values for a specific patient’s biomarker concentrations.
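The kind of context-setting a clinician might perform — placing an individual’s biomarker concentration against a population reference range and, where one exists, an occupational comparison value — can be sketched as follows. This is an illustrative sketch only, not a clinical tool; the function name, units, and all numeric values are hypothetical, and neither comparison should be read as a bright line between “safe” and “unsafe.”

```python
# Illustrative sketch: placing an individual's biomarker result in context.
# All names and numbers here are hypothetical examples, not real clinical
# guidance and not actual BEI values.

def interpret_biomarker(value_ug_per_l, ref_p95, bei=None):
    """Describe a measurement relative to a population 95th percentile
    and, optionally, an occupational comparison value (a BEI)."""
    notes = []
    if value_ug_per_l <= ref_p95:
        notes.append("within the range seen in the reference population")
    else:
        notes.append("above the reference population's 95th percentile")
    if bei is not None:
        if value_ug_per_l < bei:
            notes.append("below the occupational comparison value")
        else:
            notes.append("at or above the occupational comparison value")
    # Neither comparison settles the health question by itself;
    # interpretation still requires clinical and exposure context.
    return "; ".join(notes)

print(interpret_biomarker(3.2, ref_p95=2.5, bei=10.0))
```

Note that the sketch deliberately returns a description rather than a verdict: a value can exceed a population percentile while remaining far below an occupational comparison value, which is exactly the ambiguity the surrounding text says communicators must be prepared to explain.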
The greater ease of clinical communication about biomarkers and health than of other biomonitoring communication comes with several caveats. First, not all clinical communication involves people who were study subjects; for example, announcement of local (if not national) surveillance results might prompt members of the wider population to visit their doctors for consultation. The experience of environmental-risk assessors in communicating the distinction between population risks (the usual focus of risk estimates) and individual risks does not augur well for either professionals’ ability to communicate the difference well or constituents’ ability to comprehend. Second, some people subject to biomonitoring (including those
who order biomarker tests themselves or through their doctors) may have been subject to lower-quality tests or less-informed consent, which may have presented their physicians with challenges that the doctors had trouble recognizing. Third, most doctors are notoriously ignorant about environmental exposure and health issues (e.g., American College of Physicians 1990; Grupenhoff 1990; Goldman et al. 1999; Wynn et al. 2003), and increasing pressures on their time in medical school and practice offer little hope for swift resolution of this problem. Biomonitoring, because it deals with internal doses, might have a better but still small chance at gaining doctors’ attention and comprehension than other environmental-health topics. Bates et al. (2005) cite some helpful resources for physicians, such as medical toxicologists (ACMT 2006) and pediatric environmental-health specialty units (ATSDR 2005), but efforts must be made to make doctors aware of them and to use them. Fourth, communication between doctor and patient is often problematic, even without the time pressures of the current U.S. clinical visit. There is a growing literature on doctor-patient communication problems and solutions (e.g., Rimer and Glassman 1998; Schwartz et al. 1999; Alaszewski and Horlick-Jones 2003; Maynard 2003; O’Connor et al. 2003; Paling 2003), but it will take time for this literature to influence clinical practice.
There is no easy recipe for good biomonitoring communication, even if we were dealing with only one kind each of population, biomarker, health effect, reference range or benchmark, exposure pathway, exposure source, biomonitoring study, initial communicator, and constituency and if we had good information on each of these. Given that those conditions do not apply and given the dangers of extrapolating unduly from seemingly similar situations, we neither encourage nor promulgate any such recipes. Situation-specific, empirically driven understanding and testing of communication options are vital. However, implementation of several general practical and research recommendations also would enhance the practice of biomonitoring communication. These are listed in rough order of priority for practice and research, respectively.
The research proposed in the next section is critical for systematic development and evaluation of improved biomonitoring communication. However, even without such research, effective implementation of the practical recommendations listed below would go a long way toward improving both the performance and the comfort level of biomonitoring communicators.
Promote Communication Funding and Good Practice. All too often in current biomedical and environmental research and practice, no attention is given by sponsors to communication issues or funding, so the proposals they receive for studies also ignore them (for example, McCallum and Santos 1996). By implication, communication is either relegated to institutional review board review of informed-consent forms or is to be performed (without funding, training, expertise, or planning) in the interstices of the technical tasks of the project. This action by omission is a recipe for bad communication. Without strong institutional support for communication planning and evaluation in individual studies and without development of communication infrastructure generally, biomonitoring communication will become at best unhelpful and at worst a barrier to effective interpretation and use of biomonitoring data. Occasional creative solutions will die for lack of support and dissemination. Biomonitoring sponsors of all kinds (agencies, corporations, foundations, activist groups, and so on) should take the lead in promoting and funding communication. Sponsors should require explicit planning of communication (Chapter 4); study-specific evaluation of communication (Chapter 4); documentation of communication methods, messages, evaluation methods, and results; and wide distribution of communication materials and findings to the biomonitoring community (not only peer-reviewed academic publications) so that each study need not start from scratch. In a more ambitious approach to information-sharing, a sponsor or consortium of sponsors would establish a biomonitoring-communication database to be maintained and updated for the benefit of the national and international biomonitoring community.
In addition to its practical uses, such a database would allow biomonitoring-communication researchers to conduct retrospective analyses or to set up cooperating networks of practitioners for prospective research (for example, testing one kind of message against another).
Use Consistent Terminology and Concepts. Consistency of usage is needed within and between projects (see above on varied definitions of reference populations, for example). This is a recommendation that can be implemented quickly and relatively easily. Ultimately, this should become part of a larger effort to train various constituencies on what biomonitoring can and cannot tell one about environmental chemicals in humans. Both efforts are vital if there is to be any hope of establishing a minimum of shared knowledge among constituencies so that communicators eventually will not need to recapitulate the entire spectrum of education for each new project.
Expand Biomonitoring Education for Constituents. Depending on educational gaps identified in proposed research, appropriate institutional actors, such as government agencies and university staff, should provide simple, standard background information that will help people to understand how to interpret biomonitoring results that might appear in the mass media and to decide whether and how to pursue informative independent biomonitoring of their own environmental-chemical body burdens. This generic effort will complement project-specific use of consistent terminology and concepts. The usual warnings about systematic development and evaluation of communication apply to these educational materials.
Support Communication Training. Communication training for institutions, organizations, and professionals is particularly important, and it will become more vital once research makes such training more than drilling in communication “principles.” Personal and institutional barriers to good practice by doctors, officials, and experts need to be addressed, and training is necessary, but not sufficient, for that task. If partnerships (Chapter 4) with research subjects or other people are envisioned, all parties (including citizens) need some time to learn, both individually and collectively, “how to do it.” Practices that “work” in other contexts, such as conventional public hearings, do not help much here, and citizens are as unfamiliar with appropriate behavior in partnerships as are institutions (Renn et al. 1995). Yet training is as neglected as any other communication issue in environmental practice or funding. Anecdotal data suggest that people assume that communication is either something anyone can do without training or obtainable from press-office advice; neither is true.
Document Risk-Reduction Options. Given the potential importance of exposure-reduction actions for both ethics and communication and for both citizens and officials, it is important that there be documentation and wide distribution of information about steps that individuals, communities, and private and public organizations could take to reduce external exposures that might or do contribute to observed biomarker concentrations. Good communication practices mentioned elsewhere in this chapter and in Chapter 4 are as vital to clear and credible communication of exposure-reduction information as to communication of any other biomonitoring topic. That information should include, whenever available, sources of each environmental chemical and their relative contributions; types of exposure-reduction actions that individuals, households, and institutions could take; and absolute and relative strengths and weaknesses of the actions, such as effectiveness for a given source, cost, and cost-effectiveness. The information must be updated to account for new research, innovative technology, and social changes. CDC’s biennial reports on human exposure to environmental chemicals now include information on each chemical’s uses and sources but too generically to be much more than a start for deciding whether and how to act. Exposure-source and -reduction information is unlikely to come from biomonitoring projects themselves in most cases, so other researchers and institutions must provide information on exposure reduction that will be useful for biomonitoring-study design, communication, and ethics. The aim of the documentation is not to endorse either exposure reduction in general or specific actions but to help people to identify quickly whether and which exposure-reduction actions, collective or individual, might be appropriate in a given situation. Ideally, there will be a central clearinghouse for such exposure-reduction information because a more laissez-faire approach to the distribution of relevant information might not match the need to know and could create communication and ethical problems. Study managers must adapt any general advice to their own cases, so we urge managers to include discussion of their adaptations in their documentation and distribution of communication efforts, so that future biomonitoring studies can benefit from the creativity of their predecessors.
All the following recommendations are expected to be valuable for the advancement of biomonitoring communication. However, the first three listed should be particularly fruitful because they should be mutually reinforcing: knowing the biomonitoring-related beliefs of communicators and constituents allows evaluation of current and development of alternative communication messages, and evaluation of current and alternative messages feeds back into understanding of what people believe and thus how to improve communication.
Identify Mental Models of Exposure and Health Effects. The direct and indirect links between external dose, biomarker concentrations, and biologic effects (Figure 3-1) are the core of both exposure biomonitoring and biomonitoring-communication research. Probing for beliefs about specific subtopics (such as chemical mixtures’ effects on health, pharmacodynamics, and variability in susceptibility) pertinent to biomonitoring communication will need to be part of the effort. We need a better grasp of the mental models of the linkages held by all parties to biomonitoring communication. Studies of the views of lay, “general-public” constituents are important, but so are those of experts, institutional risk managers, and others. That expansive approach is justified by the need to know the expert consensus on causal linkages as a “gold standard” for building messages about the links; any expert disputes that will need explanation to nonexperts (and, perhaps, foster steps toward scientific resolution of the disputes); how, if at all, those who are neither experts nor the general public (such as most institutional officials interested in biomonitoring) differ in their views of causal linkages; and how communicators’ similar or different beliefs about the linkages will affect whether communication succeeds or fails. That information will inform both experts and lay people about technical and nontechnical issues related to biomonitoring, and will determine whether it
is feasible to develop and evaluate generic, rather than project-specific, communication designs.
Assess Current Biomonitoring Communications. Research to identify the current nature and scope of biomonitoring-related communications by various organizations, including retrospective analyses of generic and project-specific informational materials, will be a vital complement to the prospective evaluation of new project communications recommended in Chapter 4. For example, the apparently growing phenomenon in which individuals contract with a testing laboratory to measure biomarkers in their urine or blood independently of any formal study (Chapter 2) raises communication concerns. Most people are unlikely to have the background to know what to demand of laboratories in terms of tests, quality control, or interpretation, and it cannot be expected in this for-profit, narrow-margin sector that the laboratories themselves will undertake thorough communication efforts. But without systematic analysis, we do not know whether or what deficits exist in current communication by laboratories, so we cannot work to correct them. Similarly rigorous analyses of biomonitoring-related communication by citizen activists, university scientists, industry (for both occupational and environmental issues), and environmental and public-health agencies at local, state, and federal levels of government are also needed. Despite its increasing experience, for example, even the mass-media strategies of CDC in announcing results of its national surveillance reports might be improved (and inform the work of others) if given careful study. Inconsistencies and gaps among the various organizations’ efforts can be identified as targets for remediation or explanation. Furthermore, experience shows that communications may be at odds not only with the beliefs of their intended audiences but also with the mental models of the communicators themselves.
That is, a communicator might aim at reassurance and clarification, but the actual communication materials may fail to convey reassurance, clarity, or topical relevance even to sympathetic colleagues, much less to intended constituents. Thus, comparison of such materials with the mental models of originators and recipients can be informative.
Identify Reactions to and Effective Messages About Uncertainty. Some relevant topics (such as trust) will continue to be the subject of considerable research outside the biomonitoring field and might in turn be applicable to biomonitoring efforts without much adaptation or supplementation. That is unlikely to be true of the perception of and communication about uncertainty, which despite its centrality to environmental-health issues generally has attracted little researcher attention in the last few decades. Furthermore, only some aspects of biomonitoring-relevant uncertainty are shared with other health or environmental topics. If biomonitoring sponsors do not fund this critical research, it is unlikely to be sponsored by others.
Particular questions important to biomonitoring include these:
Which of the myriad uncertainties in biomonitoring are of most concern or most difficult to understand? For example, are laypeople more interested in reduction of uncertainties about biomarker-effects relations or exposure-biomarker relations?
How can these uncertainties best be explained—for example, with verbal vs numeric formats (PCCRARM 1997) or with Carpenter’s (1995) four questions?
How can alternative lay explanations for uncertainties (such as incompetence or self-interest of experts) best be addressed?
What are the best means to convey that exposure need not mean health effects, given existing lay beliefs about the exposure-health link? What are the best means to convey that low or typical biomarker concentrations do not rule out health effects?
Can values affirmation and other techniques to reduce resistance to messages of personal relevance (such as exposure and exposure reduction) be applied in nonlaboratory situations and populations?
How do comparisons with necessarily uncertain reference ranges (for example, “less than 95% of the population”) affect beliefs about exposure, or comparisons with necessarily uncertain benchmarks affect beliefs about health effects, and thus in turn affect the credibility of biomonitoring communication?
Do discussions of uncertainty affect judgments of the discussant’s honesty and competence differently, depending on whether action or inaction is proposed as a consequence of the uncertainty?
What are the best ways to communicate biomonitoring results when science is unable to determine any interpretation of the data (such as health effects of mixtures)?
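One of the questions above concerns “necessarily uncertain reference ranges”: a 95th percentile estimated from a finite survey sample is itself a statistical estimate that varies from sample to sample. A minimal sketch, using simulated (not real) biomonitoring data and a simple bootstrap, shows how a plausible range can be put around the percentile itself; the sample size, distribution, and interval choices are illustrative assumptions.

```python
# Illustrative sketch of why a reference range is "necessarily uncertain":
# the 95th percentile estimated from a finite survey varies with the sample.
# Data here are simulated, not real biomonitoring results.
import random

random.seed(0)
# Mock survey of 500 people; biomarker concentrations are often roughly
# lognormal, so a lognormal draw is used as a stand-in.
population = [random.lognormvariate(0.0, 1.0) for _ in range(500)]

def p95(sample):
    """Simple empirical 95th percentile."""
    s = sorted(sample)
    return s[int(0.95 * len(s)) - 1]

# Bootstrap: resample the survey with replacement to see how far the
# percentile estimate moves from resample to resample.
estimates = []
for _ in range(1000):
    resample = [random.choice(population) for _ in population]
    estimates.append(p95(resample))

estimates.sort()
low, high = estimates[25], estimates[974]  # rough central 95% of estimates
print(f"95th percentile: {p95(population):.2f} "
      f"(plausible range {low:.2f}-{high:.2f})")
```

The point for communicators is that even the comparison value “less than 95% of the population” carries sampling uncertainty, so messages built on such comparisons inherit that uncertainty.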
Identify Mental Models of Exposure Reduction and Risk Management. The mental-models method was developed to identify how experts and lay constituencies conceive of causal links in the development of hazards, including the exposure-effects link that is so central to biomonitoring, and how these conceptions, and differences among them, might affect risk communication. One of the goals of systematically identifying potential reasons for communication failures was to develop messages that would help laypeople accurately identify institutional or personal actions that would prevent such effects. The method is technically capable of identifying relevant beliefs about risk management as well, but its advocates have done little in this direction despite evidence that such beliefs could have a dominant effect on risk judgments. Although, as mentioned earlier, related trust research does not depend entirely on biomonitoring
funding, additional work will be needed to make some of it applicable to that field. For example, such research often uses abstract institutional stimuli, such as “the federal government” or “information provision,” that are not useful (Earle and Cvetkovich 1995), and biomonitoring includes actors (such as private laboratories) that are rarely included in these studies. Probing for exposure-reduction and risk-management beliefs will identify whether precautionary exposure-reduction action by institutions or exposure-reduction advice to individuals reduces or increases judged-risk magnitude, concern, or trust among the various constituencies involved. It will also identify effects of lay and expert concepts of detection limits and scope of biomonitoring surveillance on communication.
Given the central role of communication in the success of interpretation and use of biomonitoring data, but high uncertainty about what makes for effective biomonitoring communication, building infrastructure and research in this field must have high priority for biomonitoring funders and investigators. Without that priority, the whole field of biomonitoring could fail to advance.
ACMT (American College of Medical Toxicology). 2006. American College of Medical Toxicology, Fairfax, VA [online]. Available: http://www.acmt.net/main [accessed April 4, 2006].
Alaszewski, A., and T. Horlick-Jones. 2003. How can doctors communicate information about risk more effectively? BMJ 327(7417):728-731.
American College of Physicians. 1990. Occupational and environmental medicine: The internist’s role. Ann. Intern. Med. 113(12):974-982.
Andrews, C.J. 1998. Giving expert advice. IEEE Technol. Soc. Mag. 17(2):5-6.
ATSDR (Agency for Toxic Substances and Disease Registry). 2001. A Primer on Health Risk Communication Principles and Practices [online]. Available: http://www.atsdr.cdc.gov/HEC/primer.html [accessed Nov. 30, 2005].
ATSDR (Agency for Toxic Substances and Disease Registry). 2005. Pediatric Environmental Health Specialty Units (PEHSU). U.S. Department of Health and Human Services, Agency for Toxic Substances and Disease Registry [online]. Available: http://www.atsdr.cdc.gov/HEC/natorg/pehsu.html [accessed April 4, 2006].
Balch, G.I., and S.M. Sutton. 1995. Putting the first audience first: Conducting useful evaluation for a risk-related government agency. Risk Anal. 15(2):163-168.
Bates, M.N., J.W. Hamilton, J.S. LaKind, P. Langenberg, M. O’Malley, and W. Snodgrass. 2005. Workshop report: Biomonitoring study design, interpretation, and communication—Lessons learned and path forward. Environ. Health Perspect. 113(11):1615-1621.
Becker, R.A. 2005. Trace Chemicals in the Human Body: Interpreting Biomonitoring Data in a Risk Context. Presentation at the First Meeting on Human Biomonitoring of Environmental Toxicants, March 21, 2005, Washington, DC.
Brown, R.V. 1985. Presenting Risk Management Information to Policymakers: Executive Summary. Technical Report 85-4. Falls Church, VA: Decision Science Consortium, Ltd.
Carpenter, R.A. 1995. Communicating environmental science uncertainties. Environ. Prof. 17(2):127-136.
CDC (Centers for Disease Control and Prevention). 2005. Third National Report on Human Exposure to Environmental Chemicals. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, Atlanta, GA [online]. Available: http://www.cdc.gov/exposurereport/3rd/pdf/thirdreport.pdf [accessed Sept. 26, 2005].
Covello, V.T., and F.W. Allen. 1988. Seven Cardinal Rules of Risk Communication. OPA-87-020. Office of Policy Analysis, U.S. Environmental Protection Agency, Washington, DC. April 1988.
Covello, V.T., P.M. Sandman, and P. Slovic. 1988. Risk Communication, Risk Statistics and Risk Comparisons: A Manual for Plant Managers. Washington, DC: Chemical Manufacturers Association.
Cvetkovich, G., and P.L. Winter. 2003. Trust and social representations of the management of threatened and endangered species. Environ. Behav. 35(2):286-307.
Cvetkovich, G., M. Siegrist, R. Murray, and S. Tragesser. 2002. New information and social trust: Asymmetry and perseverance of attributions about hazard managers. Risk Anal. 22(2):359-367.
Duggan, A. 2005. CropLife America (CLA) Comments. Presentation at the First Meeting on Human Biomonitoring of Environmental Toxicants, March 21, 2005, Washington, DC.
Earle, T.C., and G.T. Cvetkovich. 1995. Social Trust: Toward a Cosmopolitan Society. Westport, CT: Praeger.
Edwards, J.A., and G. Weary. 1998. Antecedents of causal uncertainty and perceived control: A prospective study. Eur. J. Pers. 12(2):135-148.
Einsiedel, E., and B. Thorne. 1999. Public responses to uncertainty. Pp. 43-57 in Communicating Uncertainty: Media Coverage of New and Controversial Science, S.M. Friedman, S. Dunwoody, and C.L. Rogers, eds. Mahwah, NJ: Lawrence Erlbaum Associates.
Frewer, L.J., C. Howard, and R. Shepherd. 1998. The influence of initial attitudes on responses to communication about genetic engineering in food production. Agr. Hum. Values 15(1):15-30.
Frewer, L.J., S. Miles, M. Brennan, S. Kuznesof, M. Ness, and C. Ritson. 2002. Public preferences for informed choice under conditions of risk uncertainty. Public Underst. Sci. 11(4):363-372.
Frewer, L.J., S. Hunt, M. Brennan, S. Kuznesof, M. Ness, and C. Ritson. 2003. The views of scientific experts on how the public conceptualize uncertainty. J. Risk Res. 6(1):75-85.
Furnham, A., and T. Ribchester. 1995. Tolerance of ambiguity: A review of the concept, its measurement and applications. Curr. Psychol. 14(3):179-199.
GAO (U.S. General Accounting Office). 2000. Toxic Chemicals: Long-term Coordinated Strategy Needed to Measure Exposures in Humans. GAO/HEHS-00-80. U.S. General Accounting Office, Washington, DC. May 2000 [online]. Available: http://www.gao.gov/new.items/he00080.pdf [accessed Dec. 1, 2005].
Goldman, R.H., S. Rosenwasser, and E. Armstrong. 1999. Incorporating an environmental/occupational medicine theme into the medical school curriculum. J. Occup. Environ. Med. 41(1):47-52.
Grupenhoff, J.T. 1990. Case for a National Association of Physicians for the Environment. Am. J. Ind. Med. 18(5):529-533.
Hance, B.J., C. Chess, and P.M. Sandman. 1988. Improving Dialogue with Communities: A Risk Communication Manual for Government. Trenton, NJ: New Jersey Department of Environmental Protection.
Helsel, D.R. 1990. Less than obvious: Statistical treatment of data below the detection limit. Environ. Sci. Technol. 24(12):1766-1774.
Johnson, B.B. 2002a. Risk communication: A mental models approach [book review]. Risk Anal. 22(4):813-814.
Johnson, B.B. 2002b. Stability and inoculation of risk comparisons’ effects under conflict: Replicating and extending the ‘Asbestos Jury’ study by Slovic et al. Risk Anal. 22(4):789-800.
Johnson, B.B. 2003a. Further notes on public response to uncertainty in risk and science. Risk Anal. 23(4):781-789.
Johnson, B.B. 2003b. Are some risk comparisons more effective under conflict? A replication and extension of Roth et al. Risk Anal. 23(4):767-780.
Johnson, B.B. 2003c. Do reports on drinking water quality affect customers’ concerns? Experiments in report content. Risk Anal. 23(5):985-998.
Johnson, B.B. 2004a. Erratum to “Further notes on public response to uncertainty in risks and science” by Branden B. Johnson, in Risk Analysis, 23(4), 2003. Risk Anal. 24(3):781.
Johnson, B.B. 2004b. Risk comparisons, conflict, and risk acceptability claims. Risk Anal. 24(1):131-145.
Johnson, B.B., and C. Chess. 2003. How reassuring are risk comparisons to pollution standards and emission limits? Risk Anal. 23(5):999-1007.
Johnson, B.B., and P. Slovic. 1995. Presenting uncertainty in health risk assessment: Initial studies of its effects on risk perception and trust. Risk Anal. 15(4):485-494.
Johnson, B.B., and P. Slovic. 1998. Lay views on uncertainty in environmental health risk assessment. J. Risk Res. 1(4):261-279.
Jonas, E., J. Greenberg, and D. Frey. 2003. Connecting terror management and dissonance theory: Evidence that mortality salience increases the preference for supporting information after decisions. Pers. Soc. Psychol. B. 29(9):1181-1189.
Kraus, N., T. Malmfors, and P. Slovic. 1992. Intuitive toxicology: Expert and lay judgments of chemical risks. Risk Anal. 12(2):215-232.
Krewski, D., P. Slovic, S. Bartlett, J. Flynn, and C.K. Mertz. 1995. Health risk perception in Canada II: Worldviews, attitudes and opinions. Hum. Ecol. Risk Assess. 1(3):231-248.
Lopes, L.L. 1983. Some thoughts on the psychological concept of risk. J. Exp. Psychol. Human 9:137-144.
MacGregor, D.G., P. Slovic, and T. Malmfors. 1999. “How exposed is exposed enough?” Lay inferences about chemical exposure. Risk Anal. 19(4):649-659.
Maynard, D.W. 2003. Bad News, Good News: Conversational Order in Everyday Talk and Clinical Settings. Chicago: University of Chicago Press.
McCallum, D.B., and S.L. Santos. 1994. Public Knowledge and Perceptions of Chemical Risks in Six Communities. Follow-Up Survey Results. Final Report to the U.S. Environmental Protection Agency. New York: Columbia University, Center for Risk Communication.
McCallum, D.B., and S.L. Santos. 1996. Participation and persuasion: A communications perspective on risk management. Pp. 16.1-16.32 in Risk Assessment and Management Handbook: For Environmental, Health, and Safety Professionals, R.V. Kolluru, S.M. Bartell, R.M. Pitblado, and R.S. Stricoff, eds. New York: McGraw Hill.
Miles, S., and L.J. Frewer. 2003. Public perception of scientific uncertainty in relation to food hazards. J. Risk Res. 6(3):267-284.
Morgan, M.G., and L. Lave. 1990. Ethical considerations in risk communication practice and research. Risk Anal. 10(3):355-358.
Morgan, M.G., B. Fischhoff, A. Bostrom, and C.J. Atman. 2002. Risk Communication: A Mental Models Approach. New York: Cambridge University Press.
NRC (National Research Council). 1989. Improving Risk Communication. Washington, DC: National Academy Press.
NRC (National Research Council). 1994. Science and Judgment in Risk Assessment. Washington, DC: National Academy Press.
NRC (National Research Council). 1997. Building a Foundation for Sound Environmental Decisions. Washington, DC: National Academy Press.
NRC (National Research Council). 2004. Adaptive Management for Water Resources Project Planning. Washington, DC: The National Academies Press.
O’Connor, A.M., F. Légaré, and D. Stacey. 2003. Risk communication in practice: The contribution of decision aids. BMJ 327(7417):736-740.
Osterloh, J. 2005. Biomonitoring: Attributes and Applications. Presentation at the First Meeting on Human Biomonitoring of Environmental Toxicants, March 21, 2005, Washington, DC.
Otway, H., and B. Wynne. 1989. Risk communication: Paradigm and paradox. Risk Anal. 9(2):141-145.
Paling, J. 2003. Strategies to help patients understand risks. BMJ 327(7417):745-748.
PCCRARM (Presidential/Congressional Commission on Risk Assessment and Risk Management). 1997. Risk Assessment and Risk Management in Regulatory Decision-Making. Final Report. Washington, DC: U.S. Government Printing Office [online]. Available: http://www.riskworld.com/Nreports/1997/risk-rpt/volume2/pdf/v2epa.PDF [accessed Dec. 1, 2005].
Pflugh, K.K., J.A. Shaw, and B.B. Johnson. 1994. Establishing Dialogue: Planning for Successful Environmental Management; A Guide to Effective Communication Planning. Trenton, NJ: New Jersey Department of Environmental Protection and Energy.
Pirkle, J.L., L.L. Needham, and K. Sexton. 1995. Improving exposure assessment by monitoring human tissues for toxic chemicals. J. Expo. Anal. Environ. Epidemiol. 5(3):405-424.
Renn, O., T. Webler, and B.B. Johnson. 1991. Public participation in hazard management: The use of citizen panels in the U.S. Risk Issues Health Safety 2(3):197-226.
Renn, O., T. Webler, and P. Wiedemann, eds. 1995. Fairness and Competence in Citizen Participation: Evaluating Models for Environmental Discourse. Dordrecht, The Netherlands: Kluwer.
Rimer, B.K., and B. Glassman. 1998. Tailoring communications for primary care settings. Methods Inf. Med. 37(2):171-177.
Robison, S.H. 2005. Biomonitoring. Presentation at the First Meeting on Human Biomonitoring of Environmental Toxicants, March 21, 2005, Washington, DC.
Roth, E., M.G. Morgan, B. Fischhoff, L. Lave, and A. Bostrom. 1990. What do we know about making risk comparisons? Risk Anal. 10(3):375-387.
Rowan, K.E. 1994. Why rules for risk communication are not enough: A problem-solving approach to risk communication. Risk Anal. 14(3):365-374.
Schober, S. 2005. National Health and Nutrition Examination Survey (NHANES): Environmental Biomonitoring Measures, Interpretation of Results. Presentation at the Second Meeting on Human Biomonitoring of Environmental Toxicants, April 28, 2005, Washington, DC.
Schulte, P.A., and G. Talaska. 1995. Validity criteria for the use of biological markers of exposure to chemical agents in environmental epidemiology. Toxicology 101(1-2):73-88.
Schwartz, L.M., S. Woloshin, and H.G. Welch. 1999. Risk communication in clinical practice: Putting cancer in context. J. Natl. Cancer Inst. Monogr. 25:124-133.
Sexton, K., L.L. Needham, and J.L. Pirkle. 2004. Human biomonitoring of environmental chemicals: Measuring chemicals in human tissue is the “gold standard” for assessing people’s exposure to pollution. Am. Sci. 92(1):39-45.
Sherman, D.A.K., L.D. Nelson, and C.M. Steele. 2000. Do messages about health risks threaten the self? Increasing the acceptance of threatening health messages via self-affirmation. Pers. Soc. Psychol. B. 26(9):1046-1058.
Siegrist, M., and G. Cvetkovich. 2001. Better negative than positive? Evidence of a bias for negative information about possible health dangers. Risk Anal. 21(1):199-206.
Siegrist, M., T.C. Earle, and H. Gutscher. 2003. Test of a trust and confidence model in the applied context of electromagnetic field (EMF) risks. Risk Anal. 23(4):705-716.
Slovic, P. 1993. Perceived risk, trust, and democracy. Risk Anal. 13(6):675-682.
Stern, P.C. 1991. Learning through conflict: A realistic strategy for risk communication. Policy Sci. 24(1):99-119.
Stern, P.C., and H.V. Fineberg, eds. 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.
Thompson, K.M., and D.L. Bloom. 2000. Communication of risk assessment information to risk managers. J. Risk Res. 3(4):333-352.
Timotijevic, L., and J. Barnett. 2006. Managing the possible health risks of mobile telecommunications: Public understandings of precautionary action and advice. Health Risk Soc. 8(2):143-164.
Van Damme, K., and L. Casteleyn. 2003. Current scientific, ethical and social issues of biomonitoring in the European Union. Toxicol. Lett. 144(1):117-126.
Weinstein, N.D. 1986. Public Perceptions of Environmental Hazards; Study 1 Final Report: Statewide Poll of Environmental Perceptions. Trenton, NJ: New Jersey Department of Environmental Protection, Office of Science and Research.
Wenger, D.E. 1987. Collective behavior and disaster research. Pp. 213-238 in Sociology of Disasters: Contribution of Sociology to Disaster Research, R.R. Dynes, B. De Marchi, and C. Pelanda, eds. Milan, Italy: Franco Angeli Libri.
White, M.P., and J.R. Eiser. 2005. Information specificity and hazard risk potential as moderators of trust asymmetry. Risk Anal. 25(5):1187-1198.
White, M.P., and J.R. Eiser. In press. Marginal trust in risk managers: Building and losing trust following decisions under uncertainty. Risk Anal.
White, M.P., S. Pahl, M. Buehner, and A. Haye. 2003. Trust in risky messages: The role of prior attitudes. Risk Anal. 23(4):717-726.
Wiedemann, P.M., and H. Schütz. 2005. The precautionary principle and risk perception: Experimental studies in the EMF area. Environ. Health Perspect. 113(4):402-405.
Wynn, P.A., N.R. Williams, D. Snashall, and T.C. Aw. 2003. Undergraduate occupational health teaching in medical schools—not enough of a good thing? Occup. Med. 53(6):347-348.