2 Scientific Issues in Environmental Health Decision Making

Good environmental health decisions require using all available scientific information. One part of this process includes a thorough and rigorous examination of all scientific evidence, including consideration of the type of research (e.g., case study; cohort study; double-blind, randomized controlled trial) and an understanding of the uncertainty in the existing data. Furthermore, science is a dynamic process, not a static one. There will always be new information to consider once a decision is made, which in some instances may alter the landscape for making decisions based on science. This chapter captures the presentations and discussion on how to weigh evidence in decision making, working with variability and uncertainty in the data, the use and misuse of science in decision making, and when policy makers should revisit decisions based on advancing science.

Evaluating Weights of Evidence for Decision Making

J. Michael McGinnis, M.D., M.P.P., Senior Scholar, Roundtable on Evidence-Based Medicine, Institute of Medicine

Evidence as Science Along a Spectrum

The first step in understanding how evidence relates to scientific decision making is to reflect on the nature of evidence as science. In this respect, just as science may be a tool less for finding the answer than for revealing the next question, so may evidence be a tool less for making a decision than for informing its context.

Evidence is neither static nor formulaic. Its character is not binary, but evolutionary in nature. McGinnis's current area of focus, evidence-based medicine, may be described as a systematic march to care that is most effective, personal, and appropriately tailored to circumstance: in effect, a march down an evidence
spectrum in which the opposite poles represent, on one end, the nonexistence of evidence and, on the other, evidence that is irrefutable. The challenge is therefore to determine the decision rules at play at various points along the path to ever stronger evidence.

Forms and Standards of Evidence: A Hierarchy

The decision rules will be forged by the forms of the evidence and the standards applied. In the biological sciences, both the forms and the standards take on a certain generally accepted character. The forms of evidence include biochemical data, animal studies, population studies, and individual studies. The standards of evidence relate to such issues as the consistency, strength, and specificity of the association, the dose-response relationship, and biological plausibility. Together, these serve as a general framework for assessing health interventions.

In the case of medical care, for example, evidence is usually information from clinical experience that has met an established test of validity, with the appropriate standard determined according to the requirements of the intervention and the clinical circumstance. Typically, evidence of clinical effectiveness is conceptualized as a pyramid, in which the base of the pyramid contains the least scientifically sound type of evidence formation: professional ideas and opinions (Figure 2-1). Moving up the pyramid are ever-stronger types of evidence: case reports and case series; case-control studies and cohort studies; toward the top of the pyramid, randomized controlled studies; and, at the apex, randomized, double-blind, placebo-controlled studies, often called "the gold standard." While this hierarchy of evidence has been widely accepted and used in the medical community for more than a decade, its real-world application is less than perfect.
It is important to look at the nature of evidence needed in the context of whether the motivating question focuses on safety, efficacy, effectiveness, or efficiency. Does the intervention under study cause harm? Does it work? Does it work in context? Is it a sound use of resources?

There is a growing recognition of the need to view evidence in a more nuanced and detailed fashion. Rather than a pyramid or hierarchy, a more comprehensive and systematic view of clinical evidence emerges when evidence is viewed through an evidence matrix, which is structured according to levels of certainty juxtaposed with levels of likely benefit, in order to provide a framework for better understanding which interventions would provide the greatest impact or greatest likelihood of impact (Pearson et al., 2003). Insight into the possible levels of impact can then be used to inform, in variable fashion, the many different types of decision-making challenges often faced in health care, such as regulation, medical coverage, guidelines, indicators used in quality care assessment, and even individual-level decisions (Teutsch, 2008). Considering the multifaceted dimensions of the application of evidence in medical care offers a sense of the complex nature of factors involved in using evidence for decision making.
FIGURE 2-1 Evidentiary hierarchy of weighing evidence (the evidence pyramid). From base to apex: ideas and opinions; case reports; case series; case-control studies; cohort studies; randomized controlled studies; and randomized controlled double-blind studies. SOURCE: Copyright SUNY (State University of New York). 2004. Guide to research methods: The evidence pyramid. http://library.downstate.edu/EBM2/2100.htm (accessed November 11, 2008).

Evidence in Population Health

In population health matters (compared with personal medical care decisions), considering the roles and nature of evidence in policy decision making can be even more complicated. Because in population health, effectiveness is more a function of the nature of the intervention than the nature of the evidence, the evidence standard will vary dramatically according to the nature of the intervention. There is also a spectrum of factors at work in decision making for population health, but that spectrum relates to the nature of the intervention, with purely environmental interventions at one pole (e.g., water supply fluoridation) and purely individual interventions at the other (e.g., behavioral change interventions designed to encourage smoking cessation, increased physical activity, or a change in dietary habits). Making decisions at the population level may require juggling fewer data points, but their powerful impact requires that particular consideration be given to understanding several other factors: the potential health, economic, and social consequences of inaction; the potential health, economic, and social consequences of action; the formal characterization of uncertainties and mapping of strategies as uncertainties resolve; and the design of systematic assessment and feedback as part of the intervention.
Although population health and clinical medicine are two very different disciplines, there are certain commonalities between the approach to evidence in clinical arenas and in population health arenas. Both gravitate toward simple answers in complex circumstances and need to consider the overall context in which the evidence or intervention would apply. The interventions need to be transparent about the decision rules at various points along the evidence spectrum, and at the same time communicate meaningfully and constantly about the state of the evidence. As seen throughout science, evidence is dynamic in nature and therefore needs a strategy to accommodate new insights.

Perhaps the most important commonality related to the interpretation of evidence in both clinical and population health settings is the centrality of effective communication. Too often the concept of risk communication is not well understood, yet the ability to explain the nature of risk and the evolving nature of the scientific process is vital to enabling the public to understand how decisions are made in a scientific context. There is an inherent tension between an individual's natural desire for a definitive answer and the nature of the scientific process, in which it is well known that nothing is foolproof. Communicating this tension effectively on both an individual and a population basis is fundamental. Although teaching institutions and other avenues have not yet been able to engage society in learning the theory behind risk, there is a need to continue to improve how the evolutionary and dynamic nature of evidence is communicated so that the public can better understand the workings behind a decision.
With the continual challenge of misinterpretation of evidence by media and others, an effective communication strategy is vital to moving evidence to the point of decision making, whether for individual, clinical, or population-wide interventions.

The Role of Uncertainty and Susceptible Populations in Environmental Health Decision Making

Dale B. Hattis, Ph.D., Research Professor, George Perkins Marsh Institute, Clark University

Problematic Concepts: Uncertainty and Variability

Starting in the 1980s, two probabilistic concepts, variability and uncertainty, began to be associated with the science of risk analysis. These two concepts can account for differences made in technical assessments and have different implications for policy decisions. Variability consists of the real differences among people or among cases in some parameter that affects risk. For example, how much exposure one individual may have to a chemical or substance, how much of a chemical or substance one person may take in compared with another person, and how much of that chemical or substance is activated by metabolic enzymes
all contribute to individual risks. The concept of uncertainty is the imperfection in knowledge of the true value of a parameter for either an individual or a group. Both of these concepts should be taken into consideration for purposes of risk management decision making, but for very different reasons, as they have different implications for both information gathering and analysis (see below).

Uncertainty and Variability: Different Yet Important

In order to understand the implications for the future of risk management decision making, the underappreciated features of both concepts need to be explained and understood. First, standard statistical descriptions of data (e.g., the standard deviation) tend to overstate real variability by including measurement errors. These measurement errors spread the observations out further from each other than the underlying reality of the differences among individuals. Currently, there are not many well-understood or commonly used statistical methods to disentangle measurement error from real variability. A second underappreciated feature of variability is seen during priority setting for interventions. Here, the more predictable variability there is among a number of categories for intervention, the more benefit can be derived by focusing resources on high-scoring categories for intervention.

In both the application of standard statistics and priority setting, the concept of uncertainty works in the opposite manner. First, standard statistical descriptions of data, for example, standard errors, tend to radically understate the actual level of uncertainty by excluding unsuspected systematic errors that affect all data points in common. Such systematic errors include the unrepresentativeness of population samples or error resulting from a miscalibrated instrument, among a number of other sampling inaccuracies.
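These two opposite distortions can be illustrated with a small numerical sketch. All numbers below are hypothetical illustrations, not values from the workshop: independent measurement error inflates the observed spread across individuals, while a systematic calibration bias shifts every observation in common and never appears in the standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical "true" inter-individual variability in some exposure parameter
true_values = rng.normal(loc=2.0, scale=0.5, size=n)

# Independent (random) measurement error, plus a systematic calibration
# bias that shifts every observation in common
random_error = rng.normal(loc=0.0, scale=0.3, size=n)
systematic_bias = 0.4
observed = true_values + random_error + systematic_bias

# 1. Random error OVERSTATES variability:
#    observed variance ~= true variance + error variance
print(f"true SD:      {true_values.std():.3f}")   # ~0.50
print(f"observed SD:  {observed.std():.3f}")      # ~sqrt(0.5^2 + 0.3^2) ~ 0.58

# If the error variance is known (e.g., from replicate measurements),
# a classical disattenuation correction recovers the real variability
corrected_var = observed.var() - 0.3 ** 2
print(f"corrected SD: {np.sqrt(corrected_var):.3f}")  # ~0.50

# 2. The standard error UNDERSTATES uncertainty about the mean:
#    it shrinks with n, but the systematic bias never shows up in it
standard_error = observed.std() / np.sqrt(n)
print(f"standard error of mean: {standard_error:.4f}")  # small, ~0.006
print(f"actual error of mean:   {observed.mean() - 2.0:.3f}")  # ~0.4, the bias
```

The disattenuation step assumes the error variance can be estimated independently (for example, from replicate measurements of the same individuals), which is exactly the kind of information standard study designs often fail to collect.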
In addition, incomplete assessment of model errors is an important threat to the accuracy of uncertainty assessment. In a priority-setting system, greater uncertainty in priority scores suggests greater spreading of resources to lower scoring categories or interventions. Essentially, better data on lower priority categories improve information for later decision making, and the value of that information is greater if there is greater uncertainty initially.

Tied into the concepts of uncertainty and variability is the idea that uncertainty can be quantitatively characterized by reducing it to an observable variability among putatively analogous cases. Many times scientists claim they do not have information about a certain aspect of a chemical or substance, yet information can be assembled on similar chemicals or substances and then extrapolated to the original chemical in question. This approach is not without difficulty, however, as there is a need for rules for making the analogies, for example, defining the reference groups used to derive uncertainty distributions for particular cases. While this may be a challenge, it does provide a way forward for health scientists to learn to reason quantitatively from available evidence relevant to specific uncertainties. Doing so requires the creation of databases and the application of the information derived from them to the risk analysis process.

Evolution and Implications of Concepts and Practices: Susceptible Populations

There has been an evolution of concepts and practices in the representation of variability (Hattis, 2004), especially for the study of susceptible populations. The older, obsolete view is a categorical representation of variability only in the form of discrete susceptible subgroups. This is problematic because it does not account for the variability within a susceptible group. Using the example of asthmatics, it often happens that an air pollution study measures their susceptibility compared with nonasthmatics but reports the data only in the form of group means. One may say that individuals with this problem have been studied and therefore information is known about their sensitivities, but that view does not take into account the variation from one asthmatic to another. The current or usual practice for mathematical representation of variability is the simple application of an assumed distributional form without assessment of fit. As science moves forward, the best practice would be distributional forms chosen on the basis of mechanistic theories about how the differences among people or cases arise. Without such theories, one is more or less making a decision using measures that do not reflect the causal processes producing the differences of interest. Making projections beyond the data at hand is much more reasonable if they reflect some mechanistic theory.

This has large implications for the information used in policy making. Using the older, obsolete view can cause the dismissal of "hypersensitive" populations. While current practice does allow for susceptible subgroups with a defined safety threshold, there is no single factor that can capture a distributional response.
Therefore, what is needed and obtained by using best practice is the quantitative analysis of how many people and which groups of people are at how much risk from various policies, ideally with some statement of associated confidence (Hattis, 2004).

In terms of technical aspects of variability measurement and analysis, the older, obsolete view is to deliberately restrict the sample so that it does not correspond to what would truly be representative of the population. For example, it has been common for initial drug studies to include only young, healthy white men instead of an older, diverse population. This does little to tell the public how a particular drug will affect a wider population. A more current usual practice is to use observations in haphazard or convenience samples that include all readily available subjects but without attention to factors that could affect the primary parameter under study. As for the foreseeable best practices, they would include stratified random sampling, with the strata constructed to represent groups expected to
be different in the studied parameter. This would also include oversampling of relatively rare subpopulations of special interest.

Future Direction and Motivation

Ultimately, there is a need to define variability and uncertainty distributions as integral parts of risk analyses for both analysts and managers. Four considerations will make this increasingly important in the future. First, legal cases involving environmental issues are increasingly calling for the recognition that some finite rates of adverse effects will remain even after implementation of reasonably feasible control measures. Second, societal reverence for life and health means making the best decision with available resources to reduce harmful effects. Third, responsible social decision making requires making estimates of how many people are likely to experience how much harm for effects of specific degrees of severity, and with what degree of confidence. Last, the traditional system of multiple single uncertainty factors cannot yield estimates of health protection benefit that can be juxtaposed with the costs of health protection measures.

It should be expected that a younger generation of analysts will not accept the older, obsolete procedures that fail to provide a coherent way to use distributional information that is clearly relevant to factual and policy issues. Also, this generation will have greater mathematical and computational facility, particularly as biology becomes quantitative systems biology. In addition, the legal process will demand the use of best science, and newer information and communication tools will foster increasing habits of, and demands for, democratic accountability and transparency.

Hattis and Anderson (1999) proposed some risk management criteria for environmental health decision making that draw on considerations of uncertainty and variability. First is fair process: open disclosure and, to the extent practicable, voluntary acceptance of risks.
The second is equity in the distribution of risks in relation to the benefits derived from accepting the risk, ideally redefining criteria of significant risk in terms of individual variability and uncertainty (no more than x probability of harm for the yth percentile of the population, with z degree of confidence). Third is a goal for government agencies to achieve the greatest possible effectiveness in reaching health and safety goals using limited resources. And fourth is the ethical principle in medicine of "First, do no harm." This means there is an obligation to assess the likely comparative consequences of policy prescriptions for environmental problems, selecting only interventions that have a reasonably high probability of producing overall benefits.
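A criterion stated in terms of x, y, and z lends itself to a simple Monte Carlo check. The sketch below is purely illustrative: the lognormal forms, parameter values, and harm threshold are all hypothetical assumptions, chosen only to show how a "yth percentile of the population, with z degree of confidence" statement can be evaluated when variability and uncertainty are represented as separate distributions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical criterion: the dose received by the 95th-percentile
# (y = 95) individual must stay below a harm threshold with 90%
# (z = 90) confidence, given uncertainty about the population median.
threshold = 10.0            # dose above which harm is assumed (illustrative)
n_uncertainty_draws = 5_000

# UNCERTAINTY: the population median dose is imperfectly known,
# represented here by a lognormal distribution around a median of 2.0
median_draws = rng.lognormal(mean=np.log(2.0), sigma=0.4,
                             size=n_uncertainty_draws)

# VARIABILITY: doses across individuals are lognormal around each median
sigma_individual = 0.6      # inter-individual spread (illustrative)

# For each uncertainty draw, the dose of the 95th-percentile individual
z95 = 1.645                 # standard normal 95th percentile
p95_dose = median_draws * np.exp(z95 * sigma_individual)

# The criterion is met if, in at least 90% of the uncertainty draws,
# the 95th-percentile individual is below the harm threshold
confidence = np.mean(p95_dose < threshold)
print(f"P(95th-percentile dose < threshold) = {confidence:.2f}")
print("criterion met" if confidence >= 0.90 else "criterion not met")
```

Keeping the variability draw (across individuals) separate from the uncertainty draw (across plausible parameter values) is the point of the two-dimensional approach: collapsing them into a single distribution would lose the "with z degree of confidence" part of the criterion.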
The Use and Misuse of Science in Decision Making

Rena Steinzor, J.D., Jacob A. France Research Professor of Law, University of Maryland School of Law

The Intersection of Science and Law

A central tenet in scientific decision making is that any decision rendered needs to be based on the best available science, which "depends upon a disinterested and transparent scientific process" (Steinzor and Shudtz, 2007). Scientific decisions should be made using the weight of the evidence, yet in today's world, scientific decisions are often called into question by the legal profession, seeking to influence an outcome. The pathway from science to science policy is often perceived by scientists and the public as a straightforward one, as the merits of the science have been vetted during peer review in the publication process (Wagner and Steinzor, 2006). However, this is not always true. The cultures of law and science are vastly different and at times clash with one another, which puts pressure on science when it is applied in a legal setting.

On one side is science, which involves a quest for truth through the collection of evidence. That version of the truth is developed through a largely collaborative process, which has a built-in incentive to more deeply explore and test a hypothesis because of the nature of the scientific process itself. After careful, and at times repeated, testing, scientists ideally arrive at a particular explanation or propose a line of reasoning based on the weight of the evidence. In the environmental field, this analysis often involves applying evidence from chemical structures and animal studies to human epidemiological evidence.
Data are collected and reviewed by a multidisciplinary team of subject matter experts, who, after taking into account confounding factors and scientific error, try to reach a consensus on a particular scientific issue, which may then be accepted by the larger scientific community and the public.

On the other side is law, which trains individuals to be primarily concerned with winning and losing. Wagner and Steinzor (2006) have argued that "rather than incorporating science into policy dispassionately and using research to further a quest for truth, the legal system makes most decisions through an adversarial process driven by affected parties who interpret and re-interpret the science to prove that they should 'win'" (p. 4). In fact, the legal profession instructs lawyers to take an issue and look for a version of the truth to present to a decision maker that contrasts with an opposing side's version of the truth on that same issue.

The differences between these two processes are most profound in the regulatory arena, where, once a scientific decision has been reached, it can then be subject to extreme scrutiny and deconstruction by the legal profession in its quest to argue for one policy position over another. Lawyers often use a technique referred to as "corpuscularization," which undermines a body of evidence by disassembling each individual study in order to discredit it from inclusion in the
overall data set. This clash presents a threat to achieving clean and independent science, as each component of scientific study is dissected to the point that the entire premise on which a decision was based is undermined. This technique can create important data gaps and is in stark contrast to the weight-of-the-evidence approach used by scientists (Wagner and Steinzor, 2006).

Another threat to the integrity of scientific decision making is the conflation of risk assessment and risk management. The Office of Management and Budget's attempt to combine these stages in decision making has largely failed because it is a "one size fits all" approach to the widely different missions and goals of the federal government agencies (OMB, 2006). It is a transparent attempt to ensure that scientists consider the economic impact of a decision to control risk at the beginning, rather than at the end, of the decision-making process, at the same time that they describe and characterize the risk, raising the specter that some risks will never be deemed significant because they would cost too much to control.

The issue of perchlorate is one example. Perchlorate is a highly soluble component of rocket fuel that has been found in the drinking water supply in certain sections of the United States. Economic estimates for environmental cleanup are quite high, and as a result the scientific research agenda has been subverted and crucial research left undone. The military has argued that scientists should take national security into account in assessing risk, since this chemical is used by the military. Using economics in this way to undermine objective scientific evaluation is inappropriate; the only truly scientific decisions are ones reached on the basis of the weight of the evidence.
Rationale for Revisiting an Environmental Health Decision: The National Toxicology Program

Kenneth Olden, Ph.D., Sc.D., Principal Investigator, The Metastasis Group, Laboratory of Molecular Carcinogenesis, NIEHS

Four Fundamental Decisions

The National Toxicology Program (NTP) is charged with overseeing four fundamental decisions on the health effects of chemicals. First is what research exists and what research is needed to support the nation's toxicity testing program. The NTP has made great progress in developing transgenic animal models as well as furthering the understanding of toxicogenomics, that is, the genetic basis for differences in response to a toxic chemical. Second is which specific exposures should be studied and what are the best testing systems. One way the NTP has approached this is through the creation of the Interagency Coordinating Committee on the Validation of Alternative Methods, which "promotes the scientific validation and regulatory acceptance of toxicological test methods that more
accurately assess the safety or hazards of chemicals and products and that reduce, refine (decrease or eliminate pain and distress), and/or replace animal use" (NTP, 2008). Third is which exposures to evaluate and report as risks to human reproduction and development. Fourth is the decision about what should be included in the Report on Carcinogens, a congressionally mandated document. All of these decisions are made using a public and prescriptive process that is attentive to the input received from the scientific community, policy makers, the American public, and other stakeholders.

Report on Carcinogens: Criteria and Process for Listing

The Report on Carcinogens (RoC) is a congressionally mandated document "prepared by the NTP for the purpose of identifying substances, mixtures of chemicals, or exposure circumstances associated with technological processes that cause or might cause cancer and to which a significant number of persons in the United States are exposed. Listed in the RoC are a wide range of substances, including metals, pesticides, drugs, and natural and synthetic chemicals" (NTP, 2005). The chemicals on this list go through a very extensive public review, and additions are made after careful scrutiny and consideration of all available science (Table 2-1). Specifically, in order for a chemical to be listed in the RoC, it must be either known to be a human carcinogen or reasonably anticipated to be a human carcinogen.

TABLE 2-1 Report on Carcinogens Listings

                           Number of Substances Listed
RoC Edition    Year    Known (Chemicals or      Reasonably     Number of Substances
                       Industrial Processes)    Anticipated    Delisted
First          1980            26                   --
Second         1981            25                   63
Third          1983            23                   98
Fourth         1985            30                  119
Fifth          1989            23                  140                4
Sixth          1991            26                  148                2
Seventh        1994            27                  156
Eighth         1998            29                  169
Ninth          2000            48                  170                2
Tenth          2002            52                  176
Eleventh       2004            58                  188

SOURCE: Olden, unpublished.
The process of listing a chemical in the RoC has gone through several revisions to ensure that the criteria used for such categorization are accurate. Starting in 1985, the RoC process began to allow for larger public input and increased transparency. The most significant changes resulted from the 1994 and 1996 reviews. In fact, the strength of the NTP is a direct result of the amount of public input factored into the decision-making process. Two other important changes to the RoC are a change in publication frequency, from annually to every 2 years, and the decision to use all of the available science as the criteria for inclusion in the report, for example, allowing the use of knowledge of mechanisms and structure/activity relationships in assessing risk. This change provided a set of criteria to determine that if a chemical or substance has a structure or activity comparable to a chemical already listed in the report, it is reasonable to assume that the chemical in question would also be a known or reasonably anticipated human carcinogen, even if all of the data needed to draw such a conclusion are not yet accessible. Other criteria or kinds of evidence used in making the listing decision include experimental studies in animals, epidemiological studies in humans, and mechanistic studies. Essentially, all available science is used to make a decision regarding the listing of a chemical in the RoC.

Reasons to Revisit

A decision to list a chemical in the RoC does not mean that it cannot be reconsidered. As science evolves and new information is discovered about the harmful, or not harmful, effects of a chemical, there may be circumstances that warrant reevaluating it (see the saccharin case study below).
First and foremost, it is the evolution of new science that may provide evidence in support of either upgrading a chemical from one that is reasonably anticipated to be a carcinogen to one that is a known carcinogen, or downgrading it or removing it altogether from the RoC. A second circumstance is if the exposure has been eliminated, whether through removal from the market or through effective environmental control, so that a significant number of people are no longer being exposed to an environmental agent. Through constant revision and public evaluation of the science and evidence, the decision to delist a chemical has been made nine times since 1980. Since the original listing of these chemicals, the science evolved over the intervening years to show that the evidence was not substantial enough to continue to include the chemical in the RoC as a reasonably anticipated human carcinogen. Chemicals have also been removed from the list because they went out of commercial use, such as aramite and cycasin, thus eliminating exposure. Finally, there have been chemicals reviewed but not recommended for listing in the RoC. All of these decisions were made taking into account the evolving nature of science and the discovery of new evidence triggering a reassessment of data.
Case Study: Saccharin

One example of the use of evolving science and evidence to call for, and ultimately inform, the decision-making process for chemicals in the RoC is that of the artificial sweetener saccharin. The original reasoning behind the listing of saccharin in the RoC was based on the weight of the scientific evidence at the time. There was evidence, in the form of experimental animal data, demonstrating that use of this chemical caused urinary bladder cancer in male rats (HHS and NTP, 2005). This chemical was therefore listed as reasonably anticipated to be a human carcinogen. Over the intervening years, evidence came to light that led the scientific community to call into question the carcinogenic effects of saccharin, and a decision was made to reassess its listing (HHS and NTP, 2005).

After careful review, several pieces of evidence were considered in the delisting of saccharin from the RoC. First, although there was evidence for the carcinogenicity of saccharin in male rats, there was less convincing evidence in female rats and in mice (Arnold et al., 1980; Taylor et al., 1980). Second, studies indicated that the observed urinary bladder cancers in rats were related to the physiology of the rat's urinary system and that damage to the epithelial cells lining the bladder led to an increase in cell growth in the rat (Sweatman and Renwick, 1979). Third, results of several human epidemiological studies showed no clear association between saccharin consumption and urinary bladder cancer, either in the general population or in diabetics, who presumably consume greater amounts of artificial sweeteners (Armstrong and Doll, 1975). Fourth, saccharin is essentially nonmutagenic in conventional bacterial assays and does not bind to DNA (Ashby, 1985; IARC, 1987; Whysner and Williams, 1996), both of which are important predictors of carcinogenicity.
This example illustrates the reasons for revisiting decisions based on the evolution of science.

Session Discussion: Weight of the Evidence in Science versus Law

Evaluating the weight of the evidence in order to make credible decisions based on a full set of data is in the interest of all scientists as well as the public. However, how to evaluate and ultimately incorporate the weight of the evidence in the scientific decision-making process is an issue of current debate in the scientific community. Making a decision based on the totality of the evidence should be the standard for the decision-making process, argued Hattis and Olden. However, there is concern about the growing disconnect between the legal process and the scientific process, even to the point at which science is under attack and the use of widely accepted scientific criteria, such as animal studies, is called into question. This disconnect illustrates the inherent tension between society's desire for a definitive answer and the nature of the scientific process itself, which is one of slow evolution, said Hattis. The legal system has increasingly chosen
to view the expression of scientific uncertainty, even in small amounts, as cause for alarm and deconstruction.

In order to preserve the scientific process, there is a need to better communicate the evolutionary and dynamic nature of evidence involved in the scientific process, so that it is well understood by the public and is less vulnerable to dissection in a legal setting, said Hattis. Explaining the scientific process may include applying different standards of proof in legal settings, so that the expression of uncertainty by a scientist does not mean the end of a case or become fodder for the opposing side. One possible solution to the growing question of scientific uncertainty and the disconnect between the scientific and legal processes is that of transparency and communication, noted McGinnis. A call to improve the way in which the evolutionary and dynamic nature of evidence is communicated to the public needs to take place, so that the data process and evidence are not so vulnerable in a courtroom setting.