
Science and Decisions: Advancing Risk Assessment (2009)

Chapter: 6 Selection and Use of Defaults

Suggested Citation:"6 Selection and Use of Defaults." National Research Council. 2009. Science and Decisions: Advancing Risk Assessment. Washington, DC: The National Academies Press. doi: 10.17226/12209.

6 Selection and Use of Defaults

As described in Chapter 2, the authors of the National Research Council report Risk Assessment in the Federal Government: Managing the Process (NRC 1983), known as the Red Book, recommended that federal agencies develop uniform inference guidelines for risk assessment. The guidelines were to be developed to justify and select, from among available options, the assumptions to be used for agency risk assessments. The Red Book committee recognized that distinguishing the available options on purely scientific grounds would not be possible and that an element of what the committee referred to as risk-assessment policy—often referred to later as science policy (NRC 1994)—was needed to select the options for general use. The need for agencies to specify the options for general use was seen by the committee as necessary to avoid manipulation of risk-assessment outcomes and to ensure a high degree of consistency in the risk-assessment process. The specific inference options that now appear in EPA's risk-assessment guidelines, and that permeate risk assessments performed under those guidelines, have come to be called default options, or more simply defaults. The Red Book committee defined a default option as the inference option "chosen on the basis of risk assessment policy that appears to be the best choice in the absence of data to the contrary." As the authors of Science and Judgment in Risk Assessment (NRC 1994) observed, many of the key inference options selected as defaults by EPA are based on relatively strong scientific foundations, although none can be demonstrated to be "correct" for every toxic substance. Because generally applicable defaults are necessary, the ultimate choice of defaults involves an element of policy. Since 1983, EPA has updated its set of defaults and has made strides in providing more detailed explanations for the choice of defaults that emphasize their theoretical and evidentiary foundations and the policy and administrative considerations that may have influenced the choices (EPA 2004a).

[Footnote: The Red Book committee did not use the phrase risk-assessment policy in the usual sense in which science policy is used but far more narrowly to describe the policy elements of risk assessments. The committee distinguished between the policy considerations in risk assessment and those pertaining to risk management.]

The Red Book emphasized both the need for generically applicable defaults and the need for flexibility in their application. Thus, the Red Book and Science and Judgment pointed out that scientific data could shed light, in the case of specific substances, on one or more of the information gaps in a risk assessment for which a generally applicable default had been applied. The substance-specific data might reveal that a given default might be inapplicable because it is inconsistent with the data. The substance-specific data might not show that the default had been ill chosen in the general sense but could show its inapplicability in the specific circumstance. Thus, there arose the notion of substance-specific departures from defaults based on substance-specific data. Much discourse and debate have attended the question of how many data, and of what type, are necessary to justify such departures, and the committee addresses the matter in this chapter. EPA recently altered its view on the question of "departures from defaults," and this chapter begins by examining this view in relation to its central theme.

CURRENT ENVIRONMENTAL PROTECTION AGENCY POLICY ON DEFAULTS

The committee recognizes that defaults are among the most controversial aspects of risk assessments. Because it considers that defaults will always be a necessary part of the risk-assessment process, the committee examined EPA's current policy on defaults with an eye toward understanding its applications, its strengths and weaknesses, and how the current system of defaults might be improved. EPA began articulating a shift toward its current policy on defaults in the Risk Characterization Handbook (EPA 2000a) when it stated,

For some common and important data gaps, Agency or program-specific risk assessment guidance provides default assumptions or values. Risk assessors should carefully consider all available data before deciding to rely on default assumptions. If defaults are used, the risk assessment should reference the Agency guidance that explains the default assumptions or values (p. 41).

EPA's staff paper titled Risk Assessment Principles and Practices (EPA 2004a) reflected a further shift in the agency's practices on defaults:

EPA's current practice is to examine all relevant and available data first when performing a risk assessment. When the chemical- and/or site-specific data are unavailable (that is, when there are data gaps) or insufficient to estimate parameters or resolve paradigms, EPA uses a default assumption in order to continue with the risk assessment. Under this practice EPA invokes defaults only after the data are determined to be not usable at that point in the assessment—this is a different approach from choosing defaults first and then using data to depart from them (p. 51).

EPA's revised cancer guidelines (EPA 2005a) emphasize that the policy is consistent with EPA's mission and make clear that the general policy applies to cancer risk assessments:

As an increasing understanding of carcinogenesis is becoming available, these cancer guidelines adopt a view of default options that is consistent with EPA's mission to protect human health while adhering to the tenets of sound science.
Rather than viewing default options as the starting point from which departures may be justified by new scientific information, these cancer guidelines view a critical analysis of all of the available information that is relevant to assessing the carcinogenic risk as the starting point from which a default option may be invoked if needed to address uncertainty or the absence of critical information (p. 1-7).

Those statements may reflect the agency's current perspective on the primacy of scientific data and analysis in its risk assessments; the agency commits to examining all relevant
and available data before selecting defaults. The committee struggled with what the current policy means in terms of both literal interpretation and application to the risk-assessment process. The lack of clarity has the potential to lead to multiple interpretations. It raised questions regarding the implications of the policy for risk decision-making. It is difficult to argue with a more robust examination of available science, which the committee strongly supports; however, the committee expressed concern that without clear guidelines on the extent to which science should be evaluated, the open-ended approach could lead to delays and undermine the credibility of defaults and the ultimate decision process. The committee notes that the risk-characterization handbook (EPA 2000a) provides some statements regarding the need to identify key data gaps and avoid delays in the risk-assessment process in the planning and scoping phase, but it is concerned that such statements may not be adequate to address complications resulting from the current policy:

Another discussion during the planning and scoping process concerns the identification of key data gaps and thoughts about how to fill the information needs. For example, can you fill the information needs in the near-term using existing data, in the mid-term by conducting tests with currently available test methods to provide data on the agent(s) of interest, and over the long-term to develop better, more realistic understandings of exposure and effects, and to construct more realistic test methods to evaluate agents of concern? In keeping with TCCR [transparency, clarity, consistency, and reasonableness], care must be taken not to set the risk assessment up for failure by delaying environmental decisions until more research is done (p. 29).

The policy may be appealing at first glance: it creates a two-phase process that obligates the agency to give full attention to all available and relevant scientific information and, in the absence of some needed information, to use defaults rather than allow uncertainties to force an end to an assessment and to related regulatory decision-making. On closer examination, the current policy carries a number of disadvantages.

Concerns with EPA's Current Policy on Defaults

Depending on implementation, the position in the current policy as articulated in the 2004 staff paper (EPA 2004a) and 2005 cancer guidelines (EPA 2005a) could represent a radical departure from previous policies. Rather than starting with a default that represents a culmination of a thorough examination of "all the relevant and available scientific information," this policy has the potential to promote with each assessment a full ad hoc examination of data and the spectrum of inferences they may support without being selective or contrasting them with the default to reflect on their plausibility. There are then no real defaults, and every inference is subject to ready replacement. By definition, a full evaluation of the evidence identifies the best available assumption, whether it is based on chemical-specific information or more general information. Thus, EPA takes on, even more than before, the burden of establishing that existing science does not warrant use of an inference different from the default. There is also the commitment "to examine all relevant and available data" first.
Pushed to the extreme for some chemicals, that can mean retrieving, cataloging, and demonstrating full consideration of thousands of references, many of little utility but nonetheless "relevant." It also could lead to the reopening of the basis of some of the generic defaults on an ad hoc basis, as discussed below. Those possibilities create further vulnerability to challenge and delay that could affect environmental protection and public health. From a practical management perspective, the mandate to consider "all relevant and available data" may be unworkable for an overburdened and underresourced EPA (EPA SAB 2006, 2007) that is struggling to keep up with demands for analysis of hazard and dose-response
information (Gilman 2006; Mills 2006). It may also have profound ripple effects on regulatory and risk-management efforts by other agencies at both the federal and state levels. And there is a lack of clarity as to what the policy means in cases in which the database supports a different inference from the default and does not merely replace a default with data.

[Footnote: One member of the committee concluded that the new EPA policy is not unclear, but instead represents a definitive and troubling shift away from a decades-old system that appropriately valued sound scientific information and avoided the paralysis of having to re-examine generic information with every new risk assessment. During its deliberations, the member heard two things clearly from EPA that make the intent of its above language unambiguous: (1) that EPA regards "data" and inferences as two concepts that can be compared to each other, and that the former should trump the latter (the member heard, for example, that the new policy is intended to repudiate the historical use of "risk assessment without data—just defaults"); and (2) that the goal of the policy shift is to "reduce reliance on defaults" (EPA SAB 2004a; EPA 2007d). This member of the committee questioned both of these premises. First, the member concluded that there are two problems with the notion of pitting "data" against defaults. The logical problem, in this member's opinion, was that the actual choice EPA faces is a choice among models (inferences, assumptions), which are not themselves "data" but which are ways of making sense of data. For example, reams of data may exist on some biochemical reaction that might suggest that a particular rodent tumor was caused via a mechanism that does not operate in humans. EPA's task, however, is whether or not to make the assumption that the rodent tumors are relevant, in the absence of a well-posed theory to the contrary, one that is supported by data. Without the alternative assumption being articulated, EPA has nothing coherent to do with the data. The more important practical problem with EPA's new formulation, in this member's opinion, is that a policy of "retreating to the default" if the chemical- or site-specific data are "not usable" ignores the vast quantities of data (interpretable via inferences with a sound theoretical basis) that already support most of the defaults EPA has chosen over the past 30 years. In order for a decision to not "invoke" a default to be made fairly, data supporting the inference that a rodent tumor response was irrelevant would have to be weighed against the data supporting the default inference that such responses are generally relevant (see, for example, Allen et al. 1988), data supporting a possible nonlinearity in cancer dose-response would have to be weighed against the data supporting linearity as a general rule (Crawford and Wilson 1996), data on pharmacokinetic parameters would have to be weighed against the data and theory supporting allometric interspecies scaling (see, for example, Clewell et al. 2002), and so on. In other words, having no chemical-specific data other than bioassay data does not imply there is a "data gap," as EPA now claims—it may well mean that vast amounts of data support a time-tested inference on how to interpret this bioassay, and that no data to the contrary exist because no plausible inference to the contrary exists in this case. In short, this committee member sees most of the common risk assessment defaults not as "inferences retreated to because of the absence of information," but rather as "inferences generally endorsed on account of the information." Therefore, this committee member concluded that EPA's stated goal of "reducing reliance on defaults" per se is problematic; it begs the question of why a scientific-regulatory agency would ever want to reduce its reliance on those inferences that are supported by the most substantial theory and evidence. Worse yet, the committee member concluded, it seems to prejudice the comparison between default and alternative models before it starts—if EPA accomplishes part of its mission by ruling against a default model, the "critical analysis of all available information" may be preordained by a distaste for the conclusion that the default is in fact proper. This committee member certainly endorses the idea of reducing EPA's reliance on those defaults that are found to be outmoded, erroneous, or correct in the general case but not in a specific case—but identifying those inferior assumptions is exactly what a system of departures from defaults, as recommended in the Red Book, in Science and Judgment, and in this report, is designed to do. EPA should modify its language to make clear that across-the-board skepticism about defaults is not scientifically appropriate. Thus, the committee member concludes that recommendations in this chapter apply whether or not EPA believes it has "evolved beyond defaults." A system that evaluates every inference for every risk assessment still needs ground rules, of the kind recommended in this chapter, to show interested parties how EPA will decide what data are "usable" or which inference is proper. This committee member urges EPA to delineate what evidence will determine how it makes these judgments, and how that evidence will be interpreted and questioned—and EPA's current policy sidesteps these tasks.]

What Is Needed for an Effective Default Policy?

Both the current and previous EPA policies on defaults raise a crucial question: How should the agency determine that the available data are or are not "usable," that is, that
they do or do not support an inference alternative to the default? The question underscores the need for guidance to implement a default policy and evaluate its effect on risk decisions and efforts to protect the environment and public health. The committee did not conduct a detailed evaluation, but a cursory examination of some recent assessments shows detailed presentations and analyses of the available data bearing on each assessment and explicit determinations that the identified data do not support an inference alternative to such defaults as low-dose linearity and the cross-species scaling of risk, but thus far not the wholesale reconsideration of generic defaults. No matter how one interprets EPA's current policy on defaults, an effective policy requires criteria to guide risk assessors on the factors that would render data "not usable" (and therefore require that a default be invoked) or capable of supporting inference alternatives to a default. Therefore it remains the case that

• Defaults need to be maintained for the steps in risk assessment that require inferences beyond those that can be clearly drawn from the available data or to otherwise fill common data gaps.
• Criteria should be available for judging whether, in specific cases, data are adequate for direct use or to support an inference in place of a default.

The "data" that may be usable in place of a default will depend on the role of the particular default in question. For example, some defaults regarding exposure may be readily inferred from observations and in this sense are "measurable," but many defaults for biologic end points will continue to be based on science and policy judgments. The latter type of defaults is the focus of this report.

Readily observable and measurable defaults, such as the amount of air breathed each day or the number of liters of water consumed, may be chosen to make assessments manageable or consistent with one another but not to support inferences beyond the available data or what can be readily observed, and they are therefore generally less difficult to justify. Decisions about replacing them with distributions (for variability analysis) or specific values based on survey data tend to be less controversial. In contrast, the defaults involving science and policy judgments, such as the relevance of a rodent cancer finding in predicting low-dose human risk, are used to draw inferences "beyond the data," that is, beyond what may be directly observable through scientific study. The next section gives examples of important defaults of that kind related to the hazard-identification and dose-response assessment steps. Inferences are needed when underlying biologic knowledge is uncertain or absent. Indeed, fundamental lack of understanding of key biologic phenomena can remain after many years of research. In some cases, however, research "data"—typically on pharmacokinetic (PK) behavior and modes of toxic action—support an inference different from that implicit in the default. Determining whether such "data" are adequate to support a different inference is often difficult and controversial. Much of the emphasis of this chapter is on the defaults chosen as "inferences" in the presence of considerable uncertainty, not on those chosen to represent observed parameters or to fill gaps in data on readily observable phenomena.
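To make the distinction concrete, measurable exposure defaults of the kind just mentioned enter an assessment through simple intake arithmetic rather than through inference. The sketch below, in Python, uses commonly cited point defaults for adult drinking-water intake and body weight; the contaminant concentration and the variable names are illustrative assumptions rather than values drawn from this report.

    # Illustrative average daily dose from drinking water, using commonly
    # cited point defaults (assumed here only for illustration).
    concentration_mg_per_L = 0.005    # hypothetical contaminant concentration in tap water
    intake_L_per_day = 2.0            # commonly cited default adult water intake
    body_weight_kg = 70.0             # commonly cited default adult body weight

    # Average daily dose in mg per kg of body weight per day
    average_daily_dose = concentration_mg_per_L * intake_L_per_day / body_weight_kg
    print(f"Average daily dose: {average_daily_dose:.2e} mg/kg-day")

Replacing such point defaults with survey-based values or distributions changes only these inputs, which is consistent with the committee's observation that decisions of that kind tend to be less controversial.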
In the discussions in this chapter, simply for ease of presentation, the committee uses the term departures in offering its views regarding the use of inferences based on substance-specific data rather than defaults. Departures in the sense used in this report relates to the decision in specific cases as to whether data are adequate to support an inference different from the default and to make it unnecessary to adopt the default. Recognizing the challenge
of interpreting EPA's policy, the committee, to be consistent with its charge, offers its discussions and recommendations in the context of current EPA policy.

THE ENVIRONMENTAL PROTECTION AGENCY'S SYSTEM OF DEFAULTS

Explicit Defaults

The system of inferences used in EPA risk assessments is contained in the agency's reports, staff papers, procedural manuals and guidance documents. These materials provide some advice and information on interpreting the strengths and limitations of various types of scientific datasets and on data synthesis, including whether a body of data supports a default or alternative inference, and risk assessment methods. Guidance is given on assessment of risks of cancer (EPA 2005a), neurotoxicity (EPA 1998a), developmental toxicity (EPA 1991a), and reproductive toxicity (EPA 1996); on Monte Carlo analysis (EPA 1997); on assessment of chemical mixtures (EPA 1986, 2000b); on reference-dose (RfD) and reference-concentration (RfC) processes (EPA 1994, 2002a,b); and on how to judge data on whether, for example, male rat kidney tumors (EPA 1991b) or rodent thyroid tumors (EPA 1998b) are relevant to humans (see, for example, Box 2-1 and Table D-1). The toxicity guidance documents also identify some defaults commonly used in assessments covered by the guidance. Tables 6-1 and 6-2 list some of the important defaults for carcinogen and noncarcinogen risk assessments.

Missing Defaults

In addition to explicitly recognized defaults, EPA relies on a series of implicit or "missing" defaults—assumptions that may sometimes exert great influence on risk characterization. For a risk assessment to be completed, every "inference gap" must have been "bridged" with some assumption, whether explicitly stated or not. Assumptions analogous to missing defaults are made in every field. For example, it is common to treat a pair of variables as independent when no information exists about any relationship between them. That assumption may well be reasonable, but it imposes a powerful condition on the analysis: that the correlation coefficient between the variables is exactly 0.0 rather than any other value between -1 and 1. Use of missing defaults has become so ingrained in EPA risk-assessment practice that it is as though EPA has chosen the same assumptions explicitly. The committee recommends that EPA systematically examine the risk-assessment process and identify key instances of the bridging of an inference gap with a missing default, examine its basis, and consider alternatives if such a default is not sufficiently justified.

[Footnote: Science and Judgment in Risk Assessment (NRC 1994) coined the term missing default to describe the use of de facto assumptions by EPA without explicit explanation. These de facto assumptions may also be thought of as "implicit defaults."]

This committee is concerned particularly about two missing defaults. First, agents that have not been examined sufficiently in epidemiologic or toxicologic studies are insufficiently included in or even excluded from risk assessments. Typically, there is no description of the risks potentially posed by these agents in the risk characterization, so their presence often carries no weight in decision-making. With few notable exceptions (for example, dioxin-like compounds), they are treated as though they pose no risk that should be subject to regulation in EPA's air, drinking-water, and hazardous-waste site programs. Also with very few
exceptions, EPA treats all adults as equally susceptible to carcinogens that act via a linear mode of action (MOA) (see Chapter 5 and, for a recent example, EPA 2007a). Table 6-3 lists those and several other apparently missing EPA defaults.

Both explicit and missing defaults used by EPA are a cornerstone of the agency's approach to facilitating human health risk assessment in the face of inherent scientific limitations that may prevent verification of any particular causal model. Understanding of the complications introduced by EPA's policy and practice regarding defaults is central to evaluating EPA's management of uncertainty.

TABLE 6-1  Examples of Explicit EPA Default Carcinogen Risk-Assessment Assumptions

Issue: Extrapolation across human populations
EPA Default Approach: "When cancer effects in exposed humans are attributed to exposure to an agent, the default option is that the resulting data are predictive of cancer in any other exposed human population." (EPA 2005a, p. A-2) "When cancer effects are not found in an exposed human population, this information by itself is not generally sufficient to conclude that the agent poses no carcinogenic hazard to this or other populations of potentially exposed humans, including susceptible subpopulations or lifestages." (EPA 2005a, p. A-2)

Issue: Extrapolation of results from animals to humans
EPA Default Approach: "Positive effects in animal cancer studies indicate that the agent under study can have carcinogenic potential in humans." (EPA 2005a, p. A-3) "When cancer effects are not found in well-conducted animal cancer studies in two or more appropriate species and other information does not support the carcinogenic potential of the agent, these data provide a basis for concluding that the agent is not likely to possess human carcinogenic potential, in the absence of human data to the contrary." (EPA 2005a, p. A-4)

Issue: Extrapolation of metabolic pathways across species, age groups, and sexes
EPA Default Approach: "There is a similarity of the basic pathways of metabolism and the occurrence of metabolites in tissues in regard to the species-to-species extrapolation of cancer hazard and risk." (EPA 2005a, p. A-6)

Issue: Extrapolation of toxicokinetics across species, age groups, and sexes
EPA Default Approach: "As a default for oral exposure, a human equivalent dose for adults is estimated from data on another species by an adjustment of animal applied oral dose by a scaling factor based on body weight to the 3/4 power. The same factor is used for children because it is slightly more protective than using children's body weight." (EPA 2005a, p. A-7)

Issue: Shape of dose-response relationship
EPA Default Approach: "When the weight of evidence evaluation of all available data are insufficient to establish the mode of action for a tumor site and when scientifically plausible based on the available data, linear extrapolation is used as a default approach, because linear extrapolation generally is considered to be a health-protective approach. Nonlinear approaches generally should not be used in cases where the mode of action has not been ascertained. Where alternative approaches with significant biological support are available for the same tumor response and no scientific consensus favors a single approach, an assessment may present results based on more than one approach." (EPA 2005a, p. 3-21)
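The toxicokinetic entry in Table 6-1 corresponds to a simple allometric calculation. The sketch below, in Python, follows the usual reading of body-weight-to-the-3/4-power scaling for oral doses expressed per kilogram of body weight; the animal dose and body weights are illustrative assumptions rather than values from any particular assessment.

    # Human-equivalent dose (HED) under the body-weight^(3/4) scaling default.
    # Total daily dose is assumed to scale as BW**0.75, so a dose rate in
    # mg/kg-day scales by (BW_animal / BW_human)**0.25.
    animal_dose_mg_per_kg_day = 10.0   # illustrative rodent dose
    bw_animal_kg = 0.25                # illustrative rat body weight
    bw_human_kg = 70.0                 # reference adult body weight

    adjustment = (bw_animal_kg / bw_human_kg) ** 0.25
    hed_mg_per_kg_day = animal_dose_mg_per_kg_day * adjustment
    print(f"Dosimetric adjustment factor: {adjustment:.2f}")    # about 0.24 for a rat
    print(f"Human-equivalent dose: {hed_mg_per_kg_day:.1f} mg/kg-day")

Chemical-specific pharmacokinetic information, such as the PBPK models discussed later in this chapter, is the kind of evidence that can support an inference in place of this generic adjustment.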

TABLE 6-2  Examples of Explicit EPA Default Noncarcinogen Risk-Assessment Assumptions

Issue: Relevant human health end point and extrapolation from animals to humans
EPA Default Approach: "The effect used for determining the NOAEL, LOAEL, or benchmark dose in deriving the RfD or RfC is the most sensitive adverse reproductive end point (that is, the critical effect) from the most appropriate or, in the absence of such information, the most sensitive mammalian species." (EPA 1996, p. 77)

Issue: Adjustment to account for differences between humans and animal test species
EPA Default Approach: Factor of 1, 3, or 10. (EPA 2002a, p. 2-12)

Issue: Heterogeneity among humans
EPA Default Approach: Factor of 1, 3, or 10. (EPA 2002a, p. 2-12)

Issue: Shape of dose-response relationship
EPA Default Approach: "In quantitative dose-response assessment, a nonlinear dose-response relationship is assumed for noncancer health effects unless mode of action or pharmacodynamic information indicates otherwise." (EPA 1996, p. 75)

Issue: Human risk estimate
EPA Default Approach: Division of the point of departure (for example, NOAEL, LOAEL, or benchmark dose) by the appropriate uncertainty factors to take into account, for example, the magnitude of the LOAEL compared with the NOAEL, interspecies differences, or heterogeneity among members of the human population produces "an estimate (with uncertainty spanning perhaps an order of magnitude) of a daily exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime." (EPA 1998a, p. 57)

Note: NOAEL = no-observed-adverse-effect level; LOAEL = lowest-observed-adverse-effect level.

COMPLICATIONS INTRODUCED BY USE OF DEFAULTS

The National Research Council (NRC 1994) noted that although EPA had justified the selection of some of its defaults, many had received incomplete scrutiny by the agency. In the agency's Guidelines for Carcinogen Risk Assessment (EPA 2005a), it elucidated more fully the bases of many of its defaults. Selection of defaults by EPA has been controversial, and the controversies were described in Science and Judgment in Risk Assessment (NRC 1994, Chapter 6 and Appendices N-1 and N-2). Because choice of defaults involves a blend of science and risk-assessment policy, controversy is inevitable. Some have argued that EPA has selected defaults at each opportunity that are needlessly "conservative" and result in large overestimates of human risk (OMB 1990; Breyer 1992; Perhac 1996). Others have argued—given the large scientific uncertainties surrounding risk assessment, human variability in both exposure to and response to toxic substances, and various missing defaults with "nonconservative" biases—that risk overestimation might not be common in EPA's practices and that risk underestimation may occur (Finkel 1997; EPA SAB 1997, 1999). EPA (2004a, p. 20) states that the sum of conservative risk estimates for a chemical mixture overstates risk to a relatively modest extent (a factor of 2-5). In general, estimates based on animal extrapolations have been found to be concordant with those based on epidemiologic studies (Allen et al. 1988; Kaldor et al. 1988; Zeise 1994), and in several cases human data have indicated that animal-based estimates were not conservative for the population as a whole (see discussion in Chapter 4).

TABLE 6-3  Examples of "Missing" Defaults in EPA "Default" Dose-Response Assessments

• For low-dose linear agents, all humans are equally susceptible during the same life stage (when estimates are based on animal bioassay data) (EPA 2005a). The agency assumes that the linear extrapolation procedure accounts for human variation (explained in Chapter 5), but does not formally account for human variation in predicting risk. For low-dose nonlinear agents, an RfD is derived with an uncertainty factor for interhuman variability of 1-10 (EPA 2004a, p. 44; EPA 2005a, p. 3-24).
• Tumor incidence from conventional chronic rodent studies is treated as representative of the effect of lifetime human exposures after species dose equivalence adjustments (EPA 2005a). For chemicals established as operating by a mutagenic mode of action, that holds after adjustment for early-life sensitivity (EPA 2005b). This assumes (1) that humans and rodents have the same "biologic clock," that is, that rodents and humans exposed for a lifetime to the same (species-corrected) dose will have the same cancer risk, and (2) that a chronic rodent bioassay, which doses only in adulthood and misses late old age (EPA 2002a, p. 41), is representative of a lifetime of rodent exposure.
• Agents have no in utero carcinogenic activity. Although the agency notes that in utero activity is a concern, default approaches do not take carcinogenic activity from in utero exposure into account, and risks from in utero exposure are not calculated (EPA 2005b; EPA 2006a, p. 29).
• For known or likely carcinogens not established as mutagens, there is no difference in susceptibility at different ages (EPA 2005b).
• Nonlinear carcinogens and noncarcinogens act independently of background exposures and host susceptibility (see Chapter 5 for full discussion).
• Chemicals that lack both adequate epidemiologic and animal bioassay data are treated as though they pose no risk of cancer worthy of regulatory attention, with few exceptions. They are typically classified as having "inadequate information to assess carcinogenic potential" (EPA 2005a, Section 2.5); consequently, no cancer dose-response assessment is performed (EPA 2005a, p. 3-2). Integrated Risk Information System and provisional peer-reviewed toxicity values are then based on noncancer end points, and cancer risk estimates are not presented.

In any event, the committee observes that any set of defaults will impose value judgments on balancing potential errors of overestimation and underestimation of risk even if the judgments dictate that the balance be exactly indifferent between the two. Thus, the issue is not whether to accept a value-laden system of model choice but which value judgments EPA's assessments will reflect. Some members of the Science and Judgment in Risk Assessment committee endorsed the view that risk-assessment policy should seek a "plausible conservatism" in the choice of default options rather than seeking to impose the alternative value judgment that models should strive to balance errors of underestimation and overestimation exactly (Finkel 1994); others took the view that relative scientific plausibility alone should govern the choice of defaults and the motivation for departing from them (McClellan and North 1994). EPA (2004a, pp.
11-12) acknowledged the debate:

EPA seeks to adequately protect public and environmental health by ensuring that risk is not likely to be underestimated. However, because there are many views on what "adequate" protection is, some may consider the risk assessment that supports a particular protection
level to be "too conservative" (that is, it overestimates risk), while others may feel it is "not conservative enough" (that is, it underestimates risk). . . . Even with an optimal cost-benefit solution, in a heterogeneous society, some members of the population will bear a disproportionate fraction of the costs while others will enjoy a disproportionate fraction of the benefits (Pacala et al. 2003). Thus, inevitably, different segments of our society will view EPA's approach to public health and environmental protection with different perspectives.

[Footnote: This use of conservatism is intended to describe the situation in which the assumptions and defaults used in risk assessment are likely to overstate the true but unknowable risk. It is derived from the public-health dictum that when science is uncertain, judgments based on it should err on the side of public-health protection.]

In addition to the debate over how "conservative" default assumptions should be, there is tension between their use and the complete characterization of uncertainty. For example, it is possible to imagine eliminating defaults and instead using ranges of plausible assumptions in their place. Doing so, however, could produce such a broad range of risk estimates, with no clear way to distinguish their relative scientific merits, that the result could be useless for the purpose of choosing among various risk-management options for decision-making (see Chapter 8). As explained above, using defaults ameliorates that problem but at the cost of reporting only a portion of the complete range of risk estimates that is consistent with available scientific knowledge. In some cases, use of defaults overstates the central tendency of the complete range; in other cases, it underestimates the central tendency. As discussed below, that pitfall is important because of the ubiquitous nature of tradeoffs that surround most risk-management decisions.

How EPA has responded to suggestions to improve its system of defaults reveals three related issues. First, the agency has not published clear, general guidance on what level of evidence is needed to justify use of chemical-specific evidence and not use a default, although EPA has provided some specific guidance for a small number of particular defaults (see below). Second, as part of its current practice of using defaults, EPA often does not quantify the portion of the total uncertainty characterized in the resulting risk estimate or RfD that is due to the presence of competing plausible causal models. EPA in its various guidance documents and reviews has provided a scientific justification for many of its defaults (for example, EPA 1991a, 2002b, 2004a, 2005a,b). In some cases, it has demonstrated that the defaults are plausible, but not the extent to which a default may produce an estimate of the risk or RfD different from that produced by a plausible alternative model. Tables 6-1 and 6-2 list explicit defaults used by EPA. A notable example is the use of the linear no-threshold dose-response relationship for extrapolation of cancer risk below the point of departure when there is no evidence of an MOA that would introduce nonlinearity. That assumption is based on both mechanistic hypotheses and empirical evidence. "Low-dose nonlinear" carcinogens and chemicals without established carcinogenic properties are assumed to follow threshold-like dose-response relationships even when, as in the case of chloroform, it is acknowledged that multiple modes of action, including genotoxicity, cannot be ruled out (EPA SAB 2000, p. 1; EPA 2001, p. 42). The nonlinear effects are also presumed to act independently of background processes, although for many mechanisms (such as receptor-mediated ones) there can be endogenous and exogenous agents present in the population that contribute to the same disease process to which the toxicant under study contributes (see Chapter 5).

[Footnote: The agency's most recent cancer and noncancer guidelines do not strictly assume biologic thresholds, because of "the difficulty of empirically distinguishing a true threshold from a dose-response curve that is nonlinear at low doses"; instead, they refer to the dose-response relationships as low-dose nonlinear (EPA 2005a).]
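For the linear default just described, extrapolation below the point of departure amounts to drawing a straight line from the point of departure to zero dose. A minimal sketch in Python follows; the benchmark dose and the environmental exposure level are illustrative assumptions, not values from any EPA assessment.

    # Default linear (no-threshold) extrapolation below the point of departure.
    bmdl10_mg_per_kg_day = 5.0     # illustrative lower bound on the dose producing 10% extra risk
    benchmark_response = 0.10      # extra risk associated with the point of departure

    slope_factor = benchmark_response / bmdl10_mg_per_kg_day   # risk per mg/kg-day
    environmental_dose_mg_per_kg_day = 0.001                    # illustrative exposure
    extra_risk = slope_factor * environmental_dose_mg_per_kg_day
    print(f"Slope factor: {slope_factor:.3f} per mg/kg-day")
    print(f"Extra risk at the illustrative exposure: {extra_risk:.1e}")

A nonlinear treatment of the same point of departure would instead divide it by uncertainty factors to obtain a reference value, which is why the choice between these two defaults can dominate the resulting estimate.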
EPA risk-assessment guidance acknowledges that defaults are uncertain (EPA 2002a, 2005a). In practice, the agency addresses the uncertainty by discussing it qualitatively. EPA
has recently been criticized, however, for not describing the range of risk estimates associated with alternative assumptions quantitatively (NRC 2006a), and it has been encouraged in various forums to begin to develop the methodology and data to describe the uncertainty in dose-response modeling quantitatively (EPA SAB 2004b; NRC 2007a).

Third, EPA has not established a clear set of standards to apply when evidence of an alternative assumption is sufficiently robust not to invoke a default. EPA (2005a, p. 1-9) states that "with a multitude of types of data, analyses, and risk assessments, as well as the diversity of needs of decision makers, it is neither possible nor desirable to specify step-by-step criteria for decisions to invoke a default option." The committee agrees that it is neither possible nor desirable to reduce the evaluation of defaults to a checklist. However, failure to establish clear guidelines detailing the issues that must be addressed to depart from a default and the type of evidence that would be compelling can have a number of adverse consequences. The lack of clear standards may reduce the incentive for further research (Finkel 2003). With no guidance on criteria for using an alternative assumption, it is difficult for an interested party to understand the type of scientific information that might be required by the agency, and a lack of clear standards can make the process of deciding whether new research data (instead of a default) are usable appear to be arbitrary. The committee considers that clear evidence standards for deciding to retain or depart from defaults can make the process more transparent, consistent, and fair for all stakeholders involved and enhance their trust in the process. Examples from EPA (discussed below) demonstrate that it is possible to specify criteria for departure from defaults.

Risk estimates developed with defaults focus on a portion of the scientifically plausible risk-estimate range. However, because some defaults may lead to the overstatement of the risk posed by a chemical and others to an understatement of risk, EPA needs to be mindful of the influence of defaults on risk estimates when the estimates will influence risk-management decisions. Intervention options often involve tradeoffs, and the tradeoffs being considered (such as replacement of one chemical with another in a production process) might result in risk estimates whose health protectiveness depends on the defaults used in estimation. An example is the tradeoff between the risks resulting from exposure to mercury and PCBs in fish and the nutritional benefit of fish consumption (Cohen et al. 2005). When chemical risks are being compared, the agency can minimize the differential effects of defaults by ensuring that they are applied consistently. When chemical risks are being compared with other considerations whose estimated effects are not influenced by defaults, EPA should emphasize the quantitative characterization of the contribution of the defaults to uncertainty (as discussed below).

ENHANCEMENTS OF THE ENVIRONMENTAL PROTECTION AGENCY'S DEFAULT APPROACH

This section describes the committee's recommendations for improving how defaults are chosen, used, and modified.
These recommendations include continued and expanded use of the best, most current science to choose, justify, and, when appropriate, revise EPA's default assumptions; development of a clear standard to determine when evidence supporting an alternative assumption is robust enough that the default need not be invoked, and development of various sets of scientific criteria for identifying when an alternative has met that standard; making explicit the existing assumptions or developing new defaults to address the missing defaults, such as treatment of chemicals with limited information as though they pose risks that do not require regulatory action; and quantifying the risk estimates emerging
from more than one model (assumption) when EPA has determined that an alternative model is sufficiently well developed and validated to be presented alongside the risks resulting from use of the default.

Best Use of Current Science to Define Defaults

The defaults selected for EPA's risk assessments and described in the agency's guidelines should be periodically reviewed to determine their consistency with evolving science. The advance of scientific knowledge relevant to the selection of defaults is typically associated with studies of specific agents that provide insights into the applicability of alternative models to those agents (and perhaps also to related agents). As knowledge accumulates, it may point to the need for revision of one or more defaults for entire classes of related agents or even for all agents. Because general scientific understanding is continually evolving, it is essential that EPA remain committed to evaluating the bases of its defaults. Chapter 5 provides an example of how EPA might evaluate and revise its default dose-response assessment assumptions in order to take into account the growing understanding of how dose-response assessment depends on interindividual variability and background exposures to a particular chemical and to chemicals that have similar MOAs.

Guidelines describing defaults should include a detailed description of the underlying science to justify the plausibility of the default for a wide array of circumstances. For example, the assumed relevance of rodent carcinogenicity testing to human risk might be justified by the high degree of common genetics across mammalian species and by empirical evidence that rodents are useful models of human disease processes. The documentation should also include the known and suspected limitations of the default's applicability in any specific case. In the example above, limitations might include known differences in organ sensitivity and enzyme pathways between rodents and humans. The documentation should systematically establish grounds for departing from the default.

None of the possible inference options that are evaluated for their scientific strengths can be shown with high certainty to be generically applicable, but a default must be chosen from among them. As the Red Book pointed out, an element of "risk-assessment policy" will need to be invoked for the selection of defaults. EPA should use available science to the maximum extent and clearly specify the basis of its final selection of defaults. The same process should be used when new defaults are being considered to replace existing ones.

Clear Standards for Departures from Defaults

In keeping with the Red Book's recommendations concerning the need for flexibility in the application of EPA's inference guidelines, EPA has accepted alternatives to defaults in several specific cases. For example, the last decade saw major advances in the development of physiologically based pharmacokinetic (PBPK) models, and the agency has found these models useful to replace defaults in cross-route and cross-species extrapolation. In the agency's toxicologic review of 1,1,1-trichloroethane (EPA 2007a), for example, it evaluated 14 PBPK models that had been published in peer-reviewed journals, selected those it judged to be best supported, and then used model results to assess animal-to-human differences in the pharmacokinetic behavior of 1,1,1-trichloroethane.
The typical default uncertainty factor (UF) of 10, used to extrapolate animal findings to humans, is assumed by default to be made
up of two factors of about 3: one for PK differences and the other for pharmacodynamic (PD) differences. In the draft 1,1,1-trichloroethane assessment, the agency used PBPK model results in place of the default UF of 3 for PK differences; but in the absence of information on PD differences, it retained the default UF of 3 for PD. This example reflects increased agency recognition of the value of reliable scientific information to reduce model uncertainties in risk assessment.

BOX 6-1  Boron: Use of Data-Derived Uncertainty Factors

EPA has been struggling with characterization of uncertainty in risk assessments for decades. In most cases involving noncancer health effects, default uncertainty factors are used to account for conversion of subchronic to chronic exposure data, the adequacy of the database, extrapolation from the lowest-observed-adverse-effect level to a no-observed-adverse-effect level, interspecies extrapolation, and human variability. Inadequacies in the database often compel the agency to rely on default assumptions to compensate for gaps in data. In the case of the boron risk assessment, data were available, so EPA could apply a "data-derived approach" to develop uncertainty factors. This approach "uses available toxicokinetic and toxicodynamic data in the determination of uncertainty factors, rather than relying on the standard default values" (Zhao et al. 1999). The boron case illustrates issues surrounding the development and use of data-derived uncertainty factors by the agency.

Without endorsing the specifics, the committee notes that in the boron risk assessment the availability of data lowered the uncertainty factor by roughly one-third, from 100 to 66. Chemical-specific pharmacokinetic and physiologic data were used to derive the factors (DeWoskin et al. 2007). Specifically, data on renal clearance from studies of pregnant rats and pregnant humans were used in determining data-driven interspecies pharmacokinetic adjustments, and glomerular-filtration variability in pregnant women was used to develop the nondefault values for intraspecies pharmacokinetic adjustments.

The data-derived approach used in the risk assessment was largely supported by the three external reviewers of the risk assessment (see EPA 2004b, p. 110):

All three reviewers agreed that the new pharmacokinetic data on clearance of boron in rats and humans should be used for derivation of an uncertainty factor instead of a default factor. Comments included statements that EPA should always attempt to use real data instead of default factors and a statement that this use of clearance data is a significant step forward in the general EPA methodology for deriving uncertainty.

The use of data-driven uncertainty factors was not without controversy, as reported in a 2004 Risk Policy Report: "environmentalists are concerned EPA is eroding its long-standing practice of using established safety factors when faced with scientific uncertainties. 'Our major concern is that this represents a major move by EPA away from the concept of defaults, and towards a concept of default if we think that it's required, and if there are data to support a default,' a scientist with the Natural Resources Defense Council says. EPA may use a 'scrap of evidence' to support the idea that one chemical is like another, reducing the need for important safety factors, the source says" (Risk Policy Report 2004, p. 3).
In another recent example (see Box 6-1), EPA used chemical-specific PK and physiologic data to derive two UFs (for extrapolating from animals to humans and for human variability) in establishing the RfD for boron. Those examples show that EPA has departed from default assumptions in specific cases; however, the committee believes that EPA and the research community would benefit from the development of clear standards and criteria for such departures.

[Footnote: The assumption that PK and PD are similar in their contribution to interindividual heterogeneity is likely to be incorrect. Hattis and Lynch (2007) argued that PD factors are likely to be more important.]
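The arithmetic behind such data-derived factors is easy to sketch. Assuming the conventional split of each 10-fold factor into roughly equal pharmacokinetic and pharmacodynamic halves of about 3.16, the Python fragment below contrasts a fully default composite uncertainty factor with one in which the pharmacokinetic halves are replaced by chemical-specific values; the point of departure and the replacement values are illustrative choices made so that the reduction mirrors the roughly one-third decrease described in Box 6-1, not EPA's published boron inputs.

    # Composite uncertainty factor (UF): fully default versus partly data-derived.
    # Each 10-fold factor is treated as ~3.16 (pharmacokinetic, PK) x ~3.16 (pharmacodynamic, PD).
    pod_mg_per_kg_day = 10.0     # illustrative point of departure (e.g., a BMDL)

    default_uf = (3.16 * 3.16) * (3.16 * 3.16)    # interspecies x intraspecies, about 100

    interspecies_pk = 3.3        # illustrative data-derived animal-to-human clearance ratio
    intraspecies_pk = 2.0        # illustrative data-derived human variability in clearance
    data_derived_uf = (interspecies_pk * 3.16) * (intraspecies_pk * 3.16)   # about 66

    print(f"Default composite UF: {default_uf:.0f}")
    print(f"Data-derived composite UF: {data_derived_uf:.0f}")
    print(f"RfD with default UF: {pod_mg_per_kg_day / default_uf:.2f} mg/kg-day")
    print(f"RfD with data-derived UF: {pod_mg_per_kg_day / data_derived_uf:.2f} mg/kg-day")

As the footnote above notes, treating the PK and PD halves as equal contributors is itself an assumption, so even a data-derived composite factor retains a default element.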

Developing clear standards and criteria for departing from defaults requires a system that has two components: a single "evidentiary standard" governing how EPA considers alternative assumptions in relation to the default and the specific scientific criteria that EPA will use to gauge whether an alternative model has met the evidentiary standard.

Evidentiary Standard

Because of the effort that EPA has invested in selecting its current defaults and the consistency that defaults confer on the risk-assessment process, the use of an alternative to the default in specific cases faces a substantial hurdle and should be supported by specific theory and evidence. The committee recommends that EPA adopt an alternative assumption in place of a default when it determines that the alternative is "clearly superior," that is, that its plausibility clearly exceeds the plausibility of the default.

[Footnote: In legal parlance, a "beyond a reasonable doubt" standard would be "clearly superior." The term clearly superior should not be interpreted quantitatively, but the committee notes that statistical P values can also be used as an analogy. For example, rejecting the null in favor of the alternative only when P < 0.05 could be viewed as insisting that the alternative hypothesis is "clearly superior" to the "default null."]

Specific Criteria to Judge Alternatives

The scientific questions that should be addressed to assess whether an alternative to a default is clearly superior will depend on the particular inference gap that is to be bridged. The committee recommends that EPA establish issue-specific criteria for bridging inference gaps. Important issues that require development of criteria include the use of PBPK models vs allometry to scale doses across species, the relevance of animal tumors to humans, and PD differences between animals and humans. Many of those issues are relevant to the unification of cancer and noncancer dose-response modeling described in Chapter 5.

EPA in specific cases has developed criteria for departing from defaults. Three examples are presented below. The committee notes that these cases are presented as starting points for the development of criteria for departing from defaults, and their use does not imply that the committee agrees with their rationale in every detail.

Low-dose extrapolation for thyroid follicular tumors in rodents. In 1998, EPA developed guidance for when and how to depart from the default assumption that a substance that causes thyroid follicular tumors in rodents will have a linear dose-response relationship in humans (EPA 1998b). That guidance states clearly that EPA will consider a margin-of-exposure approach, rather than a linear approach, when it can be demonstrated that a particular rodent carcinogen is not mutagenic, that it acts to disrupt the thyroid-pituitary axis, and that no MOA other than antithyroid activity can account for the observed rodent tumor formation. EPA then presents eight criteria for determining whether the substance disrupts the thyroid-pituitary axis and states that the first five must be satisfied (the remaining three are "desirable").

Relevance to humans of animal α2µ-globulin carcinogens. In the case of criteria for setting aside the relevance of renal tumors that occurred after exposure to agents that act through the α2µ-globulin MOA, EPA developed clear criteria for departure from the default assumption that animal tumors are relevant to human risk. EPA (1991b) specified two conditions that must be satisfied to replace that default. First, for the agent in question, α2µ-globulin must be shown to be involved in tumor development. For this condition, EPA requires three findings (p. 86): "(1) Increased number and size of hyaline droplets in
202 SCIENCE AND DECISIONS: ADVANCING RISK ASSESSMENT renal proximal tubule cells of treated male rats,” “(2) Accumulating protein in the hyaline droplets is α2µ-g[lobulin],” and “(3) Additional aspects of the pathological sequence of lesions associated with α2µ-g[lobulin] nephropathy are present.” If the first condition is satisfied, EPA states that the extent to which α2µ-globulin is responsible for renal tumors must be established. Establishing that it is largely responsible for the observed renal tumors is grounds for setting aside the default assumption of their relevance to humans. EPA states (p. 86) that this step “requires a substantial database, and not just a limited set of informa- tion confined to the male rat. For example, cancer bioassay data are needed from the mouse and the female rat to be able to demonstrate that the renal tumors are male-rat specific.” EPA lists the type of data that are helpful, for example, data showing that the chemical in question does not cause renal tumors in the NBR rat (which does not produce substantial quantities of α2µ-globulin), evidence that the substance’s binding to α2µ-globulin is revers- ible, sustained cell division of the P2 renal tubule segment that is typical of the α2µ-globulin renal-cancer mode of action, structure-activity relationship data similar to those on other known α2µ-globulin MOA substances, evidence of an absence of genotoxicity, and the presence of positive renal-carcinogenicity findings only in male rats and negative findings in mice and female rats (EPA 1991b). Applicability of the safety factor of 10 under the Food Quality Protection Act. EPA’s treatment of the safety factor of 10 to protect infants and children when setting pesticide exposure limits is an example of how the agency could establish a process to determine regularly whether data are sufficient to depart from what is, in effect, a default. The 1996 Food Quality Protection Act (FQPA) mandates the use of a safety factor of 10 unless EPA has sufficient evidence to determine that a different value is more appropriate [§ 408 (b)(2)(c)]. The EPA Office of Pesticide Programs (EPA 2002b) has developed a systematic weight-of- evidence approach that addresses a series of considerations, including prenatal and postnatal toxicity, the nature of the dose-response relationship, PK, and MOA. On the basis of the framework, EPA had found it unnecessary to apply the safety factor of 10 in 48 of 59 cases (reviewed in NRC 2006b). Committee’s Evaluation Those examples provide a starting point for the agency’s development of a standardized approach to departures from defaults. An improvement based on these examples would be greater specificity regarding the type of evidence that is sufficient to justify a departure. Consider, for example, EPA’s guidance for chemicals that cause follicular tumors. Sec- tion 2.2.4 of EPA 1998b (p. 21) requires that “enough information on a chemical should be given to be able to identify the sites that contribute the major effect on thyroid-pituitary function,” but EPA does not indicate what quantity and quality of information are “enough” for a researcher to make such a determination. In addition, the key statement that “where thyroid-pituitary homeostasis is maintained, the steps leading to tumor formation are not expected to develop, and the chances of tumor development are negligible” refers through- out the document to humans in general and does not address interindividual variability in homeostasis. 
EPA has presented guidance (EPA 2002b) for departing from the use of a safety factor of 10 as provided for in the FQPA. (In Chapter 5, the committee takes exception to the term safety factor, but it uses it here to avoid confusion with EPA terminology.) The guidance includes a list of issues to consider and the type of evidence to evaluate. Some of the guidelines provide sufficient specificity as to the evaluation of departures. For example, a finding of effects in humans or in more than one species militates against departure, as does a finding that the young do not recover as quickly from the adverse effects of a chemical as do adults. In contrast, some of the guidelines lack specificity. In particular, an MOA supporting the human relevance of effects observed in animals militates against departure from the default; this guideline would be more useful if it spelled out specific MOA findings that support the relevance to humans.

The committee recommends that EPA review those and other cases in which it has used substance-specific data and not invoked defaults and that it catalog the principles characterizing those departures. The principles can be used in developing more general guidance for deciding when data clearly support an inference that can be used in place of a default.

Crafting Defaults That Replace (or Make Explicit) Missing Assumptions: The Case of Chemicals with Inadequate Toxicity Data

EPA should work toward developing explicit defaults to use in place of missing defaults. To the extent possible, the new, explicit defaults should characterize the uncertainty associated with their use. Although there appear to be a number of missing defaults, this section focuses on the “untested-chemical assumption” and outlines an approach for characterizing the toxicity of untested or inadequately tested chemicals. (Chapter 5 addresses other missing defaults, including that, in the absence of chemical-specific data, EPA treats all members of the human population as though they are de facto equally susceptible to carcinogens that act via a linear MOA.) The approach attempts to strike a balance between gathering enough information to reduce uncertainty sufficiently to make the resulting estimate useful and making the approach applicable for characterizing a large number of chemicals.

In the absence of data to derive a quantitative, chemical-specific estimate of toxicity, EPA treats such chemicals as though they pose risks that do not require regulatory action in its air, drinking-water, and hazardous-waste programs. In the case of carcinogens, EPA assigns no potency factor to a chemical and thus implicitly treats it as though it poses no cancer risk, for example, chemicals whose evidence meets the standard of “inadequate information to assess carcinogenic potential” in the carcinogen guidelines (EPA 2005a, p. 1-12). For noncancer end points, EPA practice limits the product of the uncertainty factors applied to no more than 3,000. When a larger value would be required to address the uncertainty (for example, when “there is uncertainty in more than four areas of extrapolation” [EPA 2002a, p. xvii]), EPA does not derive an RfD or RfC. The vast majority of chemicals now produced lack a cancer slope factor, an RfD, an RfC, or a combination of these. The effective assumption that many chemicals pose no risk that should be subject to regulation can compromise decision making in a variety of contexts, as it is not possible to evaluate meaningfully the net health risks and benefits associated with the substitution of one chemical for another in a production process or to interpret risk estimates where there can be a large number of untested chemicals (for example, at a Superfund site) that have not been examined sufficiently in epidemiologic or toxicologic studies.

To develop a distribution of dose-response relationship estimates for chemicals on which agent-specific information is lacking, a tiered series of default distributions could be constructed.
The approach is based on the notion that for virtually all chemicals it is possible to say something about the uncertainty distribution regarding dose-response relationships. The process begins by selecting a set of cancer and noncancer end points and applying the full distribution of chemical potencies (including a data-driven probability of zero potency) to the unknown chemical in question. That initial distribution can then be narrowed by using the various types and levels of intermediate toxicity information.

At the simplest level, information on chemical structure can be used to bin chemicals in much the way that EPA uses chemical structures and physicochemical properties to perform quantitative structure-activity relationship (QSAR) analyses for premanufacturing notices and for developing distributions of toxicity parameter values derived from data on representative data-rich chemicals (the Toxic Substances Control Act [TSCA] Section 5 New Chemicals Program [EPA 2007b]). At the next level, the distributions can be further refined by including toxicologic tests and other model or experimental data to create chemical categories. That has been done to fill in data gaps in the U.S. and Organisation for Economic Co-operation and Development high-production-volume chemical programs (OECD 2007). Chemical categories in those programs have been created to help to estimate actual values for the programs’ short-term toxicity tests, but the underlying concepts could be applied to the development of distributions of cancer potencies or dose-response parameters for other chronic-toxicity end points. In the future, the results of intermediate mechanistic tests, in the context of growing understanding of toxicity networks and pathways, are likely to assist in selecting end points and estimating potency distributions. There are descriptions of how to make use of the observed correlation between carcinogenic potency and short-term toxicity values, such as the maximum tolerated dose (Crouch et al. 1982; Gold et al. 1984; Bernstein et al. 1985) and acute LD50 (Zeise et al. 1984, 1986; Crouch et al. 1987). The approach can be updated and expanded to include other data on toxicity from structure-activity and short-term tests. EPA is building databases that could facilitate such development (EPA 2007c; Dix et al. 2007); the National Research Council (NRC 2007b) advocates eventually relying on high- and medium-throughput assays for risk assessment. Finally, the most sophisticated level can involve development of toxic-potency distributions for chemicals whose structures are clearly similar to those of well-studied substances, such as polycyclic aromatic hydrocarbons and dioxin-like compounds, in a manner like current extrapolation methods (for example, see Boström et al. 2002; EPA 2003; van den Berg et al. 2006). In that way, the agency can take advantage of the wealth of intermediate toxicity data being generated in multiple settings at a stage when their precise implications for traditional dose-response estimation are not fully understood. Over the long term, EPA can develop probability distributions based on results of the intermediate assays, and the potency distribution for a chemical can become narrower as more data become available.

Those approaches have a number of limitations. For now, they would be based on results with chemicals that have already been tested in long-term bioassays. If selection for long-term bioassay testing is already associated with indications of toxicity, generalization of the results to untested chemicals could lead to an overestimation of the toxicity of the untested chemicals. The creation of potency distributions for unknown chemicals will have to include a database estimation of the probability of zero potency to reduce the possibility of systematic overestimation.
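To make the tiered idea concrete, the sketch below (illustrative only) walks through the two simplest tiers for a hypothetical untested chemical: a generic potency distribution that includes a probability of zero potency, followed by narrowing based on a short-term toxicity value. The 30% zero-potency probability, the lognormal prior, and the potency-LD50 relation are placeholder assumptions for illustration, not values from the studies cited above, and the "narrowing" step stands in for a proper statistical update.

```python
# Minimal sketch of a tiered potency-distribution default for an untested chemical.
# All numerical values are hypothetical placeholders, not EPA or literature values.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Tier 0: generic prior over all chemicals -- assume (hypothetically) 30% have
# zero carcinogenic potency; the rest follow a broad lognormal slope-factor prior.
p_zero = 0.30
is_zero = rng.random(N) < p_zero
log10_potency = rng.normal(loc=-1.0, scale=1.5, size=N)  # log10 of (mg/kg-day)^-1
log10_potency[is_zero] = -np.inf                          # zero potency

# Tier 1: narrow the nonzero part using a hypothetical log-log relation between
# potency and an acute LD50 measured for this chemical, with residual scatter
# representing the imperfect correlation (a stand-in for a real statistical update).
ld50_mg_per_kg = 50.0
predicted = -0.8 * np.log10(ld50_mg_per_kg) + 0.5         # assumed regression
tier1 = np.where(is_zero, -np.inf,
                 rng.normal(loc=predicted, scale=0.8, size=N))

def summarize(x, label):
    finite = x[np.isfinite(x)]
    print(f"{label}: P(zero potency) = {np.mean(~np.isfinite(x)):.2f}, "
          f"median slope factor = {10 ** np.median(finite):.3f} per mg/kg-day, "
          f"95th percentile = {10 ** np.percentile(finite, 95):.2f}")

summarize(log10_potency, "Tier 0 (generic prior)")
summarize(tier1, "Tier 1 (with LD50 information)")
```

Printing the two tiers side by side makes visible how much a single piece of intermediate data narrows the default distribution, which is the behavior described above.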
Characterization of the uncertainty surrounding the potency estimates will be necessary, but it should be facilitated by the probabilistic nature of the approach. The lack of sufficient data to estimate potency distributions for a wide variety of end points poses a serious challenge. Creation of such a database may be feasible now for cancer and a small number of noncancer end points but not for many of the end points of great concern, such as developmental neurotoxicity, immune toxicity, and reproductive toxicity. Full implementation of such a system will require about 10-20 years of data and method development. The committee urges EPA to begin to develop the methods for such a system by using existing data and the wealth of intermediate toxicity data being generated now by U.S. and international chemical priority-setting programs (EC 1993, 1994, 1998, 2003; 65 Fed. Reg. 81686 [2000]; NRC 2006b). When necessary, EPA can prioritize efforts to establish missing default information based on the potential impact of this information on the estimated benefits of regulatory action. This impact is most likely to be substantial for chemicals that have exposure levels that could change substantially in response to regulation (for example, chemicals that might be substituted for other chemicals that undergo more stringent control) and for chemicals whose physical and chemical properties increase the likelihood that they are relatively toxic.

Performing Multiple Risk Characterizations for Alternative Models

The current management of defaults resembles an all-or-none approach in that EPA often quantifies the dose-response relationship for one set of assumptions: either the default or whatever alternative to the default the agency adopts. Model uncertainty is discussed qualitatively; EPA discusses the scientific merits of competing assumptions. In the long term, the committee envisions research leading to improved descriptions of model uncertainty (see Chapter 4). In the near term, sensitivity analyses could be performed in which risk estimates for alternative hypotheses that are sufficiently supported by evidence are reported. This approach would require development of a framework with criteria for judging when such an analysis should be performed. The goal is not to present the multitude of possible risk estimates exhaustively but to present a small number of exemplar, plausible cases to provide the risk manager a context for understanding the additional uncertainty contributed by considering assumptions other than the default. The committee acknowledges the difficulty of assigning probabilities to alternative estimates in the face of a lack of scientific understanding related to the defaults and acknowledges that much work is needed to move toward a more probabilistic approach to model uncertainty (see Chapter 4).

The standard for reporting alternative risk estimates should be less stringent than the “clearly superior” standard recommended for use of alternatives in place of the default. The committee finds that alternative risk estimates should be reported if they are “comparably” plausible relative to the risk estimate based on the default. The standard of comparability should not be interpreted to mean that the alternative must be at least as plausible as the default; this makes sense given that the alternative risk estimates provide information on the implications of tradeoffs associated with the interventions or options to address a given risk and that a risk manager might be interested in possible outcomes even if they are less than 50% probable. The comparability standard, however, does rule out risk estimates that are possibly valid but that are based on assumptions that are substantially less plausible than the default. The purposes are to help to ensure that the set of risk estimates to be considered by the risk manager remains manageable and to prevent distraction by risk estimates that are unlikely to be valid. In the final analysis, making the term comparable operational will depend on EPA’s deciding how large a probability it is willing to accept that its risk assessment omitted the true risk.
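As a purely illustrative example of the kind of exemplar presentation envisioned here, the sketch below reports a primary risk estimate from a default linear low-dose extrapolation alongside one alternative estimate from a hypothetical threshold (margin-of-exposure-style) model. The point of departure, the threshold, and the exposure level are invented placeholders, not values from any EPA assessment.

```python
# Illustrative only: one default (linear) and one alternative (threshold) risk
# characterization for the same hypothetical chronic exposure.

def linear_default_risk(dose, pod_dose=1.0, pod_risk=0.10):
    """Default assumption: linear extrapolation below the point of departure."""
    return pod_risk * dose / pod_dose

def threshold_alternative_risk(dose, threshold=0.05, pod_dose=1.0, pod_risk=0.10):
    """Alternative assumption: negligible risk below an assumed threshold dose."""
    if dose <= threshold:
        return 0.0
    return pod_risk * (dose - threshold) / (pod_dose - threshold)

exposure = 0.02  # mg/kg-day, hypothetical environmental exposure
print(f"Primary estimate (default, linear model): {linear_default_risk(exposure):.1e}")
print(f"Exemplar alternative (threshold model):   {threshold_alternative_risk(exposure):.1e}")
```

Reporting both numbers, with the default clearly identified as primary, gives the risk manager the context for the additional model uncertainty without presenting an exhaustive set of estimates.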
EPA should consider developing guidance that explicitly directs risk assessors to present a broader array of risk estimates in “high stakes” risk-assessment situations, that is, situations in which there are potentially important countervailing risks or economic costs associated with mitigation of a target risk. The guidance should take into account the analytic cost of developing more extensive information, including the potential additional delay (see the discussion of value of information in Chapter 3).

As in the case of the “clearly superior” standard to replace the default, the agency should establish guidance for evaluation of plausibility and should issue specific criteria for the demonstration that an alternative is “comparably plausible.” EPA should exclude from consideration alternative risk estimates that fail to satisfy the “comparably plausible” criteria, because they can distract attention from the possibilities that have a reasonable level of scientific support. Specifically, the committee discourages EPA from the regular (pro forma) reporting that the risk posed by an evaluated chemical “may be as small as zero” unless there is scientific evidence that raises this possibility to the requisite level of plausibility. Under the proposed approach, the risk assessor would describe, to the extent possible, the relative scientific merits of alternative assumptions and the factors that make the assumptions “comparably plausible” relative to the default (and the factors that cause them to fall short of the “clearly superior” standard). Such a characterization would identify the risk estimate associated with the default assumptions and identify that estimate as the appropriate basis of risk management. Nonetheless, the risk assessment would also report a small number of other plausible exemplar assessments to convey the uncertainty associated with the preferred risk estimate. That recommendation is consistent with the National Research Council recommendation (NRC 2006a) that encouraged EPA to report risk estimates corresponding to alternative assumptions in its risk assessments.

The level of detail in and scientific support for the alternative risk estimates should be tailored to the type of questions that the risk assessment is addressing (see Chapter 3). If potential tradeoffs associated with intervention options under evaluation are modest, less detail is needed to discriminate among the intervention options. For example, while maintaining designation of the risk calculated with the default assumptions as the primary estimate, it may be sufficient to provide a range of risk estimates without detailed information about the relative plausibility of alternative values within the range; the information can then be used in screening assessments to identify options whose desirability can be established robustly in the face of uncertainty. Because it is not always possible to know what options will be evaluated, simple characterizations of uncertainty can serve as a starting point for later assessments of alternative options. In all cases, refinement of the uncertainty characterization can proceed in an iterative fashion as needed to address either more serious tradeoffs or the evaluation of options and tradeoffs that were not initially contemplated. The key point is that the options to be evaluated drive the level of detail needed in the assessment (see Chapter 3).

Advantages of Multiple Risk Characterizations

Presenting a full risk characterization for models other than the default confers several benefits on the risk-assessment process. Retaining alternative risk estimates in the final risk-assessment results gives the risk manager wider latitude to understand the tradeoffs among the risk-management options. However, it is important that any evaluation of the range of risk-assessment outcomes take into account EPA’s mandate to protect public health and the environment. The committee recommends that EPA quantify the implications of using an alternative assumption when it elects to depart from a default assumption. In particular, EPA should describe how use of a default and the selected alternative influences the risk estimate for the risk-management options under consideration. For example, if a risk assessment that departs from default assumptions identifies chemical A as the lowest-risk chemical to use in a production process rather than chemical B, it should also describe which chemical would pose the lower risk if the default assumption were used. It is important for EPA to emphasize that only one assumption deserves primary consideration for risk characterization and risk management. If alternative assumptions are presented as “comparably plausible,” the default must be highlighted and given deference.
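A minimal sketch of that reporting practice, with wholly hypothetical chemicals and risk values, might look like the following; the point is simply to show the lowest-risk option under each assumption set so that a risk manager can see whether the ranking depends on the departure from the default.

```python
# Hypothetical illustration: report which substitution option looks preferable
# under the default assumptions and under the adopted alternative assumptions.

risks = {
    "default assumptions":     {"chemical A": 3e-5, "chemical B": 1e-5},
    "alternative assumptions": {"chemical A": 8e-6, "chemical B": 2e-5},
}

for assumption_set, by_chemical in risks.items():
    lowest = min(by_chemical, key=by_chemical.get)
    print(f"Under {assumption_set}: lowest-risk substitute is {lowest} "
          f"(estimated risk {by_chemical[lowest]:.0e})")
```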

The proposed approach more completely characterizes the uncertainty in the resulting risk estimate. As explained in Chapter 3, identifying the most appropriate course of action may depend on the degree of uncertainty associated with a risk estimate. Under the framework (Chapter 8), when there are multiple control options and multiple causal models, highlighting the model uncertainty can facilitate finding the optimal choices. Clear standards for departure from defaults can provide incentives for third parties to produce research in that they will know what data need to be produced that could influence the risk-assessment process. Finally, the approach facilitates the setting of priorities among research needs as a necessary component of value-of-information analysis (see Chapter 3).

CONCLUSIONS AND RECOMMENDATIONS

EPA’s current policy on defaults calls for evaluating all relevant and available data first and considering defaults only when it is determined that data are not available or are unusable. It is not known to what extent that is practiced, in contrast with judging the adequacy of available data to depart from a default. Whatever the case, defaults need to be maintained for the steps in risk assessment that require inferences or to fill common data gaps. Criteria are needed for judging whether, in specific cases, data are adequate to support a different inference from the default (or whether data are sufficient to justify departure from a default). The committee urges EPA to delineate what evidence will determine how it makes these judgments and how that evidence will be interpreted and questioned. Providing a credible and consistent approach to defaults is essential to a risk-assessment process that supports regulatory decision-making. The committee provides the following recommendations to strengthen the use of defaults in EPA:

• EPA should continue and expand use of the best, most current science to support or revise its default assumptions. The committee is reluctant to specify a schedule for revising these default assumptions. Factors EPA should take into consideration in setting priorities for such revisions include (1) the extent to which the current default is inconsistent with available science; (2) the extent to which a revised default would alter risk estimates; and (3) the public health (or ecologic) importance of risk estimates that would be influenced by a revision to the default.

• EPA should work toward the development of explicitly stated defaults to take the place of implicit or missing defaults. Key priorities should be development of default approaches to support risk estimation for chemicals lacking chemical-specific information to characterize individual susceptibility to cancer (see Chapter 5) and to develop a dose-response relationship. With respect to chemicals that have inadequate data to develop a dose-response relationship, information is currently available to make progress on cancer and a limited number of noncancer end points. EPA should also begin developing methods that take advantage of information already available from U.S. or international prioritization programs, with a goal of creating a comprehensive system over the next 10 to 20 years. When necessary, EPA can prioritize efforts to target chemicals for which this information is most likely to influence the estimated benefits of regulatory action.
• In the next 2-5 years, EPA should develop clear criteria for the level of evidence needed to justify use of alternative assumptions in place of defaults. The committee recommends that departure should occur only when the evidence of the plausibility of the alternative is clearly superior to the evidence of the value of the default. In addition to a general standard for the level of evidence needed for use of alternative assumptions, EPA should describe specific criteria that must be addressed for use of alternatives to each particular default.

• When none of the alternative risk estimates achieves a level of plausibility sufficient to justify use in place of a default, EPA should characterize the impact of the uncertainty associated with use of the default assumptions. To the extent feasible, the characterization should be quantitative. In the next 2-5 years, EPA should develop criteria for the listing of the alternative values, limiting attention to assumptions whose plausibility is at least comparable with that of the default. The goal is not to present the multitude of possible risk estimates exhaustively but to present a small number of exemplar, plausible cases to provide a context for understanding the uncertainty in the assessment. The committee acknowledges the difficulty of assigning probabilities to alternative estimates in the face of a lack of scientific understanding related to the defaults and acknowledges that much work is needed to move toward a more probabilistic approach to model uncertainty.

• When EPA elects to depart from a default assumption, it should quantify the implications of using an alternative assumption, including describing how use of the default and the selected alternative influences the risk estimate for risk-management options under consideration.

• EPA needs to more clearly elucidate a policy on defaults and provide guidance on its implementation and on evaluation of its impact on risk decisions and on efforts to protect the environment and public health.

References

Allen, B.C., K.S. Crump, and A.M. Shipp. 1988. Correlations between carcinogenic potency of chemicals in animals and humans. Risk Anal. 8(4):531-544.
Bernstein, L., L.S. Gold, B.N. Ames, M.C. Pike, and D.G. Hoel. 1985. Some tautologous aspects of the comparison of carcinogenic potency in rats and mice. Fundam. Appl. Toxicol. 5(1):79-86.
Boström, C.E., P. Gerde, A. Hanberg, B. Jernström, C. Johansson, T. Kyrklund, A. Rannug, M. Törnqvist, K. Victorin, and R. Westerholm. 2002. Cancer risk assessment, indicators, and guidelines for polycyclic aromatic hydrocarbons in the ambient air. Environ. Health Perspect. 110(Suppl. 3):451-488.
Breyer, S. 1992. Breaking the Vicious Circle: Toward Effective Risk Regulation. Cambridge, MA: Harvard University Press.
Clewell, H.J. III, M.E. Andersen, and H.A. Barton. 2002. A consistent approach for the application of pharmacokinetic modeling in cancer and noncancer risk assessment. Environ. Health Perspect. 110(1):85-93.
Cohen, J., D. Bellinger, W. Connor, P. Kris-Etherton, R. Lawrence, D. Savitz, B. Shaywitz, S. Teutsch, and G. Gray. 2005. A quantitative risk-benefit analysis of changes in population fish consumption. Am. J. Prev. Med. 29(4):325-334.
Crawford, M., and R. Wilson. 1996. Low-dose linearity: The rule or the exception? Hum. Ecol. Risk Assess. 2(2):305-330.
Crouch, E.A.C., J. Feller, M.B. Fiering, E. Hakanoglu, R. Wilson, and L. Zeise. 1982. Health and Environmental Effects Document: Non-Regulatory and Cost Effective Control of Carcinogenic Hazard. Prepared for the Department of Energy, Health and Assessment Division, Office of Energy Research, by Energy and Environmental Policy Center, Harvard University, Cambridge, MA. September 1982.
Crouch, E., R. Wilson, and L. Zeise. 1987. Tautology or not tautology? Toxicol. Environ. Health 20(1-2):1-10.
DeWoskin, R.S., J.C. Lipscomb, C. Thompson, W.A. Chiu, P.
Schlosser, C. Smallwood, J. Swartout, L. Teuschler, and A. Marcus. 2007. Pharmacokinetic/physiologically based pharmacokinetic models in integrated risk in- formation system assessments. Pp. 301-348 in Toxicokinetics and Risk Assessment, J.C. Lipscomb and E.V. Ohanian, eds. New York: Informa Healthcare. Dix, D.J., K.A. Houck, M.T. Martin, A.M. Richard, R.W. Setzer, and R.J. Kavlock. 2007. The ToxCast program for prioritizing toxicity testing of environmental chemicals. Toxicol. Sci. 95(1):5-12. EC (European Commission). 1993. Commission Directive 93/67/EEC of 20 July 1993, Laying down the Principles for the Assessment of Risks to Man and the Environment of Substances Notified in Accordance with Council Directive 67/548/EEC. Official Journal of the European Communities L227:9-18.

SELECTION AND USE OF DEFAULTS 209 EC (European Commission). 1994. Commission Regulation (EC) No. 1488/94 of 28 June 1994, Laying down the Principles for the Assessment of Risks to Man and the Environment of Existing Substances in Accordance with Council Regulation (EEC) No793/93. Official Journal of the European Communities L161:3-11 [online]. Available: http://www.unitar.org/cwm/publications/cbl/ghs/Documents_2ed/C_Regional_Documents/85_EU_ Regulation148894EC.pdf [accessed Jan. 25, 2008]. EC (European Commission). 1998. Directive 98/8/EC of the European Parliament and of the Council of 16 February 1998 Concerning the Placing of Biocidal Products on the Market. Official Journal of the European Communities L123/1-L123/63 [online]. Available: http://ecb.jrc.it/legislation/1998L0008EC.pdf [accessed Jan. 28, 2008]. EC (European Commission). 2003. Technical Guidance Document in Support of Commission Directive 93/67/ EEC on Risk Assessment for New Notified Substances and Commission Regulation (EC) 1488/94 on Risk Assessment for Existing Substances, and Directive 98/8/EC of the European Parliament and the Council Concerning the Placing of Biocidal Products on the Market, 2nd Ed. European Chemicals Bureau, Joint Research Centre, Ispra, Italy [online]. Available: http://ecb.jrc.it/home.php?CONTENU=/DOCUMENTS/ TECHNICAL_GUIDANCE_DOCUMENT/EDITION_2/ [accessed Jan. 28, 2008]. EPA (U.S. Environmental Protection Agency). 1986. Guidelines for the Health Risk Assessment of Chemical Mix- tures. EPA/630/R-98/002. Office of Research and Development, U.S. Environmental Protection Agency, Wash- ington, DC. September 1986 [online]. Available: http://www.epa.gov/ncea/raf/pdfs/chem_mix/chemmix_1986. pdf [accessed Jan. 24, 2008]. EPA (U.S. Environmental Protection Agency). 1991a. Guidelines for Developmental Toxicity Risk Assessment. EPA/600/FR-91/001. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. De- cember 1991 [online]. Available: http://www.epa.gov/NCEA/raf/pdfs/devtox.pdf [accessed Jan. 10, 2008]. EPA (U.S. Environmental Protection Agency). 1991b. Alpha-2µ-Globulin: Association with Chemically-Induced Renal Toxicity and Neoplasia in the Male Rat. EPA/625/3-91/019F. Prepared for Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. February 1991. EPA (U.S. Environmental Protection Agency). 1994. Methods for Derivation of Inhalation Reference Concentra- tions and Application of Inhalation Dosimetry. EPA/600/8-90/066F. Environmental Criteria and Assessment Office, Office of Health and Environmental Assessment, Office of Research and Development, U.S. Environ- mental Protection Agency, Research Triangle Park, NC. October 1994 [online]. Available: http://cfpub.epa. gov/ncea/cfm/recordisplay.cfm?deid=71993 [accessed Jan. 24, 2008]. EPA (U.S. Environmental Protection Agency). 1996. Guidelines for Reproductive Toxicity Risk Assessment. EPA/630/R-96/009. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. Octo- ber 1996 [online]. Available: http://www.epa.gov/ncea/raf/pdfs/repro51.pdf [accessed Jan. 10, 2008]. EPA (U.S. Environmental Protection Agency). 1997. Guiding Principles for Monte Carlo Analysis. EPA/630/ R-97/001. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 1997 [online]. Available: http://www.epa.gov/ncea/raf/montecar.pdf [accessed Jan. 7, 2008]. EPA (U.S. Environmental Protection Agency). 1998a. Guidelines for Neurotoxicity Risk Assessment. EPA/630/ R-95/001F. Risk Assessment Forum, U.S. 
Environmental Protection Agency, Washington, DC. April 1998 [online]. Available: http://www.epa.gov/NCEA/raf/pdfs/neurotox.pdf [accessed Jan. 24, 2008]. EPA (U.S. Environmental Protection Agency). 1998b. Assessment of Thyroid Follicular Cell Tumors. EPA/630/ R-97-002. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 1998 [online]. Available: http://www.epa.gov/ncea/pdfs/thyroid.pdf [accessed Jan. 25, 2008]. EPA (U.S. Environmental Protection Agency). 2000a. Risk Characterization Handbook. EPA-100-B-00-002. Office of Science Policy, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. December 2000 [online]. Available: http://www.epa.gov/OSA/spc/pdfs/rchandbk.pdf [accessed Feb. 6, 2008]. EPA (U.S. Environmental Protection Agency). 2000b. Supplementary Guidance for Conducting Health Risk As- sessment of Chemical Mixtures. EPA/630/R-00/002. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. August 2000 [online]. Available: http://www.epa.gov/ncea/raf/pdfs/chem_mix/ chem_mix_08_2001.pdf [accessed Jan. 7, 2008]. EPA (U.S. Environmental Protection Agency). 2001. Toxicological Review of Chloroform (CAS No. 67-66-3) In Support of Summary Information on the Integrated Risk Information System (IRIS). EPA/635/R-01/001. U.S. Environmental Protection Agency, Washington, DC. October 2001 [online]. Available: http://www.epa. gov/iris/toxreviews/0025-tr.pdf [accessed Jan. 25, 2008]. EPA (U.S. Environmental Protection Agency). 2002a. A Review of the Reference Dose and Reference Concentration Processes. Final report. EPA/630/P-02/002F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. December 2002 [online]. Available: http://www.epa.gov/iris/RFD_FINAL%5B1%5D.pdf [accessed Jan. 14, 2008].

210 SCIENCE AND DECISIONS: ADVANCING RISK ASSESSMENT EPA (U.S. Environmental Protection Agency). 2002b. Determination of the Appropriate FQPA Safety Factor(s) in Tolerance Assessment. Office of Pesticide Programs, U.S. Environmental Protection Agency, Washington, DC. February 28, 2002 [online]. Available: http://www.epa.gov/oppfead1/trac/science/determ.pdf [accessed Jan. 25, 2008]. EPA (U.S. Environmental Protection Agency). 2003. Exposure and Human Health Reassessment of 2,3,7,8- Tetrachlorodibenzo-p-Dioxin (TCDD) and Related Compounds. NAS Review Draft. National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. December 2003 [online]. Available: http://www.epa.gov/NCEA/pdfs/dioxin/nas-review/ [accessed Jan. 9, 2008]. EPA (U.S. Environmental Protection Agency). 2004a. Risk Assessment Principles and Practices: Staff Paper. EPA/100/B-04/001. Office of the Science Advisor, U.S. Environmental Protection Agency, Washington, DC. March 2004 [online]. Available: http://www.epa.gov/osa/pdfs/ratf-final.pdf [accessed Jan. 9, 2008]. EPA (U.S. Environmental Protection Agency). 2004b. Toxicological Review of Boron and Compounds (CAS No. 7440-42-8) In Support of Summary Information on the Integrated Risk Information System (IRIS). EPA 635/04/052. U.S. Environmental Protection Agency, Washington, DC. June 2004 [online]. Available: http:// www.epa.gov/iris/toxreviews/0410-tr.pdf [accessed Jan. 25, 2008]. EPA (U.S. Environmental Protection Agency). 2005a. Guidelines for Carcinogen Risk Assessment. EPA/630/ P-03/001F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 2005 [online]. Available: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=116283 [accessed Feb. 7, 2007]. EPA (U.S. Environmental Protection Agency). 2005b. Supplemental Guidance for Assessing Susceptibility for Early-Life Exposures to Carcinogens. EPA/630/R-03/003F. Risk Assessment Forum, U.S. Environmental Pro- tection Agency, Washington, DC. March 2005 [online]. Available: http://cfpub.epa.gov/ncea/cfm/recordisplay. cfm?deid=160003 [accessed Jan. 4, 2008]. EPA (U.S. Environmental Protection Agency). 2006. Modifying EPA Radiation Risk Models Based on BEIR VII. Draft White Paper. Office of Radiation and Indoor Air, U.S. Environmental Protection Agency. August 1, 2006 [online]. Available: http://www.epa.gov/rpdweb00/docs/assessment/white-paper8106.pdf [accessed Jan. 25, 2008]. EPA (U.S. Environmental Protection Agency). 2007a. Toxicological Review of 1,1,1-Trichloroethane (CAS No. 71-55-6) In Support of Summary Information on the Integrated Risk Information System (IRIS). EPA/635/ R-03/013. U.S. Environmental Protection Agency, Washington, DC. August 2007 [online]. Available: http:// www.epa.gov/IRIS/toxreviews/0197-tr.pdf [accessed Jan. 25, 2008]. EPA (U.S. Environmental Protection Agency). 2007b. Chemical Categories Report. New Chemicals Program, Office of Pollution Prevention and Toxics, U.S. Environmental Protection Agency [online]. Available: http://www. epa.gov/opptintr/newchems/pubs/chemcat.htm [accessed Jan. 25, 2008]. EPA (U.S. Environmental Protection Agency). 2007c. Distributed Structure-Searchable Toxicity (DSSTox) Database Network. Computational Toxicology Program, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/comptox/dsstox/ [accessed Jan. 25, 2008]. EPA (U.S. Environmental Protection Agency). 2007d. Human Health Research Program: Research Progress to Benefit Public Health. 
EPA/600/F-07/001. Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. April 2007 [online]. Available: http://www.epa.gov/hhrp/files/g29888-gpi-gpo-epa- brochure.pdf [accessed Oct. 21, 2008] EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 1997. An SAB Report: Guidelines for Cancer Risk Assessment. Review of the Office of Research and Development’s Draft Guidelines for Cancer Risk Assessment. EPA-SAB-EHC-97-010. Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC. September 1997 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/ 6A6D30CFB1812384852571930066278B/$File/ehc9710.pdf [accessed Jan. 25, 2008]. EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 1999. Review of Revised Sec- tions of the Proposed Guidelines for Carcinogen Risk Assessment. Review of the Draft Revised Cancer Risk Assessment Guidelines. EPA-SAB-EC-99-015. Science Advisory Board, U.S. Environmental Protec- tion Agency, Washington, DC. July 1999 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/ 857F46C5C8B4BE4985257193004CF904/$File/ec15.pdf [accessed Jan. 25, 2008]. EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2000. Review of EPA’s Draft Chlo- roform Risk Assessment. EPA-SAB-EC-00-009. Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC. April 2000 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/ D0E41CF58569B1618525719B0064BC3A/$File/ec0009.pdf [accessed Jan. 25, 2008].

SELECTION AND USE OF DEFAULTS 211 EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2004a. Commentary on EPA’s Initia- tives to Improve Human Health Risk Assessment. Letter from Rebecca Parkin, Chair of the SAB Integrated Human Exposure, and William Glaze, Chair of the Science Advisory Board, to Michael O. Levitt, Admin- istrator, U.S. Environmental Protection Agency, Washington, DC. EPA-SAB-COM-05-001. October 24, 2004 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/36a1ca3f683ae57a85256ce9006a32d0/ 733E51AAE52223F18525718D00587997/$File/sab_com_05_001.pdf [accessed Oct. 21, 2008]. EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2004b. EPA’s Multimedia, Multpath- way, and Multireceptor Risk Assessment (3MRA) Modeling System. EPA-SAB-05-003. Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://yosemite.epa.gov/ sab/sabproduct.nsf/99390EFBFC255AE885256FFE00579745/$File/SAB-05-003_unsigned.pdf [accessed Jan. 25, 2008]. EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2006. Science and Research Budgets for the U.S. Environmental Protection Agency for Fiscal Year 2007. EPA-SAB-ADV-06-003. Science Advisory Board, Office of the Administrator, U.S. Environmental Protection Agency, Washington, DC. March 30, 2006 [online]. Available: http://www.epa.gov/science1/pdf/sab-adv-06-003.pdf [accessed Dec. 5, 2007]. EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2007. Comments on EPA’s Strategic Research Directions and Research Budget for FY 2008. EPA-SAB-07-004. Science Advisory Board, Office of the Administrator, U.S. Environmental Protection Agency, Washington, DC. March 13, 2007 [online]. Avail- able: http://www.epa.gov/science1/pdf/sab-07-004.pdf [accessed Dec. 5, 2007]. Finkel, A.M. 1994. The case for “plausible conservatism” in choosing and altering defaults. Appendix N-1 in Sci- ence and Judgment in Risk Assessment. Washington, DC: National Academy Press. Finkel, A.M. 1997. Disconnect brain and repeat after me: “Risk Assessments is too conservative.” Ann. N.Y. Acad. Sci. 837:397-417. Finkel, A.M. 2003. Too much of the “Red Book” is still (!) ahead of its time. Hum. Ecol. Risk Assess. 9(5): 1253-1271. Gilman, P. 2006. Response to “IRIS from the Inside.” Risk Anal. 26(6):1413. Gold, L.S., C.B. Sawyer, R. Magaw, G.M. Backman, M. de Veciana, R. Levinson, N.K. Hooper, W.R. Havender, L. Bernstein, R. Peto, M.C. Pike, and B.N. Ames. 1984. A carcinogenic potency database of the standardized results of animal bioassays. Environ. Health Perspect. 58:9-319. Hattis, D., and M.K. Lynch. 2007. Empirically observed distributions of pharmacokinetic and pharmacodynamic variability in humans: Implications for the derivation of single point component uncertainty factors providing equivalent protection as existing RfDs. Pp. 69-93 in Toxicokinetics and Risk Assessment, J.C. Lipscomb, and E.V. Ohanian, eds. New York: Informa Healthcare. Kaldor, J.M., N.E. Day, and K Hemminki. 1988. Quantifying the carcinogenicity of antineoplastic drugs. Eur. J. Cancer Clin. Oncol. 24(4):703-711. McClellan, R.O., and D.W. North. 1994. Making full use of scientific information in risk assessment. Appendix N-2 in Science and Judgment in Risk Assessment. Washington, DC: National Academy Press. Mills, A. 2006. IRIS from the Inside. Risk Anal. 26(6):1409-1410. NRC (National Research Council). 1983. Risk Assessment in the Federal Government: Managing the Process. 
Washington, DC: National Academy Press. NRC (National Research Council). 1994. Science and Judgment in Risk Assessment. Washington, DC: National Academy Press. NRC (National Research Council). 2006a. Health Risks from Dioxin and Related Compounds: Evaluation of the EPA Reassessment. Washington, DC: The National Academies Press. NRC (National Research Council). 2006b. Toxicity Testing for Assessment of Environmental Agents: Interim Report. Washington, DC: The National Academies Press. NRC (National Research Council). 2007a. Quantitative Approaches to Characterizing Uncertainty in Human Cancer Risk Assessment Based on Bioassay Results. Second Workshop of the Standing Committee on Risk Analysis Issues and Reviews, June 5, 2007, Washington, DC [online]. Available: http://dels.nas.edu/best/ risk_analysis/workshops.shtml [accessed Nov. 27, 2007]. NRC (National Research Council). 2007b. Toxicity Testing in the Twenty-first Century: A Vision and a Strategy. Washington, DC: The National Academies Press. OECD (Organisation for Economic Co-operation and Development). 2007. Guidance on Grouping Chemicals. Series on Testing and Assessment No. 80. ENV/JM/MONO(2007)28. Environment Directorate, Joint Meeting of the Chemicals Committee and the Working Party on Chemicals, Pesticides and Biotechnology, Organisation for Economic Co-operation and Development. September 28, 2007 [online]. Available: http://appli1.oecd. org/olis/2007doc.nsf/linkto/env-jm-mono(2007)28 [accessed Jan. 25, 2008].

212 SCIENCE AND DECISIONS: ADVANCING RISK ASSESSMENT OMB (Office of Management and Budget). 1990. Current Regulatory Issues in Risk Assessment and Risk Manage- ment in Regulatory Program of the United States, April 1, 1990-March 31, 1991. Office of Management and Budget, Washington, DC. Pacala, S.W., E. Bulte, J.A. List, and S.A. Levin. 2003. False alarm over environmental false alarms. Science 301(5637):1187-1188. Perhac, R.M. 1996. Does Risk Aversion Make a Case for Conservatism? Risk Health Saf. Environ. 7:297. Risk Policy Report. 2004. EPA Boron Review Reflects Revised Process to Boost Scientific Certainty. Inside EPA’s Risk Policy Report 11(8):3. van den Berg, M., L.S. Birnbaum, M. Denison, M. De Vito, W. Farland, M. Feeley, H. Fiedler, H. Hakansson, A. Hanberg, L. Haws, M. Rose, S. Safe, D. Schrenk, C. Tohyama, A. Tritscher, J. Tuomisto, M. Tysklind, N. Walker, and R.E. Peterson. 2006. The 2005 World Health Organization reevaluation of human and mam- malian toxic equivalency factors for dioxins and dioxin-like compounds. Toxicol. Sci. 93(2):223-241. Zeise, L. 1994. Assessment of carcinogenic risks in the workplace. Pp. 113-122 in Chemical Risk Assessment and Occupational Health: Current Applications, Limitations and Future Prospects, C.M. Smith, D.C. Christiani, and K.T. Kelsey, eds. Westport, CT: Auburn House. Zeise, L., R. Wilson, and E.A.C. Crouch. 1984. Use of acute toxicity to estimate carcinogenic risk. Risk Anal. 4(3):187-199. Zeise, L., E.A.C. Crouch, and R. Wilson. 1986. A possible relationship between toxicity and carcinogenicity. J. Am. Coll. Toxicol. 5(2):137-151. Zhao, Q., J. Unrine, and M. Dourson. 1999. Replacing the default values of 10 with data-derived values: A com- parison of two different data-derived uncertainty factors for boron. Hum. Ecol. Risk Asses. 5(5):973-983.
