Strategies for Improving Risk Assessment
Previous chapters have examined the steps of the health risk-assessment process in the sequence developed by the 1983 Red Book committee. In considering those steps, the committee observed that several common themes cut across the stages of risk assessment and arise in criticisms of each individual step. These themes are as follows:
The "Red Book" paradigm should be supplemented by applying a cross-cutting approach that uses those themes. Such an approach could ameliorate the following problems in risk assessment as it is currently practiced within the agency:
Considering the six cross-cutting themes in the planning and analysis of risk assessment will not solve the problems of risk assessment by itself. Indeed, too much emphasis on a cross-cutting vision of risk assessment might create unanticipated problems. On balance, however, the view of risk assessment proposed in Chapters 6-11 will serve two important purposes: it will give the individual cross-cutting themes a more prominent place in the risk-assessment process, and it will encourage the gradual evolution of attempts to improve risk assessment from its current, somewhat piecemeal orientation to a more holistic one, with the goal of improving the precision, comprehensibility, and usefulness for regulatory decision-making of the entire risk-assessment process. Whatever conceptual framework is used, the committee believes that EPA must develop principles for choosing default options and for judging when and how to depart from them. This controversial issue is described in the next section.
The Need For Risk-Assessment Principles
Our scientific knowledge of hazardous air pollutants has numerous gaps. Hence, there are many uncertainties in the health risk assessments of those pollutants. Some of these can be referred to as model uncertainties: for example, uncertainties regarding dose-response model choices due to a lack of knowledge about the mechanisms by which hazardous air pollutants elicit toxicity. As discussed more fully in Chapter 6, EPA has developed "default options" to use when such uncertainties arise. These options are used in the absence of convincing scientific information on which of several competing models and theories is correct. The options are not rules that bind the agency; rather, they constitute
guidelines from which the agency may depart when evaluating the risks posed by a specific substance. The agency may also change the guidelines as scientific knowledge accumulates.
The committee, as discussed in Chapter 6, believes that EPA has acted reasonably in electing to issue default options. Without uniform guidelines, there is a danger that the models used in risk assessment will be selected on an ad hoc basis, according to whether regulating a substance is thought to be politically feasible or according to other parochial concerns. In addition, guidelines can provide a predictable and consistent structure for risk assessment.
The committee believes that merely describing the default options in a risk assessment is not adequate. We believe that EPA should have principles for choosing default options and for judging when and how to depart from them. Without such principles, departures from defaults could be ad hoc, thereby undercutting the purpose of the default options. Neither the agency nor interested parties would have any guidance about the quality or quantity of evidence necessary to persuade the agency to depart from the default options or the point(s) in the process at which to present that evidence.
Moreover, without an underlying set of principles, EPA and the public will have no way to judge the wisdom of the default options themselves. The individual default options inevitably vary in their scientific basis, foundation in empirical data, degree of conservatism, plausibility, simplicity, transparency, and other attributes. If defaults were chosen without conscious reference to these or other attributes, EPA would be unable to judge the extent to which they fulfill the desired attributes. Nor could the agency make intelligent and consistent judgments about when and how to add new default options when "missing defaults" are identified. In addition, the policies that underlie EPA's choice of risk-assessment methods would not be clear to the public and Congress; for example, it would be unclear whether EPA places the highest value on protecting public health, on generating scientifically accurate estimates, or on other concerns.
The committee has identified a number of objectives that should be taken into account when considering principles for choosing and departing from default options: protecting the public health, ensuring scientific validity, minimizing serious errors in estimating risks, maximizing incentives for research, creating an orderly and predictable process, and fostering openness and trustworthiness. There might be additional relevant criteria as well.
The choice of principles inevitably involves choosing how to balance such objectives. For instance, the most open process might not be the one that yields the result most likely to be scientifically valid. Similarly, the goal of minimizing errors in estimation might conflict with that of protecting the public health, inasmuch as (given the pervasiveness of uncertainty) achievement of the latter objective might involve accepting the possibility that a given risk assessment will overestimate the risk.
The committee therefore found it difficult to agree on what principles EPA should adopt. For example, the committee debated whether EPA should base its practices on "plausible conservatism," that is, on attempting to use models that have support in the scientific community and that tend to minimize the possibility that risk estimates generated by these models will significantly underestimate true risks. The committee also discussed whether EPA instead should attempt as much as possible to base its practices on calculating the risk estimate most likely to be true in the light of current scientific knowledge. After extensive discussion, no consensus was reached on this issue.
The committee also concluded that the choice of principles to guide risk assessment, although it requires a knowledge of science and scientific judgment, ultimately depends on policy judgments, and thus is not an issue for specific consideration by the committee, even if it could agree on the substance of specific recommendations. The choice reflects decisions about how scientific data and inferences should be used in the risk-assessment process, not about which data are correct or about what inferences should be drawn from those data. Thus, the selection of principles inevitably involves choices among competing values and among competing judgments about how best to respond to uncertainty.
Many members contended that the committee ought not attempt to recommend principles, but should leave their formulation to the policy process. They concluded that weighing societal values is properly left to those who have been chosen, directly or indirectly, to represent the public. Indeed, in the view of these members, any recommendation by the committee would give the false impression that the choice of principles is ultimately an issue of science. These members also noted the sharp differentiation that Congress made between the tasks of this committee and those of the Risk Assessment and Management Commission established by Section 303 of the Clean Air Act Amendments of 1990. That commission, rather than this committee, appears to have been intended to address issues of policy.
Other members contended that the committee should attempt to recommend principles. They urged that the choice of risk-assessment principles is one of the most important decisions to be made in risk assessment and one on which risk-assessment experts, because of their expertise on the scientific issues related to the choice, ought to make themselves heard. They believe that the choice of principles is no more policy-laden than many other issues addressed by the committee, and that the decision not to recommend principles is itself a policy choice. They also note that the scientific elements involved in making the choice distinguish the selection of principles from other pure "policy" issues that the committee agreed not to address, such as the use of cost-benefit methods or the implications of the psychosocial dimensions of risk perception.
The committee has decided not to recommend principles in its report. Instead, it has included in Appendix N papers by three of its members that offer various perspectives on the issue. One paper, by Adam Finkel, urges that EPA
should strive to advance scientific consensus while minimizing serious errors of risk underestimation, by adopting an approach of "plausible conservatism." The other, by Roger McClellan and Warner North, argues that EPA should promote risk assessments that reflect current scientific understanding. Those perspectives are not intended to reflect the total range of opinion among committee members on the subject, but are presented to illustrate the issues involved.
Reporting Risk Assessments
As already mentioned, uncertainties are pervasive in risk assessment. When uncertainty concerns the magnitude of a physical quantity that can be measured or inferred from assumptions (e.g., ambient concentration), it can often be quantified, as Chapter 9 suggests.
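A parameter uncertainty of this kind can be propagated numerically by Monte Carlo simulation. The sketch below is only an illustration of the mechanics: the choice of lognormal distributions and every number in it are hypothetical, not values drawn from EPA guidance.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

N = 10_000
concentrations = []
for _ in range(N):
    # Hypothetical uncertain inputs: an emission rate (g/s) and a
    # source-to-receptor dispersion factor ((ug/m^3)/(g/s)).
    emission = random.lognormvariate(0.0, 0.3)
    dispersion = random.lognormvariate(-2.0, 0.5)
    concentrations.append(emission * dispersion)  # ambient concentration, ug/m^3

# Summarize the resulting distribution rather than a single point estimate.
concentrations.sort()
median = concentrations[N // 2]
p95 = concentrations[int(0.95 * N) - 1]
mean = statistics.fmean(concentrations)
print(f"median={median:.3f}  mean={mean:.3f}  95th pct={p95:.3f} ug/m^3")
```

The output is a distribution of ambient concentrations, so a risk manager can be shown a central estimate and an upper percentile instead of one number.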
Model uncertainties result from an inability to determine which scientific theory is correct or what assumptions should be used to derive risk estimates. Such uncertainties cannot be quantified on the basis of data. Any expression of probability, whether qualitative (e.g., a scientist's statement that a threshold is likely) or quantitative (e.g., a scientist's statement that there is a 90% probability of a threshold), is likely to be subjective. Subjective quantitative probabilities could be useful in conveying the judgments of individual scientists to risk managers and to the public, but the process of assessing subjective probabilities is difficult and essentially untried in a regulatory context. Substantial disagreement and misunderstanding about the reliability of quantitative probabilities could occur, especially if their basis is not set forth clearly and in detail.
In the face of important model uncertainties, it may still be undesirable to reduce a risk characterization to a single number, or even to a range of numbers intended to portray uncertainty. Instead, EPA should consider giving risk managers risk characterizations that are both qualitative and quantitative and both verbal and mathematical.
If EPA takes this route, quantitative assessments provided to risk managers should be based on the principles selected by EPA. EPA might choose to require that a risk assessment be accompanied by a statement describing alternative assumptions presented to the agency that, although they do not meet the principles selected by EPA for use in the risk characterization, satisfy some lesser test (e.g., plausibility). For example, EPA generally assumes that no threshold exists for carcinogenicity and calculates cancer potency using the linearized multistage model as the default. Commenters to the agency on a specific substance might attempt to show that there is a threshold for that substance on the basis of what is known about its mechanism of action. If the threshold can be demonstrated in a manner that is satisfactory under the agency's risk-assessment principles, the risk characterization would be based on the threshold assumption. If such a demonstration cannot be made, then the risk characterization would be based on the no-threshold assumption; but if the threshold assumption were found to be
plausible, the risk manager might be informed of its existence as a plausible assumption, its rationale, and its effect on the risk estimate. In this way, risk assessors would receive both qualitative and quantitative information relevant to characterizing the uncertainty associated with the risk estimate.
The Iterative Approach
One strategy component that deserves emphasis is the need for iteration. Neither the resources nor the necessary scientific data exist to perform a full-scale risk assessment on each of the 189 chemicals listed as hazardous air pollutants by Section 112 of the Clean Air Act. Nor, in many cases, is such an assessment needed. Some of the chemicals are unlikely to pose more than a de minimis (trivial) risk once the maximum available control technology is applied to their sources as required by Section 112. Moreover, most sources of Section 112 pollutants emit more than one such pollutant, and control technology for Section 112 pollutants is rarely pollutant-specific. Therefore, there might not be much incentive for industry to petition EPA to remove substances from Section 112's list (or much need for EPA to devote its resources to carrying out risk assessments in response to such petitions).
An iterative approach to risk assessment would start with relatively inexpensive screening techniques and move to more resource-intensive levels of datagathering, model construction, and model application as the particular situation warranted. To guard against the possibility of underestimating risk, screening techniques must be constructed that err on the side of caution when there is uncertainty. (As discussed in Chapter 12, the committee has some doubts about whether EPA's current screening techniques are so constructed.) The results of such screening should be used to set priorities for gathering further data and applying successively more complex techniques. These techniques should then be used to the extent necessary to make a judgment. In Chapter 7, the kinds of data that should be obtained at each stage of such an iterative process are described. The result would be a process that yields the risk-management decisions required by the Clean Air Act and that provides incentives for further research without the need for costly case-by-case evaluations of individual chemicals. Use of an iterative approach can improve the scientific basis of risk-assessment decisions and account for risk-management concerns, such as the level of protection and resource constraints.
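The tiered logic just described can be sketched in a few lines. Everything here is hypothetical (the function names, the de minimis cutoff, and the unit-risk and emission numbers are illustrative, not EPA values), but it shows how a deliberately conservative screen partitions a pollutant list into chemicals that can be set aside and chemicals that warrant refined, more resource-intensive assessment.

```python
def tier1_screen(emissions, conservative_unit_risk):
    """Deliberately conservative upper-bound risk estimate.

    Errs on the high side so that a pollutant dismissed at this
    tier can be dismissed with confidence.
    """
    return emissions * conservative_unit_risk

def triage(pollutants, de_minimis=1e-6):
    """Partition pollutants into those needing refined assessment
    and those whose worst-case screening risk is already trivial."""
    refine, dismiss = [], []
    for name, emissions, unit_risk in pollutants:
        risk = tier1_screen(emissions, unit_risk)
        (dismiss if risk <= de_minimis else refine).append(name)
    return refine, dismiss

# Illustrative inputs only (not real potency or emission values).
inventory = [
    ("chemical_a", 2.0, 1e-5),   # screens above the cutoff: refine further
    ("chemical_b", 0.01, 5e-6),  # worst case already de minimis: set aside
]
refine, dismiss = triage(inventory)
print(refine, dismiss)
```

Only the chemicals in `refine` would proceed to the data gathering and model refinement of later tiers, which is what keeps case-by-case evaluation affordable.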
EPA's risk-assessment practices rest heavily on "inference guidelines" or, as they are often called, "default options." These options are generic approaches, based on general scientific knowledge and policy judgment, that are applied to various elements of the risk-assessment process when the correct scientific model is unknown or uncertain. The 1983 NRC report Risk Assessment in the Federal Government: Managing the Process defined "default option" as "the option chosen on the basis of risk assessment policy that appears to be the best choice in the absence of data to the contrary" (NRC, 1983a, p. 63). Default options are not rules that bind the agency; rather, as the alternative term inference guidelines implies, the agency may depart from them in evaluating the risks posed by a specific substance when it believes this to be appropriate. In this chapter, we discuss EPA's practice of adopting guidelines containing default options and departing from them in specific cases.
Adoption Of Guidelines
As our discussion of risk assessment has made clear, current knowledge of carcinogenesis, although rapidly advancing, still contains many important gaps. For instance, for most carcinogens, we do not know the complete relationship between the dose of a carcinogen and the risk it poses. Thus, when there is evidence of a carcinogenic effect at a high concentration (for instance, in the workplace or in animal testing), we do not know for certain how strong the effect (if any) would be at the lower concentrations typically found in the environment. Similarly, we do not know how much importance to attach to experiments that
show that exposure to a substance causes only benign tumors in animals or how to adjust for metabolic differences between animals and humans in calculating the carcinogenic potency of a chemical.
Other uncertainties are not peculiar to carcinogenesis, but are characteristic of many aspects of risk assessment. For example, calculating the doses received by individuals might require knowledge of the relationship between emission of a substance by a source and the ambient concentration of that substance at a particular place and time. It is impossible to install a monitor at every place where people might be exposed; moreover, monitoring results are subject to error. Thus, regulators attempt to use air-quality models to predict ambient concentrations. But because our knowledge of atmospheric processes is imperfect and the data needed to use the models cannot always be obtained, the predictions from atmospheric-transport models can differ substantially from measured ambient concentrations (NRC, 1991a).
In time, we hope, our knowledge and data will improve. Indeed, we believe that EPA and other government agencies must engage in scientific research and be receptive to the results of sound scientific research conducted by others. In the meantime, decisions about regulating hazardous air pollutants must be made under conditions of uncertainty. It is vital that the risk-assessment process handle uncertainties in a predictable way that is scientifically defensible, consistent with the agency's statutory mission, and responsive to the needs of decisionmakers.
These uncertainties, as we explain further in Chapter 9, are of two major types. One type, which we call parameter uncertainty, is caused by our inability to determine accurately the values of key inputs to scientific models, such as emissions, ambient concentrations, and rates of metabolic action. The second type, model uncertainty, is caused by gaps in our knowledge of mechanisms of exposure and toxicity: gaps that make it impossible to know for certain which of several competing models is correct. For instance, as mentioned above, we often do not know whether a threshold may exist below which a dose of a carcinogen will not result in an adverse effect. As we discuss in Chapter 9, model uncertainties, unlike parameter uncertainties, are often difficult to quantify.
The Red Book recommended that model uncertainties be handled through the development of uniform inference guidelines for the use of federal regulatory agencies in the risk-assessment process. Such guidelines would structure the interpretation of scientific and technical information relevant to the assessment of health risks. The guidelines, the report urged, should not be rigid, but instead should allow flexibility to consider unique scientific evidence in particular instances.
The Red Book described the advantages of such guidelines as follows (pp. 7-8):
The use of uniform guidelines would promote clarity, completeness, and consistency in risk assessment; would clarify the relative roles of scientific and other factors in risk assessment policy; would help to ensure that assessments reflect the latest scientific understanding; and would enable regulated parties to anticipate government decisions. In addition, adherence to inference guidelines will aid in maintaining the distinction between risk assessment and risk management.
This committee believes that those considerations continue to be valid. In particular, we stress the importance of inference guidelines as a way of keeping risk assessment and risk management from unduly influencing each other. Without uniform guidelines, risk assessments might be manipulated on an ad hoc basis according to whether regulating a substance is thought to be politically feasible. In addition, we believe that inference guidelines can provide a predictable and consistent structure for risk assessment and that a statement of guidelines forces an agency to articulate publicly its approach to model uncertainty.
Like the committee that produced the 1983 NRC report, we recognize that there is an inevitable interplay between risk assessment and risk management. As the 1983 report states (pp. 76, 81), "risk assessment must always include policy, as well as science," and "guidelines must include both scientific knowledge and policy judgments." Any choice of defaults, or the decision not to have defaults at all, therefore amounts to a policy decision. Indeed, without a policy decision, the report stated, risk-assessment guidelines could do no more than "state the scientifically plausible inference options for each risk assessment component without attempting to select or even suggest a preferred inference option" (NRC, 1983a, p. 77). Such guidelines would be virtually useless. The report urged that risk-assessment guidelines include risk-assessment policy and explicitly distinguish between scientific knowledge and risk-assessment policy to keep policy decisions from being disguised as scientific conclusions (NRC, 1983a, p. 7). That report urged that for consistency, policy judgments related to risk assessment ought to be based on a common principle or principles.
We believe that EPA acted reasonably in electing to issue Guidelines for Carcinogen Risk Assessment (EPA, 1986a). Those guidelines set out policy judgments about the accommodation of model uncertainties that are used to assess risk in the absence of a clear demonstration that a particular theory or model should be used.
For instance, the default options indicate that, in assessing the magnitude of risk to humans associated with low doses of a substance, "in the absence of adequate information to the contrary, the linearized multistage procedure will be employed" (EPA, 1986a, p. 33997). The linearized multistage procedure implies low-dose linearity. At low doses, if the dose is reduced by, say, a factor of 1,000, the risk is also reduced by a factor of 1,000; dose is linearly related to risk. Departure from this default option is allowed, under EPA's current guide-
lines, if there is "adequate evidence" that the mechanism through which the substance is carcinogenic is more consistent with a different modelfor instance, that there is a threshold below which exposure is not associated with a risk. Thus, the default option in guiding a decision-maker, in the absence of evidence to the contrary, assigns the burden of persuasion to those who wish to show that the linearized multistage procedure should not be used. Similar default options cover such important issues as the calculation of effective dose, the treatment of benign tumors, and the procedure for scaling animal-test results to estimates of potency in humans.
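The arithmetic of this default is easy to make concrete. The sketch below contrasts the linearized multistage default, under which risk scales proportionally with dose at low doses, with a threshold alternative; the slope and threshold values are hypothetical, chosen only for illustration.

```python
def linear_risk(dose, slope):
    """Linearized multistage default at low dose: risk is about slope * dose."""
    return slope * dose

def threshold_risk(dose, threshold, slope):
    """Alternative threshold model: no added risk below the threshold."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

slope, threshold = 0.05, 1.0  # hypothetical potency and threshold values

# Low-dose linearity: cutting the dose 1,000-fold cuts the risk about 1,000-fold.
print(linear_risk(1.0, slope) / linear_risk(0.001, slope))

# Below a demonstrated threshold, the alternative model predicts zero added risk.
print(threshold_risk(0.5, threshold, slope))
```

The two models can diverge enormously at environmental doses, which is why the burden-of-persuasion question (who must show that a threshold exists) matters so much in practice.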
Some default options are concerned with issues of extrapolation: from laboratory animals to humans, from large to small exposures (or doses), from intermittent to chronic lifetime exposures, and from route to route (as from ingestion to inhalation). That is because few chemicals have been shown in epidemiologic studies to cause measurable numbers of human cancers directly, and the epidemiologic data on only a few of these are sufficient to support quantitative estimates of human cancer risk. In the absence of adequate human data, it is necessary to use laboratory animals as surrogates for humans.
One advantage of guidelines, as already noted, is that they can articulate both the agency's choice of individual default options and its rationale for choosing all of the options. EPA's guidelines set out individual options but do not do so with ideal clarity. Nor has the agency explicitly articulated the scientific and policy bases for its options. Hence, there might be disagreement about precisely what the agency's default options are and the rationales for these options. We attempt here to identify the most important of the options (numbered points in the 1986 guidelines are cited):
EPA has never articulated the policy basis for those options. As we discuss in the previous introductory section (Part II), the agency should choose and explain the principles underlying its choices to avoid the dangers of ad hoc decision-making. The agency's choices are for the most part intended to be conservative: that is, they represent an implicit choice by the agency, in dealing with competing plausible assumptions, to use (as default options) the assumptions that lead to risk estimates that, although plausible, are believed to be more likely to overestimate than to underestimate the risk to human health and the environment. EPA's risk estimates thus are intended to reflect the upper region of the range of risks suggested by current scientific knowledge.
EPA appears to use conservative assumptions to implement Congress's authorization in several statutes, including the Clean Air Act, for the agency to undertake preventive action in the face of scientific uncertainty (see, e.g., Ethyl v. EPA, 541 F.2d 1 (D.C. Cir.) (en banc), certiorari denied 426 U.S. 941 (1976), ratified by Section 401 of the Clean Air Act Amendments of 1977) and to set standards that include a precautionary margin of safety against unknown effects and errors in calculating risks (see Environmental Defense Fund v. EPA, 598 F.2d 62, 70 (D.C. Cir. 1978) and Natural Resources Defense Council v. EPA, 824 F.2d 1146, 1165 (en banc) (D.C. Cir. 1987)).
EPA's choice of defaults has been controversial. We note, though, that some of the arguments about EPA's practices are directed less at conservatism than at the means of implementation that the agency has adopted. We believe that the iterative approach recommended in the previous chapter combined with quantitative uncertainty analysis will improve the agency's practices regardless of the degree of conservatism chosen by the agency. We also note that with an iterative approach, the agency must use relatively conservative models in performing screening estimates designed to indicate whether a pollutant is worthy of further analysis and comprehensive risk assessment. Such estimates are intended to obviate the detailed assessment of risks that can with a high degree of confidence be deemed acceptable or de minimis (trivial). By definition, therefore, screening analyses must be sufficiently conservative to make sure that a pollutant that could pose dangers to health or welfare will receive full scrutiny.
Over time, the choice of defaults should have decreasing impact on regulatory decision-making. As scientific knowledge increases, uncertainty diminishes. Better data and increased understanding of biological mechanisms should enable risk assessments that are less dependent on default assumptions and more accurate as predictions of human risk.
In evaluating EPA's risk-assessment methods, we are aware that the agency's guidelines, to use the terminology of the earlier NRC report, are in part statements of science policy, rather than purely statements of scientific fact. The guideline cited above dealing with extrapolation of high doses to low doses is illustrative. The guideline is not a claim that it is known that the relationship between dose and response is linear; that the true relationship between dose and response is uncertain and could be nonlinear is readily acknowledged. Rather, the guideline is based (1) on the scientific conclusion that the linear model has substantial support in current data and biologic theory and that no alternative model has sufficient support to warrant departure from the linear model for most chemicals identified as carcinogens; (2) on the further scientific conclusion that the linear model is more conservative than most alternative plausible models; and (3) on the policy judgment that a conservative model should be chosen when there is model uncertainty.
Departures From Default Options
Agency policies should encourage further scientific research. Risk assessors and managers must be receptive to new scientific information about the character and magnitude of the toxic effects of a chemical substance. Putting this receptivity into practice, though, has proved difficult. The 1983 NRC report criticized how agencies had implemented their guidelines. The report noted that "the application of inference options to specific risk assessments has been marked by a general lack of explicitness," which made it "difficult to know whether assessors adhere to guidelines" (NRC, 1983a, p. 79). The NRC report recognized the need to prevent ad hoc and undocumented departures from guidelines in specific risk assessments. But the NRC report made it clear that well-designed guidelines "should permit acceptance of new evidence that differs from what was previously perceived as the general case, when scientifically justifiable." NRC urged a recognition of the need for a tradeoff between flexibility on the one hand and predictability and consistency on the other (NRC, 1983a, p. 81).
The NRC advocated that agencies seek a middle path between inflexibility and ad hoc judgments, but steering this course is difficult. Consistency and predictability are served if an agency sets out criteria for departing from its guidelines. If such criteria are themselves too rigidly applied, the guidelines could ossify into inflexible rules; but without such criteria, the guidelines could be subverted at will with the potential for political manipulation of risk assessment.
NRC's approach requires that agencies regard their inference options not as binding rules, but rather as guidelines that are to be followed unless a sufficient showing is made. In the decade since the NRC report, EPA has never articulated clearly its criteria for a departure. We believe that a structured approach would give better guidance to the scientific community and to the public and would ensure both that the default options are set aside only when there is a valid scientific reason for doing so and that decisions to set aside defaults are scientifically credible and receive public acceptance.
EPA's practice appears to be to allow departure in a specific case when it ascertains that there is a consensus among knowledgeable scientists that the available scientific evidence justifies departure from the default option. The agency apparently considers both the quality of the data submitted and the robustness of the theory that is used to justify the departure.
EPA needs to be more precise in describing the kind and strength of evidence that it will require to depart from a default option. Because the decision as to the evidentiary burden to be required is ultimately one of policy, and because we could not reach agreement on proposed language to implement such a standard (see Appendixes N-1 and N-2), we do not urge any particular standard; moreover, we are conscious of the difficulties of capturing the nuances of judgment in any verbal formula that will not be open to misinterpretation.
We believe that the agency must continue to rely on its Science Advisory Board (SAB) and other expert bodies to determine when departing from a default option is warranted, according to criteria that EPA should develop. EPA has increasingly used peer review and workshops as a way to ensure that it carefully considers the propriety of departing from a default. These and other devices should continue to ensure broad peer and scientific participation to guarantee, as much as possible, that the agency's risk-assessment decisions are made with access to the best science available.
We note that here, too, EPA has a difficult path to tread. EPA has been criticized for delay in deciding whether to depart from default options. Increased procedural formality raises the possibility of further delays, especially in a period of budgetary stringency such as EPA can expect to face for some time. It is likely that EPA will be cutting back on hiring personnel at the salary ranks necessary to attract scientists with the needed experience and training to judge whether departure from a default option is justifiable. Congress ought to be aware of the need for greater agency resources to carry out the mandates of the Clean Air Act and similar legislation.
Even if a default option is not set aside, we believe that decision-makers ought to be informed in a narrative way of any specific information suggesting that, in specific cases, alternatives to the default options might have equal or greater scientific support. The characterization of risk should then include a discussion of the effect of those alternative options on risk estimates.
Current EPA Practice In Departing From Default Options
As discussed above, EPA needs simultaneously to be receptive to evidence indicating the need to depart from a default option and to be careful that it departs from a default in a specific case only when a departure is justifiable. In addition, the agency needs to follow a process that allows peer participation and review.
We discuss below some of the cases in which EPA has addressed the issue of whether to depart from default options. In each of these cases, EPA decisions to depart from default options lessened its estimate of the risk; however, it is important to note that new scientific data could also increase the estimate of risk above that reached by using the default options.
Example 1: Use of Animal-Cancer Bioassay Data
The example that follows illustrates a departure from two default options: (1) positive animal-bioassay results for cancer induction are sufficient proof of cancer hazard in humans; and (2) humans are at least as sensitive as the most sensitive responding animal species. It involves induction of kidney cancer in male laboratory rats by a number of chemicals, most importantly 1,4-dichlorobenzene, hexachloroethane, isophorone, tetrachloroethylene, dimethyl methylphosphonate, d-limonene, pentachloroethane, and unleaded gasoline (EPA, 1991d). The first four have been classified as hazardous air pollutants by the Clean Air Act Amendments of 1990.
Male rats exposed to those chemicals develop dose-related kidney cancer; the highest incidence is usually 25% or less. The tumors do not occur in other organs or other species or in female rats. Because of the economic importance of several of the compounds and unleaded gasoline, extensive studies were conducted to understand the mechanisms involved in the development of the tumors. The studies suggested that a special mechanism was responsible for the tumors in male rats. When the chemicals in question are inhaled by male rats, the chemicals, or products of their metabolism, reach the bloodstream and form complexes with a specific protein, alpha-2-globulin, that is produced in the male rat liver and removed from the blood by the kidneys. As the complex is cleared from the blood by the kidneys, it accumulates there in the form of hyaline droplets, which lead to the development of kidney disease characterized by cell death, cast formation, mineralization, and hyperplasia. This accumulation, as well as statistically significant increases in tumors that result from exposure to the chemicals, occurs only in male rats.
In contrast, female rats, which do not have the same concentrations of alpha-2-globulin protein, do not develop statistically significant increases in tumors as a result of exposure. Similarly, the protein is not present in detectable quantities
in humans, so no risk of kidney-cancer development by this mechanism would be expected in humans exposed to the chemicals in question. It was therefore suggested that, inasmuch as a special mechanism not found in humans seemed to be responsible for the tumors, EPA ought to depart in this case from its default option that a substance that is carcinogenic in animals is also a human carcinogen. In response, EPA (1991d) evaluated the evidence of production of kidney tumors in male rats by chemicals inducing alpha-2-globulin accumulation (CIGAs), such as those in question. EPA's review suggested that kidney cancer in male rats from exposure to CIGAs is due only to the kidney disease that CIGAs cause through accumulation of alpha-2-globulin. For instance, EPA noted, the CIGAs are not known to react with DNA and are generally negative in short-term tests for genotoxicity. In contrast, classical kidney carcinogens (or their active metabolites) are usually electrophilic species that bind covalently to macromolecules and form DNA adducts. With the classical kidney carcinogens, which presumably are carcinogenic in both laboratory animals and humans, the kidney carcinogenesis is presumed to result from the interaction of the compounds or their metabolites with DNA. Classical kidney carcinogens, such as dimethylnitrosamine, induce renal tubule cancer in laboratory animals at a high incidence in both sexes after short periods of exposure, with a clear increase in kidney tumor incidence with increased dose. Thus, the classical kidney carcinogens and CIGAs appear to act via different mechanisms.
After reviewing the data, EPA (1991d) provided specific decision criteria for categorizing a chemical as a CIGA. A substance may be so classified only if it meets all the decision criteria, and classification of a chemical as a CIGA does not keep it from being considered as a carcinogen because of other modes of action. In that way, the agency precisely tailored its proposed departure from default options. EPA concluded that renal tubule tumors in male rats attributable solely to chemically induced alpha-2-globulin accumulation should not be used for human-cancer hazard identification or for dose-response extrapolations. Furthermore, EPA noted that even in the absence of renal tubule tumors in the male rat, if the lesions of alpha-2-globulin syndrome are present, the associated nephropathy in male rats should not contribute to determinations of noncarcinogenic hazard or risk.
EPA reviewed and synthesized the available scientific information in a document that was then presented to peers in a public meeting, reviewed by the SAB's Environmental Health Committee and later endorsed by the SAB Executive Committee, and transmitted to the administrator (EPA, 1991d). Transmission to the administrator was accompanied by the SAB's endorsement that the document outlined a scientifically sound policy for departing from the default option for this specific class of compounds. This policy has been generally supported by the scientific community. However, it is noteworthy that some researchers (see, e.g., Melnick, 1992) believe that another mechanism to explain all of the observed data is equally or more plausible than the one
EPA endorsed. Alpha-2-globulin may be a carrier protein that transports certain chemicals to the kidney, where toxic metabolites can be released; this mechanism makes alpha-2-globulin accumulation an indicator, rather than the cause, of renal toxicity. If so, humans may have other carrier proteins that could transport toxins to the kidney and cause toxicity or carcinogenicity in the absence of protein droplet formation, and the assumption that the rat studies are irrelevant to humans might therefore be erroneous.
Example 2: Linkages Between Exposure, Dose, and Response
In the previous example, a departure from default options occurred at the hazard-identification stage. As discussed in examples 2 and 3, such departures can also be used to refine the unit risk estimate of a carcinogen.
Calculating the unit risk through quantitative risk assessment requires an understanding of the relationship between exposure to a substance and response. One part of this relationship involves the link between exposure (that is, intake of a substance) and dose (that is, the amount of the substance, or of harmful metabolites, that is taken up by bodily organs). However, that understanding is incomplete. EPA's default options assume that all species are equally sensitive to a given target-tissue dose of the toxicant or its metabolites. The ratio of body surface areas in the test species and humans is used as the key to relating the dose received by the test species to the dose that would cause similar effects in humans (see pp. 6-7, III.A.3). As the following examples show, however, evidence can sometimes support departing from this default option.
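The surface-area default can be sketched numerically. Because body surface area scales roughly as body weight to the 2/3 power, a dose expressed in mg/kg is converted across species by the cube root of the body-weight ratio. The sketch below is illustrative only; the body weights are conventional round numbers, not EPA's prescribed regulatory values.

```python
# Sketch of the body-surface-area interspecies scaling default.
# Surface area scales roughly as (body weight)^(2/3), so a dose in mg/kg
# is adjusted by the factor (BW_animal / BW_human)^(1/3).
# Body weights here are illustrative assumptions, not regulatory values.

def human_equivalent_dose(animal_dose_mg_per_kg, bw_animal_kg, bw_human_kg):
    """Convert an animal mg/kg dose to a human-equivalent mg/kg dose."""
    return animal_dose_mg_per_kg * (bw_animal_kg / bw_human_kg) ** (1.0 / 3.0)

# A 10 mg/kg dose in a 0.35-kg rat maps to roughly 1.7 mg/kg in a 70-kg human,
# i.e., humans are presumed affected at a ~6-fold lower mg/kg dose.
hed = human_equivalent_dose(10.0, 0.35, 70.0)
```

The cube-root factor is what makes this default conservative relative to straight mg/kg equivalence: the larger the species gap, the lower the presumed human-equivalent dose.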
Epidemiological studies on whether exposure to methylene chloride causes cancer in humans have produced equivocal results. Thus, assessment of methylene chloride's carcinogenic risk depends on use of laboratory animal data and especially on several long-term bioassays. Syrian hamsters did not show a tumor response at any site at exposures up to 3,500 ppm for 6 hr/day 5 days/week, but mice and rats exposed at up to 4,000 ppm for 6 hr/day 5 days/week had treatment-related tumorigenic effects. EPA, after evaluating the data, classified methylene chloride as a probable human carcinogen (B2).
In accord with the default options of EPA's guidelines, the carcinogenic potency of methylene chloride was estimated by scaling the laboratory animal data to humans with a body surface-area conversion factor. The resulting cancer risk estimate was 4.1 × 10^-6 for exposure at 1 µg/m^3 (Table 6-1). After further consideration, EPA has decreased this estimate by an order of magnitude (EPA, 1991d). The reduction is based on research on the pathways through which methylene chloride is metabolized. As with some other carcinogens, the risk of cancer arises not from methylene chloride itself, but rather from its metabolites.
A correct calculation of the risk posed by methylene chloride therefore rests on understanding the human body's processes for metabolizing this chemical.
Research with animal species used in the bioassays and human tissue has shed light on the metabolism of methylene chloride. Much of the research was conducted with the goal of providing input for physiologically based pharmacokinetic (PBPK) models (Andersen et al., 1987, 1991; Reitz et al., 1989). The data were modeled in various ways, including consideration of two metabolic pathways. One involves oxidation by mixed-function oxidase (MFO) enzymes, and the other involves a glutathione-S-transferase (GST). Both pathways involve the formation of potentially reactive intermediates: formyl chloride in the MFO pathway and chloromethyl glutathione in the GST-mediated pathway. The MFO pathway was modeled as having saturable, or Michaelis-Menten, kinetics, and the GST pathway as a first-order reaction, i.e., proportional to concentration. The analyses suggested that a reactive metabolite formed in the GST pathway
was responsible for tumor formation. This pathway, according to the analyses, contributes importantly to the disposition of methylene chloride only at exposures that saturate the primary MFO pathway. The analyses further indicated that the GST pathway is less active in human tissues than in mice. This suggests that the default option of scaling for surface area yields a human risk estimate that is too high to be plausible. EPA incorporated the data on pharmacokinetics and metabolism into its most recent risk assessment for methylene chloride, although it retained a surface-area correction factor, now identifying it as a correction for interspecies differences in sensitivity. The new risk estimate is 4.7 × 10^-7 for continuous exposure at 1 µg/m^3 (Table 6-1).
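The saturable-versus-first-order behavior described above can be shown with a minimal sketch. The rate constants here are hypothetical, not the parameter values of the published PBPK models; the point is only the qualitative shift between the two pathways as exposure rises.

```python
# Minimal sketch of the two methylene chloride metabolic pathways as modeled:
# a saturable (Michaelis-Menten) MFO pathway and a first-order GST pathway.
# vmax, km, and k are hypothetical illustrative values.

def mfo_rate(conc, vmax=1.0, km=0.5):
    """Mixed-function oxidase pathway: saturates at high concentration."""
    return vmax * conc / (km + conc)

def gst_rate(conc, k=0.05):
    """Glutathione-S-transferase pathway: rate proportional to concentration."""
    return k * conc

def gst_fraction(conc):
    """Fraction of total metabolism carried by the GST pathway."""
    total = mfo_rate(conc) + gst_rate(conc)
    return gst_rate(conc) / total

# The GST share is small at low exposure but dominates once MFO saturates,
# matching the analyses' conclusion that the putatively carcinogenic GST
# metabolite matters mainly at high exposures.
low, high = gst_fraction(0.1), gst_fraction(100.0)
```

This is why low-dose risk inferred from high-dose bioassays can be overstated when the tumorigenic metabolite comes from the pathway that is active mainly at saturating exposures.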
The process by which EPA arrived at the current risk estimate for methylene chloride with PBPK modeling involved use of peer-review groups and SAB review to achieve a scientifically acceptable consensus position on the validity of the alternative model. After EPA's re-evaluation, however, articles in the peer-reviewed literature began to focus attention on parameter uncertainties in PBPK modeling, which neither EPA nor the original researchers in the methylene chloride case had considered. In the specific case of methylene chloride, at least one of the analyses (Portier and Kaplan, 1989) suggested that according to the new PBPK information EPA should have raised, rather than lowered, its original unit risk estimate if it wanted to continue to take a conservative stance. The more general point, which we discuss in Chapter 9, is that EPA must simultaneously consider both the evidence for departing from default models and the need to generate or modify the parameters that drive both the alternative and default models.
The toxicity and carcinogenicity of formaldehyde, a widely used commodity chemical, have been intensely studied and recently reviewed (Heck et al., 1990; EPA, 1991e). Concern for the potential human carcinogenicity of formaldehyde was heightened by the observation that exposure of rats at high concentrations (14.3 ppm) resulted in a very large increase in the incidence of nasal cancer. That observation gave impetus to the conduct and interpretation of epidemiologic studies of formaldehyde-exposed human populations. In the aggregate, the 28 studies that have been reported provide limited evidence of human carcinogenicity (EPA, 1991e). The "limited" classification is used primarily because the incidence of cancers of the upper respiratory tract has been confounded by exposure to other agents known to increase the rate of cancer, such as cigarette smoke and wood dusts.
The effects of chronic inhalation of formaldehyde have been investigated in rats, mice, hamsters, and monkeys. The principal evidence of carcinogenicity comes from studies in both sexes and two strains of rats and the males of one strain of mice, all showing squamous cell carcinomas of the nasal cavity.
The results of the rat bioassay have been used to derive quantitative risk estimates for cancer induction in humans (Kerns et al., 1983). Table 6-2 shows these animal data and the estimates of human cancer risk based on different exposure-dose models. (The table uses the inhalation cancer unit risk, the lifetime risk of developing cancer from continuous exposure at 1 ppm.) The 1987 EPA risk estimate (EPA, 1987c) measured exposure as the airborne concentration of formaldehyde. The rat bioassay shows a steep nonlinear exposure-response relationship for nasal-tumor induction. For example, two tumors were observed at 5.6 ppm, whereas 37 would have been expected from linear extrapolation from 14.3 ppm. Similarly, no tumors were observed at 2 ppm, whereas linear extrapolation from 14.3 ppm would have predicted 15.
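The linear-extrapolation arithmetic behind such comparisons is simple proportionality. The sketch below uses a hypothetical high-dose tumor count (the actual group sizes and counts are in Table 6-2, not reproduced here); it is exactly this proportional-in-dose assumption that the steep rat data contradict.

```python
# Sketch of downward linear extrapolation of tumor counts from a high-dose
# bioassay group, assuming tumor incidence proportional to exposure
# concentration and equal group sizes. The count used here is hypothetical.

def linear_expected_tumors(tumors_high, conc_high, conc_low):
    """Expected tumors at conc_low if response were linear through conc_high."""
    return tumors_high * conc_low / conc_high

# If a hypothetical 95 tumors occurred at 14.3 ppm, linearity would predict
# about 37 at 5.6 ppm; observing only 2 signals a steeply nonlinear response.
expected = linear_expected_tumors(95, 14.3, 5.6)
```

The large gap between the linear prediction and the observed count is what motivated the search for a mechanistic dose measure, taken up in the cross-link work described next.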
The key issue became whether the same exposure-response relationship exists in people as in rats. To determine the answer, researchers directed substantial effort toward investigating the mechanisms by which formaldehyde exerted a carcinogenic effect. One avenue of investigation was directed toward characterizing DNA-protein cross-links as a measure of internal dose of formaldehyde (Heck et al., 1990). That work, initially conducted in rats, demonstrated a steep nonlinear relationship between formaldehyde concentration and formation of DNA-protein cross-links in nasal tissue, where most inhaled formaldehyde is deposited in rats. This suggested a correlation between such cross-links and tumors.
When the studies were extended to monkeys, a similar nonlinear relationship was observed between exposure concentration and DNA-protein cross-links in nasal tissue, but the concentration of DNA-protein cross-links per unit of exposure concentration was substantially lower than in the rat. Because the breathing patterns of humans more closely resemble those of monkeys than those of rats, the results of these studies suggested that using rats as a surrogate for humans might overestimate doses to humans, and hence the risk presented to humans by formaldehyde. EPA's most recent risk assessment (EPA, 1991e) used DNA-protein cross-links as the exposure indicator and estimated the human cancer risk (Table 6-2). EPA noted that the cross-links were being used only as a measure of delivered dose and that present knowledge was insufficient to ascribe a mechanistic role to the DNA-protein cross-links in the carcinogenic process.
The EPA risk estimates for formaldehyde have been the subject of extensive peer review and review by the SAB. The 1992 update was reviewed by the SAB Environmental Health Committee and Executive Committee. The SAB recommended that the agency attempt to develop an additional risk estimate using the epidemiological data and prepare a revised document reporting all the risk estimates developed by the alternative approaches with their associated uncertainties. The two examples just discussed used mechanistic data and modeling to improve the characterization of the exposure-dose link. It is possible that as knowledge increases, models can be developed that link dose to response; the possibility is further discussed in Chapter 7.
The same is true of the linearized multistage model. As noted earlier, this model assumes that risk is linear in dose. Rats exposed to formaldehyde, however, show a steep nonlinear exposure-response relationship, which raises the possibility that the linearized multistage model might be inappropriate for at least some chemicals. It is possible that advances in knowledge of the molecular and cellular mechanisms of carcinogenesis will show a need to use other models either case by case or generically. More discussion of this matter can be found in Chapter 7.
The strategy advocated for formaldehyde would build on multistage models of the carcinogenic process that describe the accumulation of procarcinogenic mutations in target cells and the consequent malignant conversion of these cells (Figure 6-1). The Moolgavkar-Venzon-Knudson model substantially oversimplifies the carcinogenic process but provides a structural framework for integrating and examining data on the role of DNA-protein cross-links, cell replication, and other biologic phenomena in formaldehyde-induced carcinogenesis (Moolgavkar and Venzon, 1979; Moolgavkar and Knudson, 1981; Moolgavkar et al., 1988; NRC, 1993b). Key features of this model are definition of the relationship of target-tissue dose to exposure and the use of that dose as a determinant of three outcomes: reactivity with DNA, mitogenic alterations, and cytolethality. These, in turn, cause further biologic effects: DNA reactivity leads to mutations, the mitogenic stimuli increase the rate of cell division, and cells die (cell death stimulates compensatory cell proliferation). Models like that shown provide a structured approach for integrating data on a toxicant, such as formaldehyde. It is anticipated that modeling will provide insight into the relative importance, at various exposure concentrations, of the two mechanisms that appear to have a dominant role in formaldehyde carcinogenesis: mutation and cell proliferation. Improved insight into their role could provide a mechanistic basis for selecting between the linearized multistage mathematical model now used for extrapolation from high to low doses and alternative models that might have more biologic plausibility.
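Under the simplifying assumption of constant mutation and cell-kinetic rates, the two-stage (MVK) hazard has a well-known closed-form approximation, valid when the second mutation rate is small. The sketch below is illustrative only; all parameter values are arbitrary, chosen to show how a mitogenic stimulus raises the hazard without any change in mutation rates.

```python
import math

# Approximate two-stage (Moolgavkar-Venzon-Knudson) hazard with constant rates:
#   h(t) ~ mu1 * mu2 * N * (exp((alpha - beta) * t) - 1) / (alpha - beta)
# mu1: normal -> initiated mutation rate; mu2: initiated -> malignant rate;
# n_cells: number of normal target cells; alpha, beta: division and death
# rates of initiated cells. All parameter values are illustrative.

def mvk_hazard(t, mu1, mu2, n_cells, alpha, beta):
    """Approximate two-stage hazard at age t, assuming constant rates."""
    growth = alpha - beta  # net clonal expansion rate of initiated cells
    if abs(growth) < 1e-12:
        return mu1 * mu2 * n_cells * t
    return mu1 * mu2 * n_cells * (math.exp(growth * t) - 1.0) / growth

# A mitogenic stimulus (larger alpha) raises the hazard even with unchanged
# mutation rates, capturing the cell-proliferation pathway discussed above.
base = mvk_hazard(50.0, 1e-7, 1e-7, 1e6, 0.10, 0.09)
stimulated = mvk_hazard(50.0, 1e-7, 1e-7, 1e6, 0.12, 0.09)
```

The separation of mutational terms (mu1, mu2) from kinetic terms (alpha, beta) is what lets such a model weigh the relative contributions of mutation and proliferation at different exposure concentrations.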
Trichloroethylene (TCE) is a chlorinated solvent that has been widely used in the industrial degreasing of metals. TCE is a concern to EPA as an air pollutant, a water pollutant, and a substance frequently present in ground water at Superfund sites. EPA carried out a risk assessment for TCE documented in a health assessment document (HAD) (EPA, 1985d) and a draft addendum incorporating additional inhalation-bioassay data (EPA, 1987e). Both documents were reviewed by the SAB (EPA, 1984a; EPA, 1988j,k). The second document has not been issued in final form, and no further revision of EPA's risk assessment on TCE has been made since 1987.
The carcinogenic potency of TCE is based on the liver-tumor response in B6C3F1 mice, a strain particularly prone to liver tumors. The carcinogenicity of TCE might result from trichloroacetic acid (TCA), a metabolite of TCE that is itself known to cause liver tumors in mice. TCA is one of a number of chemicals that cause proliferation of peroxisomes, an intracellular organelle, in liver cells. Peroxisome proliferation has been proposed as a causal mechanism for the liver tumors, and proponents have asserted that such tumors should be treated in risk assessments differently from the way they would be evaluated under EPA's default assumptions. In particular, human liver cells might be much less sensitive than mouse liver cells to tumor formation from this mechanism, and the dose-response relationship might be nonlinear at low doses.
The SAB held a workshop in 1987 on peroxisome proliferation as part of its reviews on risk assessments for TCE and other chlorinated solvents. While endorsing a departure from the default on the alpha-2-globulin mechanism described in example 1 above, the SAB declined to endorse such a departure for peroxisome proliferation, noting that a causal relationship for this mechanism was ''plausible but unproven." The SAB strongly encouraged further research, describing this mechanism for mouse liver tumors as "most promising for immediate application to risk assessment" (EPA, 1988k). The SAB criticized EPA on the draft Addendum on TCE (EPA, 1987e) for not adequately presenting uncertainties and for not seriously evaluating recent studies on the role of peroxisome proliferation (EPA, 1988l).
In the TCE case, departure from the defaults was rejected after an SAB review that recognized the peroxisome proliferation mechanism as plausible. Controversy over the interpretation of liver tumors in B6C3F1 mice continues. Some scientists assert that EPA's use of the tumor-response data from this particularly sensitive strain has been inappropriate (Abelson, 1993; ILSI, 1992). In the TCE example, departure from the defaults might become appropriate, on the basis of improved understanding of mouse liver tumors and their implications for human cancer. Although the SAB declined to endorse such a departure in 1987, it strongly encouraged further research as appropriate for supporting improved risk assessment.
Cadmium compounds are naturally present at trace levels in most environmental media, including air, water, soil, and food. Substantial additional amounts might result from human activities, including mining, electroplating, and disposal of municipal wastes. EPA produced an HAD on cadmium (EPA, 1981b) and
later an updated mutagenicity and carcinogenicity assessment (EPA, 1985e). The latter went through SAB review (EPA, 1984b), which pointed out many weaknesses and research needs for improving the risk assessment. No revision of the risk assessment on cadmium has occurred since 1985.
EPA used epidemiological data for developing a single unit risk estimate for all cadmium compounds. Use of the estimate from the best available bioassay would have given a unit risk for cadmium compounds higher by a factor of 50. The SAB and EPA in its response to SAB comments (EPA, 1985f) agreed that the solubility and bioavailability of different cadmium compounds were important in determining the risk associated with different cadmium compounds and that such differences might explain the discrepancy between the epidemiological data and the bioassay data. No implementation of the principle that cadmium compounds should be evaluated on the basis of bioavailability has yet been devised, although its importance to risk assessment for some air pollutants that contain cadmium is clearly set forth in EPA's response to the SAB (EPA, 1985f).
EPA's existing risk assessment for cadmium might be judged adequate for screening purposes. But the SAB review and the EPA response to it suggest that the carcinogenic risk associated with a specific cadmium compound could be overestimated or underestimated, because bioavailability has not been included in the risk assessment. A refined version of the risk assessment that includes bioavailability might be appropriate, especially if residual risks for cadmium compounds appear to be important under the Clean Air Act Amendments of 1990.
Nickel compounds are found at detectable levels in air, water, food, and soil. Increased concentrations of airborne nickel result from mining and smelting and from combustion of fuel that contains nickel as a trace element. Nickel compounds present in smelters that use the pyrometallurgical refining process are clearly implicated as human carcinogens. EPA's HAD on nickel (EPA, 1986b) lists dust from such refineries and nickel subsulfide as category A (known human) carcinogens. A rare nickel compound, nickel carbonyl, is listed, on the basis of sufficient evidence in animals, as category B2. Other nickel compounds are not listed as carcinogens, although EPA states (EPA, 1986b, p. 2-11):
The carcinogenic potential of other nickel compounds remains an important area for further investigation. Some biochemical and in vitro toxicological studies seem to indicate the nickel ion as a potentially carcinogenic form of nickel and nickel compounds. If this is true, all nickel compounds might be potentially carcinogenic with potency differences related to their ability to enter and to make the carcinogenic form of nickel available to a susceptible cell. However, at the present time, neither the bioavailability nor the carcinogenesis mechanism of nickel compounds is well understood.
The SAB reviewed the nickel HAD and concurred with EPA's listing of only the three rare nickel species as category A and B2 carcinogens (EPA, 1986c).
The results of bioassays on three nickel species by the National Toxicology Program are due to be released soon, and these results should provide a basis for revision of risk assessments for nickel compounds.
The cadmium and nickel examples point out an important additional default option: Which compounds should be listed as carcinogens when it is suspected that a class of chemical compounds is carcinogenic? Neither the cadmium risk assessment, the nickel risk assessment, nor EPA's Guidelines for Carcinogen Risk Assessment (EPA, 1986a) provides specific guidance on this issue.
"Dioxins" is a commonly used name for a class of organochlorine compounds that can form as the result of the combustion or synthesis of hydrocarbons and chlorine-containing substances. One isomer, 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), is one of the most potent carcinogens ever tested in bioassays. EPA issued an HAD for dioxins (EPA, 1985g), which the SAB criticized for its treatment of the non-TCDD isomers that may contribute substantially to the overall toxicity of a mixture of dioxins (EPA, 1985h).
The potency calculation for TCDD has continued to be a subject of controversy. Research indicates that the toxic effects of TCDD may result from the binding of TCDD to the Ah (aromatic hydrocarbon) receptor. In 1988, EPA asked the SAB to review a proposal to revise its risk estimate for TCDD. The SAB agreed with EPA's criticism of the linearized multistage model and with its assessment of the promise of alternative models based on the receptor mechanism. But the SAB did not agree that there was adequate scientific support for a change in the risk estimate. The SAB carefully distinguished its recommendation from a change that EPA might wish to make as part of risk management (EPA, 1989f):
The Panel thus concluded that at the present time the important new scientific information about 2,3,7,8-TCDD does not compel a change in the current assessment of the carcinogenic risk of 2,3,7,8-TCDD to humans. EPA may for policy reasons set a different risk-specific dose number for the cancer risk of 2,3,7,8-TCDD, but the Panel finds no scientific basis for such a change at this time. The Panel does not exclude the possibility that the actual risks of dioxin-induced cancer may be less than or greater than those currently estimated using a linear extrapolation approach.
A recent conference affirmed the scientific consensus on the receptor mechanism for TCDD, but there was no consensus that this mechanism implied a basis for departure from low-dose linearity (Roberts, 1991). After the conference, and after the recommendations of the SAB (EPA, 1989f), EPA initiated a new study to reassess the risk for TCDD. That study is now in draft form and scheduled for SAB review in 1994.
The potencies of other dioxin isomers and isomers of a closely related chemical class, dibenzofurans, have been estimated by EPA with a toxic-equivalency-factor (TEF) method (EPA, 1986d). The TEF method was endorsed by the SAB as a reasonable interim approach in the absence of data on these other isomers (EPA, 1986e). The SAB urged additional research to collect such data. Municipal incinerator fly ash was used as an example of a mixture of isomers of regulatory importance that might be appropriate for long-term animal testing.
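The TEF arithmetic itself is simple: each congener's concentration is weighted by its potency relative to TCDD, and the products are summed into a single TCDD-equivalent (TEQ) concentration. The factors and congener names below are placeholders for illustration, not the values EPA adopted.

```python
# Sketch of the toxic-equivalency-factor (TEF) method: a mixture of dioxin
# and dibenzofuran congeners is expressed as one TCDD-equivalent (TEQ)
# concentration. The TEF values below are placeholders, not EPA's values.

ILLUSTRATIVE_TEFS = {
    "2,3,7,8-TCDD": 1.0,  # reference congener, TEF = 1 by definition
    "congener_B": 0.1,    # hypothetical less-potent congener
    "congener_C": 0.01,   # hypothetical still-less-potent congener
}

def teq(concentrations, tefs=ILLUSTRATIVE_TEFS):
    """TEQ = sum over congeners of (concentration x TEF)."""
    return sum(conc * tefs[name] for name, conc in concentrations.items())

# A mixture dominated by weak congeners can still carry a modest TEQ.
mixture = {"2,3,7,8-TCDD": 2.0, "congener_B": 10.0, "congener_C": 100.0}
teq_value = teq(mixture)  # 2.0 + 1.0 + 1.0 TCDD equivalents
```

The method's appeal as an interim approach is that the TEQ can be run through the existing TCDD potency estimate without separate bioassays for every congener; its weakness, as the SAB noted, is that the factors themselves rest on limited data.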
The EPA initiative for a review of TCDD is one of the few instances in which the agency has initiated revision of a carcinogen risk assessment on the basis of new scientific information. Dioxins and dibenzofurans are unique in that potency differences within this class of closely related chemical isomers are dealt with through a formal method that has undergone peer review by the SAB.
Example 3: Modeling Exposure-Response Relationship
If chemicals act like radiation in inducing cancer at low exposures (doses), i.e., if intake of even one molecule of a chemical has an associated probability of cancer induction that can be calculated, the appropriate model for relating exposure to response is a linearized multistage model.
Unit risk estimates are available for only 51 of the 189 hazardous air pollutants: 38 with inhalation unit risks, which are applicable to airborne materials, and 13 with oral unit risks. The latter probably have less applicability to estimating the health risks associated with airborne materials. All 38 inhalation unit risk values have been derived with a linearized multistage model; i.e., it is assumed that the chemicals act like radiation. That might be an appropriate assumption for chemicals known to affect DNA directly in a manner analogous to that of radiation. For other chemicals, e.g., such nongenotoxic chemicals as chloroform, the assumption of a mode of action similar to that of radiation might be erroneous, and it would be appropriate to consider the use of biologically based exposure-response models other than the linearized multistage model.
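The linearized multistage form can be written in a few lines: extra risk is P(d) = 1 - exp(-(q1·d + q2·d² + ...)), and at low dose the linear term dominates, so P(d) ≈ q1·d and q1 serves as the unit risk. The sketch below uses the methylene chloride unit risk quoted earlier in this chapter as an illustrative q1; any higher-order coefficients would be hypothetical.

```python
import math

# Sketch of the linearized multistage (LMS) model:
#   extra risk P(d) = 1 - exp(-(q1*d + q2*d^2 + ...)),  q_i >= 0.
# At low dose P(d) ~ q1*d, so q1 plays the role of the unit risk.

def lms_extra_risk(dose, q):
    """Extra lifetime risk under the LMS model, coefficients q = [q1, q2, ...]."""
    return 1.0 - math.exp(-sum(qi * dose ** (i + 1) for i, qi in enumerate(q)))

# Using q1 = 4.7e-7 per ug/m^3 (the methylene chloride unit risk cited in the
# text), the risk at 1 ug/m^3 is ~4.7e-7: the low-dose linear behavior that
# the default assumes.
risk_at_unit_dose = lms_extra_risk(1.0, [4.7e-7])
```

The model is "linearized" precisely because, whatever the fitted higher-order terms, the low-dose slope q1 drives the risk at exposures of regulatory concern; that is the feature a nongenotoxic mode of action would call into question.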
The process of choosing between alternative exposure-response models is difficult because the models cannot be validated directly for their applicability for estimating lifetime cancer risks at exposures of regulatory concern. Indeed, it is possible to obtain cancer incidence data on exposed laboratory animals and distinguish them from the control incidence only over a narrow range, from some value over 1% (10^-2) to about 50% (5 × 10^-1) cancer incidence. In regulation of chemicals, the extrapolation may be over a range of up to 4 orders of magnitude (from 10^-2 to 10^-6), going from experimental observations to estimated risks of cancer incidence at exposures of regulatory concern. One approach to increasing the accuracy with which comparisons between measured outcome and model projections can be made involves increasing the size of the experimental populations. However, statistical considerations, the cost of studying large numbers of animals, and the greater difficulty of experimental control in
larger studies put narrow limitations on the use of this approach. Similar problems exist in conducting epidemiological studies.
An attractive alternative is to use advances in knowledge of the molecular and cellular mechanisms of carcinogenesis. Identification of events (e.g., cell proliferation) and markers (e.g., DNA adducts, suppressor genes, oncogenes, and gene products) associated with various steps in the multistep process of carcinogenesis creates a potential for modeling these events and products at low exposure. Direct tests of the validity of exposure-response models at risks of around 10^-6 are not likely in the near future. However, with an order-of-magnitude improvement in sensitivity of detection of precancerous events with a probability of occurrence down to around 10^-3 to 10^-2, the opportunity will be available to evaluate alternative modes of action and related exposure-response models at substantially lower exposure concentrations than has been possible in the past. For example, it should soon be possible to evaluate compounds that are presumed to have different modes of action (direct interaction with DNA and genotoxicity versus cytotoxicity) and alternative models (linearized multistage versus threshold) that might yield markedly different risks when extrapolated to realistic exposures and low risks.
Findings And Recommendations
Use of Default Options
FINDING: EPA's practice of using default options when there is doubt about the choice of appropriate models or theory is reasonable. EPA should have a means of filling the gap when scientific theory is not sufficiently advanced to ascertain the correct answer, e.g., in extrapolating from animal data to responses in humans.
RECOMMENDATION: EPA should continue to regard the use of default options as a reasonable way to cope with uncertainty about the choice of appropriate models or theory.
Articulation of Defaults
FINDING: EPA does not clearly articulate in its risk-assessment guidelines that a specific assumption is a default option.
RECOMMENDATION: EPA should clearly identify each use of a default option in future guidelines.
Justification for Defaults
FINDING: EPA does not fully explain in its guidelines the basis for each default option.
RECOMMENDATION: EPA should clearly state the scientific and policy basis for each default option.
Alternatives to Default Options
FINDING: EPA's practice appears to be to allow departure from a default option in a specific case when it ascertains that a consensus exists among knowledgeable scientists that the available scientific evidence justifies the departure. EPA, however, has not articulated criteria for allowing such departures.
RECOMMENDATION: The agency should consider formalizing its criteria for departures from default options, both to give greater guidance to the public and to lessen the possibility of ad hoc, undocumented departures that would undercut the scientific credibility of the agency's risk assessments. At the same time, the agency should be aware of the undesirability of having its guidelines evolve into inflexible rules.
Process For Departures
FINDING: EPA has relied on its Science Advisory Board and other expert bodies to determine when a consensus among knowledgeable scientists exists.
RECOMMENDATION: EPA should continue to use the Science Advisory Board and other expert bodies. In particular, the agency should continue to make the greatest possible use of peer review, workshops, and other devices that ensure broad scientific participation, so that its risk-assessment decisions draw on the best available science through a process that allows full public discussion and peer participation by the scientific community.
FINDING: EPA has not stated all the default options in each step in the risk-assessment process, nor the steps used when there is no default. Chapters 7 and 10 elaborate on this matter and identify several possible "missing defaults."
RECOMMENDATION: EPA should explicitly identify each generic default option in the risk-assessment process.