In recent decades, the public has become increasingly aware of seemingly innumerable reports of health threats from the environment. Myriad announcements about pesticides in food, pollutants in the air, chemical contaminants in drinking water, and hazardous-waste sites have created public concern about the chemical products and byproducts of modern industrial society. Alongside that concern is public skepticism about the reliability of scientific predictions concerning possible threats to human health. The skepticism has arisen in part because scientists disagree. But it is also apparent that many people want to understand the methods for assessing how much their exposures to chemicals threaten their health and well-being.
Many environmental issues that have risen to public prominence involve carcinogens, substances that can contribute to the development of cancer. Sometimes the decision that a substance is a carcinogen is based on evidence from workers exposed to high concentrations in the workplace, but more often it is based on evidence obtained in animals exposed to high concentrations in the laboratory. When such substances are found to occur in the general environment (even in much lower concentrations), efforts are made to determine the exposed population's risk of developing cancer, so that rational decisions can be made about the need for reducing exposure. However, scientists do not have and will not soon have reliable ways to measure carcinogenic risks to humans when exposures are small. In the absence of an ability to measure risk directly, they can offer only indirect and somewhat uncertain estimates.
Responses to these threats, often reflected in legislation and regulations, have led to reduced exposures to many pollutants. In recent years, however, concerns have arisen that the threats posed by some regulated substances might have been overstated and, conversely, that some unregulated substances might pose greater threats than originally believed. Questions have also been raised about the economic costs of controlling or eliminating emissions of chemicals that might pose extremely small risks. Debates about reducing risks and controlling costs have been fed by the lack of universal agreement among scientists about which methods are best for assessing risk to humans.
Epidemiological studies (typically, comparisons of disease rates between exposed and unexposed populations) are not sufficiently precise to find that a substance poses a carcinogenic risk to humans except when the risk is very high or involves an unusual form of cancer. For this reason, animal studies generally provide the best means of assessing potential risks to humans. However, laboratory animals are usually exposed to toxicants at concentrations much higher than those experienced by humans in the general population. It is not usually known how similar the toxic responses in the test animals are to those in humans, and scientists do not have indisputable ways to measure or predict cancer risks associated with small exposures, such as those typically experienced by most people in the general environment.
Some hypotheses about carcinogens are qualitative. For example, biological data might suggest that any exposure to a carcinogen poses some health risk. Although some scientists disagree with that view or believe that it is not applicable to every carcinogen, its adoption provides at least a provisional answer to a vexing scientific question, namely whether people exposed to low concentrations of substances that are known to be carcinogenic at high concentrations are at some risk of cancer associated with the exposure. The view has dominated policy-making since the 1950s but is not always consistent with new scientific knowledge on the biological mechanisms of chemically induced cancer.
Beginning in the 1960s, toxicologists developed quantitative methods to estimate the risks associated with small exposures to carcinogens. If it were reliable, quantitative risk assessment could improve the ability of decision-makers, and to some extent the public, to discriminate between important and trivial threats and improve their ability to set priorities, evaluate tradeoffs among pollutants, and allocate public resources accordingly. In short, it could improve regulatory decisions that affect public health and the nation's economy.
During the 1970s and 1980s, methods of risk assessment continued to evolve, as did the underlying science. It became increasingly apparent that the process of carcinogenesis was complex, involving multiple steps and pathways. The concept that all cancer-causing chemicals act through mechanisms similar to those operative for radiation was challenged. Some chemicals were shown to alter DNA directly and hence to mimic radiation. But evidence developed that other chemicals cause cancer without directly altering or damaging DNA, for example, through hormonal pathways, by serving as mitogenic stimuli, or by causing excess cell death with compensatory cell proliferation. Biologically based and pharmacokinetic models were introduced in some cases to describe exposure-response relationships more accurately. During the same period, substantial advances were made in modeling the dispersion of airborne materials from sources to receptors and in conducting exposure assessments. Furthermore, important advances have been made in the last 10 years in understanding the basic biology of chemical toxicity. All these advances are beginning to have a major impact on the estimation of risks associated with hazardous air pollutants.
Regulation of Hazardous Air Pollutants
Before the enactment of the Clean Air Act Amendments of 1990 (1990 Amendments), Section 112 of the Clean Air Act required that the Environmental Protection Agency (EPA) set emission standards for hazardous air pollutants "to protect the public health with an ample margin of safety." In 1987, the District of Columbia Circuit Court of Appeals, in Natural Resources Defense Council v. EPA (824 F.2d 1146), interpreted this language to mean that EPA must first determine the emissions level that is safe (one that represents an acceptable degree of risk) and then add a margin of safety in light of the uncertainties in scientific knowledge about the pollutant in question. The agency was permitted to consider technological feasibility in the second step but not in the first.
In response, EPA decided that it would base its regulatory decisions largely on quantitative risk assessment. The agency adopted a general policy that a lifetime cancer risk of one in 10,000 for the most exposed person might constitute acceptable risk and that the margin of safety should reduce the risk for the greatest possible number of persons to an individual lifetime risk no higher than one in 1 million (10⁻⁶).
The 1990 Amendments rewrote Section 112 to place risk assessment in a key role but one secondary to technology-based regulation. As altered, Section 112 defines a list of substances as hazardous air pollutants, subject to addition or deletion by EPA. Sources that emit hazardous air pollutants will be regulated in two stages. In the first, technology-based emissions limits will be imposed. Each major source of hazardous air pollutants must meet an emission standard, to be issued by EPA, based on using the maximum achievable control technology (MACT). Smaller sources, known as area sources, must meet emissions standards based on using generally available control technology.
In the second stage, EPA must set residual-risk standards that protect public health with "an ample margin of safety" if it concludes that the technology-based standards have not done so. The establishment of a residual-risk standard is required if the MACT emission standard leaves a lifetime cancer risk for the most exposed person of greater than one in a million. In actually setting the standard, though, EPA is free to continue to use its present policy of accepting higher risks. Quantitative risk-assessment techniques will be relevant to this second stage of regulation, as well as to various decisions required in the first stage.
Charge to the Study Committee
Section 112(o) of the Act (quoted in full in Appendix M) directs EPA to arrange for the National Academy of Sciences to review the risk-assessment methods used by the agency to determine the carcinogenic risks associated with exposure to hazardous air pollutants and to recommend improvements in those methods.
The Academy's report must be considered by EPA in revising its present risk-assessment guidelines.
Current Risk-Assessment Practices
Methods for estimating risk to humans exposed to toxicants have evolved steadily over the last few decades. Not until 1983, however, was the process codified in a formal way. In that year, the National Research Council released Risk Assessment in the Federal Government: Managing the Process. This publication, now known also as the Red Book, provided many of the definitions used throughout the environmental-health risk-assessment community today. The Red Book served as the basis for the general description of risk assessment used by the present committee.
Risk assessment entails the evaluation of information on the hazardous properties of substances, on the extent of human exposure to them, and on the characterization of the resulting risk. Risk assessment is not a single, fixed method of analysis. Rather, it is a systematic approach to organizing and analyzing scientific knowledge and information for potentially hazardous activities or for substances that might pose risks under specified conditions.
In brief, according to the Red Book, risk assessment can be divided into four steps: hazard identification, dose-response assessment, exposure assessment, and risk characterization.
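To make the division concrete, the following sketch arranges the four steps as a simple calculation. It is purely illustrative: the function names, the potency value, and the intake and body-weight defaults are stand-ins chosen for this example, not EPA's actual models or numbers.

```python
# Illustrative only: names and numbers here are hypothetical stand-ins.

def identify_hazard(evidence: dict) -> bool:
    # Step 1, hazard identification: is there evidence that the
    # substance causes the adverse effect at all?
    return (evidence.get("animal_bioassay_positive", False)
            or evidence.get("epidemiology_positive", False))

def lifetime_dose(air_conc_ug_m3: float, intake_m3_day: float = 20.0,
                  body_weight_kg: float = 70.0) -> float:
    # Step 3, exposure assessment: convert an air concentration into a
    # lifetime average daily dose in mg per kg body weight per day.
    return air_conc_ug_m3 * 1e-3 * intake_m3_day / body_weight_kg

def characterize_risk(slope_per_mg_kg_day: float, dose: float) -> float:
    # Step 4, risk characterization: combine the dose-response slope
    # (step 2, risk per unit dose) with the estimated dose.
    return slope_per_mg_kg_day * dose

if __name__ == "__main__":
    if identify_hazard({"animal_bioassay_positive": True}):
        # Step 2 is represented here by a hypothetical potency slope.
        risk = characterize_risk(0.05, lifetime_dose(air_conc_ug_m3=1.0))
        print(f"Estimated individual lifetime risk: {risk:.1e}")
```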
Closely related to risk assessment is risk management, the process by which the results of risk assessment are integrated with other information (such as political, social, economic, and engineering considerations) to arrive at decisions about the need and methods for risk reduction. The authors of the Red Book advocated a clear conceptual distinction between risk assessment and risk management, noting, for instance, that maintaining the distinction between the two would help to prevent the tailoring of risk assessments to the political feasibility of regulating the substance in question. But they also recognized that the choice of risk-assessment techniques could not be isolated from society's risk-management goals. The result should be a process that supports the risk-management decisions required by the Clean Air Act and that provides appropriate incentives for further research to reduce important uncertainties on the extent of health risks.
In 1986, EPA issued risk-assessment guidelines that were generally consistent with the Red Book recommendations. The guidelines deal with assessing risks of carcinogenicity, mutagenicity, developmental toxicity, and effects of chemical mixtures. They include default options, which are essentially policy judgments about how to accommodate uncertainty, as well as various assumptions needed for assessing exposure and risk, such as scaling factors for converting test responses in rodents to estimated responses in humans.
As risk-assessment methods have evolved and been applied with increasing frequency in federal and state regulation of hazardous substances, regulated industries, environmental organizations, and academicians have leveled a broad array of criticisms at the processes used by EPA.
Strategies for Risk Assessment
The committee observed that several common themes cut across the various stages of risk assessment and arise in criticisms of each individual step.
By addressing each of those themes in each step in the risk-assessment process, EPA can improve the accuracy, precision, comprehensibility, and utility of the entire risk-assessment process in regulatory decision making.
Flexibility and the Use of Default Options
EPA's risk-assessment guidelines contain a number of "default options." These options are used in the absence of convincing scientific knowledge on which of several competing models and theories is correct. The options are not rules that bind the agency; rather, they constitute guidelines from which the agency may depart when evaluating the risks posed by a specific substance. For the most part, the defaults are conservative (i.e., they represent a choice that, although scientifically plausible given existing uncertainty, is more likely to result in overestimating than underestimating human risk).
EPA has acted reasonably in electing to formulate guidelines. EPA should have principles for choosing default options and for judging when and how to depart from them. Without such principles, the purposes of the default options could be undercut. The committee has identified a number of criteria that it believes ought to be taken into account in formulating such principles: protecting the public health, ensuring scientific validity, minimizing serious errors in estimating risks, maximizing incentives for research, creating an orderly and predictable process, and fostering openness and trustworthiness. There might be additional relevant criteria.
The choice of such principles goes beyond science and inevitably involves policy choices about how to balance the criteria. After extensive discussion, the committee found that it could not reach consensus on what the principles should be or on whether it was appropriate for this committee to recommend principles. Thus, the committee decided not to do so. Appendix N contains papers by several committee members presenting varied perspectives on the appropriate choice of principles: Appendix N-1 advocates a principle of "plausible conservatism," and Appendix N-2 advocates the maximum use of scientific information in the selection of default options. These papers do not purport to represent the views of all committee members.
The committee did agree, though, that EPA often does not clearly articulate in its risk-assessment guidelines that a specific assumption is a default option and that EPA does not fully explain in its guidelines the basis for each default option. Moreover, EPA has not stated all the default options in the risk-assessment process or acknowledged where defaults do not exist.
EPA's practice appears to be to allow departure from a default option in a specific case when it ascertains that there is a consensus among knowledgeable scientists that the available scientific evidence justifies the departure. The agency relies on its Science Advisory Board and other expert bodies to determine when such a consensus exists. But EPA has not articulated criteria for allowing departures.
Validation: Methods and Models
Some methods and models used in emission characterization, exposure assessment, hazard identification, and dose-response assessment are specified as default options. Others are sometimes used as alternatives to the default options. The predictive accuracy and uncertainty of these methods and models for risk assessment are not always clearly understood or clearly explained.
A threshold model (i.e., one that assumes that exposures below some level will not cause health effects) is generally accepted for reproductive and developmental toxicants, but it is not known how accurately it predicts human risk. The fact that current evidence on some toxicants, most notably lead, does not clearly reveal a safe threshold has raised concern that the threshold model might reflect the limits of scientific knowledge, rather than the limits of safety.
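The threshold approach is commonly implemented by dividing a no-observed-adverse-effect level (NOAEL) from an animal study by uncertainty factors to obtain a reference dose. The sketch below illustrates that arithmetic; the NOAEL and the choice of factors are hypothetical values for this example only.

```python
# Hypothetical threshold-model (reference-dose) arithmetic; the NOAEL
# and uncertainty factors below are illustrative, not agency values.

noael_mg_kg_day = 10.0   # no-observed-adverse-effect level from an animal study
uf_interspecies = 10.0   # animal-to-human extrapolation factor
uf_intraspecies = 10.0   # factor for variation in human sensitivity

rfd = noael_mg_kg_day / (uf_interspecies * uf_intraspecies)
print(f"Reference dose: {rfd} mg/kg-day")  # prints 0.1 mg/kg-day
```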
EPA has worked with outside groups to design studies to refine emission estimates. However, it does not have guidelines for the use of emission estimates in risk assessment, nor does it adequately evaluate the uncertainty in the estimates.
EPA has relied on Gaussian-plume models to estimate the concentrations of hazardous pollutants to which people are exposed. These representations of airborne transport processes are approximations. EPA focuses primarily on stationary outdoor emission sources of hazardous air pollutants. It does not have a specific statutory mandate to consider all sources of hazardous air pollutants, but this should not deter the agency from assessing indoor sources to provide perspective in considering risks from outdoor sources.
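The basic Gaussian-plume approximation for the ground-level concentration downwind of an elevated point source, with full ground reflection, can be written in a few lines. In the sketch below, the linear dispersion-coefficient parameterization and all numerical inputs are illustrative stand-ins for the empirical stability-class curves used in practice.

```python
import math

def plume_ground_conc(Q, u, x, y, H, a=0.08, b=0.06):
    """Gaussian-plume ground-level concentration (g/m^3) at downwind
    distance x (m) and crosswind offset y (m), for emission rate Q (g/s),
    wind speed u (m/s), and effective stack height H (m), with full
    ground reflection. The linear sigma parameterizations are crude
    illustrative stand-ins for the empirical stability-class curves."""
    sigma_y = a * x  # crosswind spread grows with distance
    sigma_z = b * x  # vertical spread grows with distance
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-H**2 / (2 * sigma_z**2)))

# Hypothetical example: 1 g/s source, 3 m/s wind, receptor 500 m
# directly downwind of a 30 m stack.
print(f"{plume_ground_conc(1.0, 3.0, 500.0, 0.0, 30.0):.2e} g/m^3")
```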
EPA uses the Human Exposure Model (HEM) to evaluate exposures from stationary sources. The model estimates exposures and risks for both individuals and populations. For individuals, EPA has traditionally estimated the exposure of what is called the maximally exposed individual (MEI) by determining the highest exposure concentration that might be found among the broad distribution of possible exposures. Estimation of the maximum exposure is based on a variety of conservative assumptions, e.g., that the MEI lives directly downwind from the pollution source for his or her entire 70-year lifetime and remains outdoors the entire time. Traditionally, only exposure by inhalation is considered. Recently, in accordance with recommendations of the agency's Science Advisory Board, EPA has begun to replace the MEI estimate with two others: the high-end exposure estimate (HEEE) and the theoretical upper-bound exposure (TUBE).
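Under the MEI assumptions, the exposure calculation collapses to a simple product: continuous lifetime residence at the point of highest concentration means the lifetime exposure concentration equals the maximum modeled concentration. The sketch below illustrates this with hypothetical numbers.

```python
# Hypothetical MEI-style screening calculation. Because the MEI is
# assumed to breathe the highest modeled concentration continuously for
# a 70-year lifetime, lifetime risk reduces to unit risk times that
# concentration; no time-activity adjustment is applied.

max_modeled_conc_ug_m3 = 2.0   # highest annual-average fenceline value
unit_risk_per_ug_m3 = 1.0e-6   # hypothetical inhalation unit risk

mei_lifetime_risk = unit_risk_per_ug_m3 * max_modeled_conc_ug_m3
print(f"MEI lifetime cancer risk: {mei_lifetime_risk:.1e}")  # 2.0e-06
```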
In dose-response assessment, EPA has traditionally treated almost all chemical carcinogens as inducing cancer in a similar manner, mimicking radiation. It assumes that a linearized multistage model can be used to extrapolate from epidemiological observations (e.g., occupational studies) or experimental observations at high doses in laboratory animals down to the low doses usually experienced by humans in the general population.
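The multistage model expresses lifetime tumor probability as P(d) = 1 − exp(−(q₀ + q₁d + q₂d² + …)) with nonnegative coefficients fitted to high-dose data; at low doses the extra risk over background is approximately q₁d, which is the basis of the linearized extrapolation. The sketch below illustrates this with hypothetical coefficients (in practice EPA uses an upper confidence bound on the linear coefficient, often written q₁*).

```python
import math

def multistage_prob(dose, q):
    # P(d) = 1 - exp(-(q0 + q1*d + q2*d^2 + ...)), all q[i] >= 0.
    return 1.0 - math.exp(-sum(qi * dose**i for i, qi in enumerate(q)))

def extra_risk(dose, q):
    # Risk above background: (P(d) - P(0)) / (1 - P(0)).
    p0 = multistage_prob(0.0, q)
    return (multistage_prob(dose, q) - p0) / (1.0 - p0)

# Hypothetical coefficients, as if fitted to a high-dose animal bioassay.
q = [0.01, 0.5, 0.02]  # background, linear, and quadratic terms

# At low doses the linear term dominates, so extra risk is roughly q1*d;
# this is the "linearized" part of the extrapolation.
d = 1e-4
print(f"extra risk: {extra_risk(d, q):.3e}")  # ~5.0e-05
print(f"q1 * dose : {q[1] * d:.3e}")          # 5.0e-05
```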
Priority-Setting and Data Needs
EPA does not have the exposure and toxicity data needed to establish the health risks associated with all 189 chemicals identified as hazardous air pollutants in the 1990 Amendments. Furthermore, EPA has not defined how it will determine the types, quantities, and quality of data that are needed to assess the risks posed by facilities that emit any of those 189 chemicals or how it will determine when site-specific emission and exposure data are needed.
Many types of variability enter into the risk-assessment process: variability within individuals, among individuals, and among populations. Sources of variability include the nature and intensity of exposure and differences in susceptibility to toxic insult related to age, lifestyle, genetic background, sex, ethnicity, and other factors.
Interindividual variability is not generally considered in EPA's cancer risk assessments. The agency's consideration of variability has been limited largely to noncarcinogenic effects, such as asthmatic responses to sulfur dioxide exposure. Analyses of such variability usually form the basis of decisions about whether to protect both the general population and sensitive individuals.
There are numerous gaps in scientific knowledge regarding hazardous air pollutants. Hence, there are many uncertainties in risk assessment. When the uncertainty concerns the magnitude of a quantity that can be measured or inferred from assumptions, such as exposure, the uncertainty can be quantified. Other uncertainties pertain to the models being used; these stem from a lack of knowledge needed to determine which scientific theory is correct for a given chemical and population at risk and thus which assumptions should be used to derive estimates. Such uncertainties cannot be quantified on the basis of data.
The upper-bound point estimate of risk typically computed by EPA does not convey the degree of uncertainty in the estimate. Thus, decision-makers do not know the extent of conservatism, if any, that is provided in the risk estimate.
Formal uncertainty analysis can help to inform EPA and the public about the extent of conservatism that is embedded in the default assumptions. Uncertainty analysis is especially useful in identifying where additional research is likely to resolve major uncertainties.
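One common form of such analysis is Monte Carlo simulation, in which distributions for the uncertain inputs are propagated through the risk calculation to yield a distribution of risk rather than a single point estimate. The sketch below illustrates the idea; both input distributions are hypothetical.

```python
import math
import random

# Hypothetical Monte Carlo uncertainty analysis: sample the uncertain
# inputs, propagate them through the risk calculation, and report
# percentiles of the resulting risk distribution.

random.seed(1)  # fixed seed so the sketch is reproducible
N = 100_000
risks = []
for _ in range(N):
    conc = random.lognormvariate(0.0, 0.5)                # exposure conc. (ug/m^3)
    potency = random.lognormvariate(math.log(1e-6), 1.0)  # unit risk per ug/m^3
    risks.append(conc * potency)

risks.sort()
print(f"median risk     : {risks[N // 2]:.1e}")
print(f"95th percentile : {risks[int(0.95 * N)]:.1e}")
```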
Uncertainty analysis should be an iterative process, moving from the identification of generic uncertainties to more refined analyses for chemical-specific or industrial plant-specific uncertainties. The additional resources needed to conduct the more specific analyses can be justified when the health or economic impacts of the regulatory decision are large and when further research is likely to change the decision.
Typically, people at risk are exposed to a mixture of chemicals, each of which might be associated with an increased probability of one or more health effects. In such cases, data are often available on only one of the adverse effects (e.g., cancer) associated with each chemical. At issue is how best to characterize and estimate the potential aggregate risk posed by exposure to a mixture of toxic chemicals. Furthermore, emitted substances might be carried to and deposited on other media, such as water and soil, and cause people to be exposed via routes other than inhalation, e.g., by dermal absorption or ingestion. EPA has not yet indicated whether it will consider multiple exposure routes for regulation under the 1990 Amendments, although it has done so in other regulatory contexts, e.g., under Superfund.
EPA adds the risks related to each chemical in a mixture in developing its risk estimate. This is generally considered appropriate when the only risk characterization needed is a point estimate for use in screening; when a more comprehensive uncertainty characterization is desired, more refined methods of combining component risks are needed.
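For the small risks typical of environmental exposures, simple addition of component risks is numerically almost identical to the combination obtained by assuming the component effects occur independently, which is one reason addition is acceptable for screening. The sketch below illustrates the comparison with hypothetical component risks.

```python
# Hypothetical component risks for three chemicals in a mixture.
component_risks = {"chemical A": 3e-6, "chemical B": 1e-5, "chemical C": 5e-7}

# Screening practice: simple addition of component risks.
additive = sum(component_risks.values())

# Combination assuming the component effects occur independently:
# P(at least one effect) = 1 - product of (1 - r_i).
independent = 1.0
for r in component_risks.values():
    independent *= (1.0 - r)
independent = 1.0 - independent

print(f"additive sum           : {additive:.3e}")     # 1.350e-05
print(f"independent combination: {independent:.3e}")  # ~1.350e-05
```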
Certain expressions of probability are subjective, whether qualitative (e.g., that a threshold might exist) or quantitative (e.g., that there is a 90% probability that a threshold exists). Although quantitative probabilities could be useful in conveying the judgments of individual scientists to risk managers and to the public, the process of assessing probabilities is difficult. Because substantial disagreement and misunderstanding concerning the reliability of single numbers or even a range of numbers can occur, the basis for the numbers should be set forth clearly and in detail.
An Iterative Approach
Resources and data are not sufficient to perform a full-scale risk assessment on each of the 189 chemicals listed as hazardous air pollutants in the 1990 Amendments, and in many cases no such assessment is needed. After MACT is applied, it is likely that some of the chemicals will pose only de minimis risk (a risk of adverse health effects of one in a million or less). For these reasons, the committee believes that EPA should undertake an iterative approach to risk assessment. An iterative approach would start with relatively inexpensive screening techniques (such as a simple, conservative transport model) and then, for chemicals suspected of exceeding de minimis risk, move on to more resource-intensive levels of data-gathering, model construction, and model application. To guard against serious underestimations of risk, screening techniques must err on the side of caution when there is uncertainty about model assumptions or parameter values.
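The logic of such a tiered process can be stated compactly: apply a cheap, deliberately conservative screen first, and invest in refined analysis only when the screen cannot rule out more than de minimis risk. The sketch below illustrates this; the screening and refined potency factors are hypothetical.

```python
DE_MINIMIS = 1e-6  # lifetime cancer risk of one in a million

def conservative_screen(emissions_g_s: float) -> float:
    # Tier 1: cheap, deliberately conservative estimate. The screening
    # factor is hypothetical and chosen to overstate risk.
    return emissions_g_s * 1e-4

def refined_assessment(emissions_g_s: float) -> float:
    # Tier 2: stand-in for a resource-intensive site-specific analysis
    # (refined dispersion modeling, local population data, and so on).
    return emissions_g_s * 1e-5

def iterative_risk(emissions_g_s: float) -> float:
    screen = conservative_screen(emissions_g_s)
    if screen <= DE_MINIMIS:
        return screen  # even the conservative screen shows de minimis risk
    return refined_assessment(emissions_g_s)  # otherwise spend more effort

for e in (0.005, 0.5):
    print(f"emissions {e} g/s -> estimated risk {iterative_risk(e):.1e}")
```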
Overall Conclusions and Recommendations
The committee's findings are dominated by four central themes:
Risk assessment is a set of tools, not an end in itself. The limited resources available should be spent to generate information that helps risk managers to choose the best possible course of action among the available options.