Science and Judgment in Risk Assessment (1994)

Suggested Citation:"Part III Implementation of Findings: 12 Implementation." National Research Council. 1994. Science and Judgment in Risk Assessment. Washington, DC: The National Academies Press. doi: 10.17226/2125.

Part III
Implementation of Findings

The committee believes that a major portion of its charge is to consider how its findings and recommendations should be implemented in light of the comprehensive rewriting of Section 112 by Title III of the 1990 amendments to the Clean Air Act. Many of the common problems in health risk assessment might have arisen because of the two most salient features of EPA's implementation of the Red Book paradigm over the last 10 years: the emphasis on single outputs of each step, which are then processed into single numbers for risk; and the separation of the research and analysis functions into discrete, sequential stages.

A tiered system of priority-setting would be an important positive development in the practice of standard-setting and risk analysis. Currently, standards (goals for achieving health and safety) are set in accordance with a Congressional mandate to provide "an adequate margin of safety." Where data do not exist (particularly with respect to responses to low doses and mechanisms of toxicity), EPA has generally chosen default options that, in addition to being in keeping with current scientific knowledge, are intended to be conservative (i.e., health protective) in the outcomes to which they lead. This protective approach provides the basis for developing a stepwise, tiered system for assigning priorities to chemicals to be examined for potential regulation. As a first tier—usually in the absence of data—computations can be made (with the appropriate default assumptions) that lead to a possible regulatory standard. If this standard is readily achievable, no further analysis is called for. If the standard is not achievable, data will be sought to replace the possibly too-conservative default assumptions. Substituting more chemical-specific information for default assumptions will usually lead to less rigid and thus more easily attainable standards (or higher "safe" doses). (The rare situations in which this relaxation of the standards does not occur would imply that the default assumptions were not sufficiently health protective and so needed to be re-examined.)

A stepwise process that replaces default assumptions with specific data can be expected to yield more and more firmly established standards (regulatory doses); i.e., uncertainty should be reduced as a consequence of having more information. The tiered process for setting standards thus reflects the philosophical process of proceeding from conjecture ("it is reasonable that …") through information to (one hopes) wisdom.

The issue of implementation is discussed in Chapter 12, the final chapter, from two points of view. First, technical guidance is provided on EPA's implementation of the recommendations in a regulatory context. Second, the committee discusses institutional issues in risk assessment and risk management.


12
Implementation

Health risk assessment is one element of most environmental decision-making—a component of decisions about whether, how, and to what degree the assessed risk requires reduction. The factors that may be considered by decision-makers depend on the requirements of applicable statutes, precedents established within the responsible government agencies, and good public policy. This chapter discusses how the risk-assessment recommendations in this report could be implemented in the context of Section 112 of the Clean Air Act (as amended in 1990), and it discusses several institutional issues in risk assessment and risk management.

Priority-Setting and Section 112

As we explained in Chapter 2, Section 112 calls for EPA to regulate hazardous air pollutants in two stages. In the first, sources will be required to do what is feasible to reduce emissions. In the second, EPA must set "residual-risk" standards to protect public health with an ample margin of safety if it concludes that implementation of the first stage of standards does not provide such a margin of safety. This second stage will require use of risk assessment.

Neither the resources nor the scientific data exist to perform a full-scale risk assessment on each of the 189 chemicals listed as hazardous air pollutants by Section 112. Nor, as we noted in Part II, is such an assessment needed in many cases.

We therefore urge an iterative approach to risk assessment. Such an approach would start with relatively inexpensive screening techniques and move to more resource-intensive levels of data-gathering, model construction, and model application as the particular situation warranted. To guard against the possibility of underestimating risk, screening techniques must be constructed so as to err on the side of caution when there is uncertainty. The results of these techniques should be used to set priorities for the gathering of further data and the application of successively more complex techniques. These techniques should then be used to the extent necessary to make a judgment. The result would be a process that supports the risk-management decisions required by the Clean Air Act and that provides incentives for further research, without the need for costly case-by-case evaluations of individual chemicals.

Under an iterative approach, a screening analysis is followed by increases in the refinement of the estimate, as appropriate. In effect, each iteration amounts to a more detailed screen. As we have explained in Chapter 6, screening analyses need to incorporate conservative assumptions to preclude the possibility that a pollutant that poses dangers to health or welfare will not receive full scrutiny.

Considering the effort required to carry out a "full-scale" risk assessment of 189 potentially hazardous substances and the current resources of the agency, it is unlikely that this task can be accomplished within the time permitted by the act if full-scale risk assessments must be conducted by EPA itself. This committee recommends a priority-setting scheme (as described in the following sections) based on initial assessments of each chemical's possible impact on human health and welfare. But Congress should recognize that the resources now available to EPA probably will not support a full-scale risk analysis for each source or even each source category within the time permitted, even with priority-setting. Thus, EPA will need alternatives to full-scale risk assessment, and attention should be given to setting priorities for the allocation of resources. In addition, a full statement of resource requirements should be developed and presented to Congress for its use in decisions about budget and for its understanding and guidance with regard to reducing the task.

Iterative Risk Assessment

To implement Section 112, the committee generally supports the tiered, iterative risk-assessment process proposed by EPA in its draft document as shown in Appendix J. As stated by EPA, this process is based on the concept that as the comprehensiveness of a risk assessment increases, the uncertainty in the assessment decreases.

In the absence of sufficient data or resources to characterize each risk-assessment parameter accurately, EPA deliberately uses default options that are intended to yield health-protective risk estimates. Lower-tier risk assessments that are used for preliminary screening rely heavily on default options, and their results should be health-protective. If a lower-tier risk assessment indicates that an unacceptable health risk could be associated with a particular exposure and a regulated party believes that the risk has been overestimated, a higher-tier risk assessment can be performed. The higher-tier risk assessment would be based on more precise (and less uncertain) exposure and health information instead of relying on the default options. Conversely, if EPA believes that a lower-tier risk assessment has underestimated the health risk associated with a particular exposure, a higher-tier risk assessment might yield a more reliable estimate.

The following sections evaluate each step in the health-risk assessment process with reference to how EPA plans to implement its tiered approach.

Exposure Assessment

EPA (1992f) has proposed a tiered scheme for using health risk assessments to delist source categories and evaluate residual risk. EPA asserts that this scheme provides health-protective estimates of risk by assuming maximal exposure levels, except for cases involving complex terrain (for which an alternative dispersion model should be selected from the complex-terrain models available to EPA to estimate maximal concentrations of chemicals in air and hence maximal exposure levels).

In the initial step of the tiered approach (see Table 12-1), the emission rate for a facility is multiplied by a dispersion value obtained from a table and chosen on the basis of two site-specific parameters: stack height and the approximate distance to the site boundary line. A generic "worst-case" meteorology applicable to all noncomplex terrain is used to obtain the dispersion factors for a simple Gaussian-plume model with worst-case plant parameters (e.g., zero-buoyancy plume and zero exit velocity).
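In outline, the first-tier computation is a chain of multiplications. The sketch below is illustrative only: the dispersion factor, unit risk estimate, and reference concentration are hypothetical placeholder values, not entries from EPA's lookup tables.

```python
def tier1_screen(emission_g_per_s, dispersion_factor, unit_risk, rfc):
    """Tier-1 screen: emission rate times a tabulated worst-case
    dispersion factor gives a maximum offsite concentration, from which
    a cancer risk and a chronic hazard index follow."""
    conc = emission_g_per_s * dispersion_factor   # ug/m3 (illustrative units)
    cancer_risk = conc * unit_risk                # excess lifetime cancer risk
    hazard_index = conc / rfc                     # >1 flags potential concern
    return conc, cancer_risk, hazard_index

# Illustrative inputs only: 2-g/s source, lookup factor 5, unit risk
# 1e-6 per ug/m3, chronic reference concentration 40 ug/m3.
conc, risk, hi = tier1_screen(2.0, 5.0, 1e-6, 40.0)
```

Because every factor is chosen conservatively, a hazard index below 1 and a cancer risk below the decision level at this tier end the analysis; anything else triggers the next tier.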

The second tier uses a simple, single Gaussian-plume model that incorporates site-specific data on the site boundary distance; the stack height, exit velocity, temperature, and diameter; the urban-rural classification; and the building dimensions. Again, a generic worst-case meteorology is used in the calculation.
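A minimal version of the Gaussian-plume calculation underlying these screening tiers can be written as follows. The power-law dispersion coefficients are illustrative stand-ins; EPA's SCREEN model uses stability-class-dependent curves and corrections (building downwash, urban or rural classification) not shown here.

```python
import math

def plume_concentration(q, u, h, x):
    """Ground-level, plume-centerline concentration (g/m3) at downwind
    distance x (m) for emission rate q (g/s), wind speed u (m/s), and
    effective release height h (m). Sigma fits are assumed, not EPA's."""
    sigma_y = 0.08 * x ** 0.9   # horizontal spread (m), illustrative fit
    sigma_z = 0.06 * x ** 0.9   # vertical spread (m), illustrative fit
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-h ** 2 / (2.0 * sigma_z ** 2)))
```

For a ground-level release the estimate falls off monotonically with distance, whereas an elevated stack produces a peak some distance downwind; that is why the screening tiers search for the maximum offsite concentration rather than evaluating a single receptor.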

In the third tier, the modeling would include multiple-point release, local meteorologic characteristics, and the choice of specific local receptor-site locations. The maximal exposure is calculated by multiplying the estimated concentration by residence time. EPA is debating the extent to which it will use less than lifetime residence (i.e., alter the 70-year-lifetime assumption).
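The residence-time question reduces, under a linear-in-cumulative-exposure assumption, to a simple scaling of the 70-year default; the function below sketches that arithmetic and is not an EPA procedure.

```python
def adjusted_lifetime_risk(conc, unit_risk, residence_years,
                           lifetime_years=70.0):
    """Scale a 70-year-lifetime cancer-risk estimate by actual years of
    residence near the source (assumes risk is linear in cumulative
    exposure)."""
    return conc * unit_risk * (residence_years / lifetime_years)
```

A 7-year average residence, for example, yields one-tenth the risk estimated under the default 70-year assumption, which is the magnitude of difference at stake in the debate.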

In a presentation made to the committee by EPA staff, a fourth tier was described that would incorporate time-activity modeling, as in the Human Exposure Model II (HEM-II). HEM-II uses an approach similar to that of the National Ambient Air Quality Standards (NAAQS) Exposure Model (NEM), which has been used in exposure assessments for criteria pollutants (tropospheric ozone, sulfur dioxide, etc.). However, the NEM has not been fully evaluated and validated (NRC, 1991a).

There are a number of difficulties associated with the method in the initial tiers. First, EPA does not specify that a conservative emission rate should be


TABLE 12-1 Summary of EPA's Draft Tiered Risk-Assessment Approach as Presented to the Committee

Tier 1: Lookup Tables

Two tables: short- and long-term (based on EPA's SCREEN model)

Inputs: emissions rate, release height, fenceline distance

Outputs:
Maximum offsite concentration (focus on maximum exposed individual, MEI)
Maximum offsite cancer risk (based on unit risk estimate, URE)
Chronic noncancer hazard index (based on chronic health thresholds)
Acute noncancer hazard index (based on acute health thresholds)

Tier 2: Screening Dispersion

Based on EPA's SCREEN model (uses conversion factor for long-term)

Inputs: Tier 1 + stack diameter, exit velocity and temperature, rural/urban classification, and building dimensions

Outputs:
Maximum offsite concentration and downwind distance (focus on MEI)
Cancer risk and/or noncancer hazard index

Tier 3: Site-Specific Dispersion Model

Based on EPA's TOXLT, TOXST models (uses the ISC dispersion model)

Inputs: Tier 1 + Tier 2 + local meteorology, release point and fenceline layout, terrain features, release frequency, and duration

Outputs:
Long-term: receptor-specific risk, chronic noncancer hazard index (MEI)
Short-term: receptor-specific hazard index exceedance rate (MEI)

Ambient monitoring used to enhance modeling or as alternative on case-by-case basis for difficult modeling applications

Tier 4: Site-Specific Dispersion and Exposure Model

Based on EPA's HEM II model

Inputs: Tier 3 + population model

Outputs: Maximum offsite concentration (MEI), exposure distribution, and population risk (incidence) with optional characterization of uncertainties

Personal monitoring used as alternative on case-by-case basis for difficult modeling applications

NOTES:

(1) Approach considers flat or rolling terrain only;
(2) Complex terrain alternatives used on case-by-case basis;
(3) Analysis considers only direct inhalation exposure;
(4) Tiers proceed from most conservative and least data intensive (Tier 1) to least conservative and most data intensive (Tier 4).

SOURCE: Guinnup, 1992 (see Appendix J).


used; EPA will use the emission rate for normal operation of a plant at full capacity. In addition, none of EPA's current emission-estimation methods accounts for "upset" situations with higher-than-normal emissions or for the uncertainty in emission estimates. Therefore, current emission estimates cannot be relied on as necessarily conservative.

Second, the committee reiterates its earlier concern (see Chapter 7) about the use of the Gaussian-plume model beyond the lower-tier screening level. Even there, complex terrain can create substantial problems. The EPA complex-terrain models have focused on emissions released from tall stacks toward the side of a hill or valley, and not on poor dispersion of material from a point or area source within a valley. Models for complex terrain have been developed and evaluated by the atmospheric-research community. The committee does not recommend any specific model, but suggests that EPA look beyond its set of existing models to find the best possible ones for the dispersion of hazardous air pollutants in the particular type of complex terrain that applies in each case. In addition, models should be considered that account for the possibility of a negatively buoyant plume (e.g., a gas heavier than air).

For the conditions under which hazardous air pollutants are emitted from many emission points within a plant, EPA has not demonstrated that the simple, single Gaussian-plume approach (choosing dispersion values from a table generated on the basis of a generic worst-case meteorology and worst-case plant-dispersion characteristics) will be appropriate for all the situations to which it might be applied. The Gaussian-plume models have been tested for the dispersion of criteria pollutants from point sources that typically have good dispersive characteristics (e.g., tall stacks, high thermal buoyancy, and high exit velocity). However, it has not been demonstrated that this generic worst-case meteorology is fully representative of any location, such as cities with substantial local perturbations in the dispersion characteristics (surface roughness, street canyons, heat-island effects, etc.). The committee recommends that, until the evaluations can be completed, exposure assessment for source delisting and evaluating residual risk begin at EPA's current Tier 3, where the industrial source complex (ISC) model with local meteorology and local receptor-site choices will provide better estimates of the worst-case possibilities. If Tiers 1 and 2 can be shown definitively to estimate exposure conservatively, they could be incorporated into the delisting, priority-setting, and residual-risk process.

In accordance with the discussion in Chapter 7, the committee recommends that distributions of pollutant concentration values be estimated with available evaluated stochastic dispersion models that provide more realistic descriptions of the atmospheric dispersion process and that incorporate variability and uncertainty in their estimates. If the screening process suggests that a source cannot be excluded from further review, exposure estimation should be more comprehensive and incorporate more advanced methods of emission characterization, stochastic modeling of dispersion, and time-activity patterns, as discussed in Chapter 7. Exposure assessment can be improved as necessary by incorporating more explicit local topographic, meteorologic, and other site-specific characteristics. However, if the regulated sources find it acceptable to be regulated on the basis of a (truly conservative) screening analysis, then there should be no obligation to go further. If they are not content, then the sources should bear the burden of doing the higher-tier analysis, subject to EPA guidelines and review.
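The stochastic-dispersion recommendation can be illustrated with a small Monte Carlo sketch: uncertain inputs (here, wind speed and the dispersion exponents) are sampled to yield a distribution of concentrations, from which percentiles rather than a single worst case are reported. All distributions below are illustrative assumptions, not calibrated meteorology.

```python
import math
import random

def sample_concentration(rng, q=1.0, x=500.0):
    """One draw of a ground-level plume concentration with randomly
    sampled wind speed and dispersion exponents (assumed distributions)."""
    u = max(0.5, rng.gauss(4.0, 1.5))              # wind speed (m/s)
    sigma_y = 0.08 * x ** rng.uniform(0.85, 0.95)  # horizontal spread (m)
    sigma_z = 0.06 * x ** rng.uniform(0.85, 0.95)  # vertical spread (m)
    return q / (math.pi * u * sigma_y * sigma_z)

rng = random.Random(0)                 # fixed seed for reproducibility
samples = sorted(sample_concentration(rng) for _ in range(1000))
median, p95 = samples[499], samples[949]   # report percentiles, not a point
```

Reporting the 95th percentile alongside the median conveys both the central estimate and the variability, which is the substance of the committee's recommendation.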

Assessment of Toxicity

In EPA's proposed approach, four metrics will be used to determine whether the predicted impact of a source should warrant concern: lifetime cancer risk, chronic noncancer hazard index, acute noncancer hazard index, and frequency with which acute hazard index is exceeded. The toxicity data needed to evaluate these metrics, such as weight-of-evidence characterizations and cancer potencies for carcinogenicity and reference concentrations (RfCs) for noncancer end points, can be found by referring to the Integrated Risk Information System (IRIS) online database (Appendix K). This database is maintained by EPA's Environmental Criteria and Assessment Office within the Office of Research and Development for use by EPA's various program offices, by state air-quality and health agencies, and by other parties that look to EPA to provide current information on chemical toxicity.
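The four metrics can be computed from a concentration time series and chemical-specific toxicity values in a few lines; the reference concentrations and unit risk used below are hypothetical, not IRIS entries.

```python
def decision_metrics(hourly_conc, unit_risk, rfc_chronic, rfc_acute):
    """Compute the four screening metrics from an hourly concentration
    series: lifetime cancer risk and chronic hazard index from the
    long-term mean, plus the peak acute hazard index and the rate at
    which the acute index exceeds 1."""
    mean_conc = sum(hourly_conc) / len(hourly_conc)
    cancer_risk = mean_conc * unit_risk
    chronic_hi = mean_conc / rfc_chronic
    acute_hi = [c / rfc_acute for c in hourly_conc]
    exceedance_rate = sum(1 for h in acute_hi if h > 1.0) / len(acute_hi)
    return cancer_risk, chronic_hi, max(acute_hi), exceedance_rate

# Illustrative series: mostly low hours with one short spike.
risk, chi, max_ahi, rate = decision_metrics(
    [1.0, 1.0, 1.0, 30.0], unit_risk=1e-6, rfc_chronic=40.0, rfc_acute=20.0)
```

Note how the spike drives the acute metrics while barely moving the chronic ones, which is why the short- and long-term metrics are evaluated separately.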

The IRIS database will be the primary source of toxicity data for the tiered risk-assessment approach described here. The committee believes that it is appropriate for EPA to use IRIS as its preferred data source for toxicity information, rather than duplicate the effort needed to assemble and maintain such information for those of the 189 chemicals specified in Section 112 that are in IRIS. For chemicals that require a higher-tier risk assessment, EPA could supplement the information in IRIS with additional data, probability distributions, and modeling approaches. For Section 112 chemicals not yet in IRIS, EPA must collect and enter data on carcinogenic and noncarcinogenic effects.

For many of the 189 chemicals now on the Section 112 list, there are no IRIS entries, or the existing entries do not include cancer potencies for suspected carcinogens or RfCs for chemicals suspected of causing acute or chronic noncarcinogenic health effects. In these cases, it will be appropriate for EPA to develop crude screening estimates of cancer potencies and RfCs for use in research planning; if the screening values are entered in IRIS, they should be clearly identified as screening values. These estimates should be combined with exposure estimates to calculate potential cancer risks and the likelihood of acute and chronic noncancer health effects. Such estimates may be based, for example, on in vitro tests for carcinogenicity, expert judgment on structure-activity relationships, and other available information and judgment on the toxicity of the chemical in question. These crude estimates should not be used as a basis for regulatory decisions when the supporting data are not adequate for such use. However, an entry can and should summarize current information on the extent to which a chemical might be a potentially important threat to public health. If a bioassay of the chemical is under way through the National Toxicology Program or elsewhere, the estimated date of availability of results should be stated in IRIS.

A review of IRIS by the EPA Science Advisory Board (SAB) noted the importance of IRIS for both EPA and non-EPA users (Appendix K). If IRIS entries are to be used for risk assessments that lead to major risk-management decisions, then EPA must ensure their quality and keep them up to date. It is EPA's standard practice that IRIS files must be assessed in their entirety so that cancer potencies and RfCs are not distributed without an accompanying narrative description of their scientific basis; IRIS is intended not only as a source of numerical data, but also as an important source of qualitative risk-assessment information. The appropriate caveats and explanations of numerical values are important for keeping risk managers and other IRIS users fully informed about health-risk information.

The SAB noted that chemical-specific risk assessments, such as Health Assessment Documents (HADs) and SAB reviews of HADs, should be referenced and summarized in IRIS. Where different risk assessments have yielded different cancer potencies or RfCs, the file should include an explanation that relates these differences to variations in data, assumptions, or modeling approaches. Data deficiencies and weaknesses in a risk assessment that might be remedied through further data collection and research should be described in the file. In this way, IRIS can evolve into a high-quality information support system for the needs of EPA and other users relative to the Section 112 chemicals, providing not just one set of numbers for dose-response assessment, but also a summary of alternative approaches, their strengths and weaknesses, and opportunities for further research that could improve risk estimates.

Summary

The committee supports EPA's general concept of tiered risk assessment with two modifications. First, the tiered approach requires a conservative first level of analysis. EPA asserts that its approach provides a conservative risk estimate, except in the case of complex terrain. But EPA has not yet demonstrated that this assertion is valid. Second, rather than stopping a risk assessment at a particular point, EPA should encourage and support an iterative risk-assessment process wherein improvements in the accuracy of the risk estimate will replace the initial screening estimate. This process will continue until one of three possible conclusions is reached: (1) the risk, assessed conservatively, is found to be lower than the applicable decision level (e.g., 1 in a million excess lifetime risk of cancer); (2) further improvements in the model or data would not significantly change the risk estimate; or (3) the source or source category determines that the cost of reducing emissions of this pollutant is not high enough to justify the investment in research required for further improvements in the accuracy and precision of analysis. This procedure provides private parties with the opportunity to improve the models and data used in the analysis.
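The three stopping conditions suggest a simple iteration rule, sketched here with hypothetical tier-by-tier risk estimates; the decision level and convergence tolerance are illustrative.

```python
def iterate_assessment(tier_risks, decision_level=1e-6, rel_tol=0.05):
    """Walk successively refined risk estimates and stop when (1) the
    risk clears the decision level, (2) the estimate has effectively
    converged, or (3) no further refinement is supplied (the regulated
    party declines to fund more analysis)."""
    previous = None
    for tier, risk in enumerate(tier_risks, start=1):
        if risk < decision_level:
            return tier, "below decision level"
        if previous is not None and abs(risk - previous) <= rel_tol * previous:
            return tier, "estimate converged"
        previous = risk
    return len(tier_risks), "no further refinement"
```

For example, a conservative Tier-1 estimate of 1e-4 followed by a refined estimate of 5e-7 stops at Tier 2 under the first condition; two nearly identical successive estimates stop under the second.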

EPA must avoid interminable analysis. At some point, the risk-assessment portion of a decision should end, and a decision should be made. Reasonable limits on time (consistent with statutory time limits) and resources must be set for this effort, and they should be based on a combination of the regulatory constraints and the benefits gained from additional scientific analysis. It is not necessary to determine or measure every variable to high accuracy in the risk-assessment process. Rather, the uncertainties that have the most influence on a risk assessment should be the ones that the risk assessor most seeks to quantify and then reduce.

EPA Practices: Points to Consider

Throughout this report, the committee has noted differences between the methods EPA is currently using and practices the committee considers useful in the risk-assessment process. The committee's recommendations (summarized below) highlight differences that should be considered as EPA undertakes its proposed tiered risk-assessment approach.

Select and validate an appropriate emission and exposure-assessment model for each given implementation in the risk-assessment process.

Use a carcinogen-classification scheme that reflects the strength and relevance of evidence as a supplement to the proposed narrative description.

Screen the 189 chemicals for programmatic priorities for the assessment of health risks, identify gaps in the data on the 189 chemicals, develop incentives to expedite generation of the needed data, and evaluate the quality of data before their use.

Clarify defaults and the rationales for them, including defaults now "hidden," and develop criteria for selecting and departing from the defaults.

Clarify the sources and magnitudes of uncertainties in risk assessment.

Develop a default factor or procedure to account for the differences in susceptibility among humans.

Use a specific conservative mathematical estimation technique to determine exposure variability.

Conduct pediatric risk assessments whenever children might be at greater risk than adults.

Evaluate all routes of exposure to address multimedia issues.

Use an upper-bound interspecies dose-scaling factor for screening-level estimates.

Fully communicate to the public each risk estimate, the uncertainty in the risk estimates, and the degree of protection.


Implications for Priority-Setting for Title III Activities

With a large number of hazardous air pollutants, hundreds of source categories, perhaps hundreds of sources within many of those categories, and strains on personnel and financial resources, EPA will need to set priorities for its actions under Section 112. In addition, Title IX of the Amendments requires EPA to perform health assessments at a rate sufficient to make them available when needed for the residual-risk assessments under Title III (approximately 15 per year). To respond to these requirements, EPA will have to determine data needs, the level of analysis needed, and the criteria for determining priorities under the Clean Air Act, as well as seek sufficient funds for conducting these analyses.

It is important that EPA establish priorities for its risk-assessment activities. In the past, EPA has often appeared to base its priorities on the ease of obtaining data on a particular chemical. Instead, EPA should assess the relevance and strength of the existing data on each of the 189 chemicals (and mixtures) on the list, identify the gaps in scientific knowledge, and set priorities for filling the gaps so that research that is likely to contribute the most relevant information in the most time- and cost-effective manner will be conducted first.

At a minimum, an inventory of the relevant chemical, toxicologic, clinical, and epidemiologic literature should be compiled for each of the 189 chemicals (or mixtures). For each chemical without animal test data, a structure-activity evaluation should be conducted; and for each mixture, results of available short-term toxicity tests should be analyzed. If the evidence from this step or from reviews of the clinical, epidemiologic, or toxicologic literature suggests potential human health concerns, aggregate emission data and estimates of potentially exposed populations should be reviewed. The completed preliminary analyses, including a description of the assessment process used and the findings, should be placed in the public domain (e.g., IRIS or another mechanism readily accessible to the public). The inclusion of exposure data would represent a departure from past practices, and the database might need to be restructured to accommodate this new information.
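The stepwise review described above can be sketched as a triage function; the field names and decision predicates below are hypothetical illustrations, not EPA terminology.

```python
def preliminary_screen(chem):
    """Return the next action for a listed chemical (or mixture),
    following the stepwise review in the text: structure-activity
    evaluation where animal data are absent, then literature review,
    then emission and population review."""
    if not chem.get("has_animal_data"):
        # Fall back on a structure-activity alert when no test data exist.
        chem["structure_activity_flag"] = chem.get("sar_alert", False)
    literature_concern = (chem.get("clinical_concern")
                          or chem.get("epidemiologic_concern")
                          or chem.get("toxicologic_concern")
                          or chem.get("structure_activity_flag"))
    if not literature_concern:
        return "publish preliminary analysis; low priority"
    if chem.get("aggregate_emissions", 0) > 0 and chem.get("exposed_population", 0) > 0:
        return "proceed to refined emission and exposure review"
    return "review aggregate emissions and exposed populations"
```

The point of the sketch is the ordering: inexpensive literature and structure-activity checks come first, and exposure data are consulted only for chemicals that raise a health concern.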

For any chemical (or mixture) for which preliminary results suggest a potential health concern, it is appropriate to use more accurate emission data (including existing source-specific data), information on the environmental fate and transport of the chemical (or mixture), and more accurate characterizations (e.g., types and estimated numbers) of the populations that may be at risk of exposure, including potentially sensitive subpopulations such as children and pregnant women. In addition, a more intensive review of the relevance and strength of the available animal and human evidence (including toxicologic, clinical, and epidemiologic data) should be developed to refine insights into the probable human-health end points. If the evidence on a chemical (or mixture) and exposure still suggests potential human health effects, the agency should conduct a comprehensive risk assessment. This assessment should be conducted and communicated in accord with the recommendations elsewhere in this report, and the limitations of the data and the related assumptions, limitations, uncertainties, and variability should be appropriately stated with the final output of the assessment.

In summary, this iterative approach to gathering and evaluating the existing evidence is intended to produce a risk assessment for each of the 189 chemicals (or mixtures) that is appropriate to the quality and quantity of available evidence, the estimated size of the problem, and the most realistic scientific judgment of potential human-health risks based on that evidence. The committee believes that the process will provide a time- and cost-efficient mechanism for setting priorities among the 189 chemicals (or mixtures) in accordance with the probable public-health concerns about them.

Model Evaluation and Data Quality

Data should not be used unless they are explicitly judged to be of sufficiently high quality for use in an activity as sensitive as risk analysis. No data should be incorporated into the risk-assessment process unless the method used to generate them has been peer-reviewed before its use. Table 12-2 indicates some steps that EPA could take to substantiate and validate its models and assumptions before use.

EPA should take additional steps to ensure that methods used to generate data for risk assessments are scientifically valid, perhaps through the use of its Science Advisory Board or other advisory mechanisms. A process for public review and comment, with a requirement for EPA to respond, should be available so that industry, environmental groups, or the general public may raise questions regarding the scientific basis of a decision made by EPA on the basis of its risk-assessment process.

Default Options

We have noted in previous chapters that EPA should articulate more explicit criteria by which it will decide whether it is appropriate to use an alternative to a default in risk assessment. Such criteria may be expressed either in the form of a general standard or in terms of specific types of evidence that the agency considers acceptable.

Critics of EPA's use of defaults have characterized the issue of their scientific validity in binary terms: either they are supported by science, in which case they are deemed legitimate, or they are contradicted by new knowledge, in which case they might be too conservative or not sufficiently protective. The reality that EPA confronts is more complex than that dichotomy. New scientific knowledge is rarely conclusive at its first appearance and rarely gains acceptance overnight. Rather, evidence accumulates, and its validity and weight are gradually


TABLE 12-2 Example of Procedure for Methods, Data, and Model Evaluation

Database Evaluation and Validation

1. Develop data-quality guidelines that require all data submitted to agency to meet minimal quality level relative to their intended use before use in given risk-assessment tier.
2. Conduct critical review of data-gathering and data-management systems to ensure that quality and quantity of data are sufficient to meet EPA's risk-assessment responsibilities under act.
3. Document procedures used to develop data, including why particular analytic or measurement method was chosen and its limitations (e.g., sources of error, precision, accuracy, and detection limits).
4. Characterize and document data quality by indicating overall robustness, spatial and temporal representativeness, and degree of quality control implemented; define and display accuracy and precision of measurements; indicate how missing information is treated; identify outliers in data.
5. Account for uncertainty and variability in collection and analysis of data.

Model Evaluation and Validation

1. Develop model-validation guidelines that indicate minimal quality of model that can be used for given risk-assessment purpose.
2. Conduct critical review of each model used in risk-assessment process to ensure that quality and quantity of output of each model are sufficient to meet EPA's risk-assessment responsibilities under act.
3. Assess database and establish and document its appropriateness for model selected.
4. Conduct sensitivity testing to identify important input-controlling parameters.
5. Assess accuracy and predictive power of model.

established through a transition period. The challenge for EPA is to decide when in the course of this evolutionary development the evidence has become strong enough to justify overriding or supplementing an existing default assumption.

Management considerations can appropriately be permitted to influence science-policy decisions related to deviations from established default positions. The committee emphasizes the desirability of well-articulated criteria for deviation from defaults. If new scientific evidence suggests that a supposedly conservative default option is not as conservative as previously believed, a new default option might be substituted. EPA needs a more formal procedural mechanism that will allow departure from existing default models and assumptions.

Uncertainty Analysis

Not characterizing the uncertainty in an analysis can lead to inappropriate decisions. In addition, attempting to incorporate default assumptions of unknown conservatism into each step of a risk assessment can lead to an analysis that is either insufficiently or excessively conservative.


The committee believes that the uncertainty in a risk estimate (i.e., in the risk characterization) can be handled in three ways:

1. Conduct a conservative screening analysis.
2. Conduct a generic uncertainty analysis.
3. Conduct testing or analysis to develop plant-specific and chemical-specific probability distributions.

A possible uncertainty-analysis process is described in Table 12-3. As stated earlier, a key factor in deciding to increase the scope and depth of uncertainty analysis should be the extent to which expected costs and risks might alter decisions.

For parameter uncertainty, enough objective probability data are available in some cases to permit estimation of the probability distribution. In other cases, subjective probabilities might be needed. For example, a committee might conclude on the basis of engineering judgment that emission estimates calculated with emission factors are likely to be correct to within a factor of 100 (see discussion in Chapter 7) and be approximately lognormally distributed. Thus, the median of the estimated distribution would be set equal to the observed or modeled emission estimate, and the geometric standard deviation would be taken as approximately 10. If making such a generic-uncertainty assumption and then picking a conservative estimator from the distribution leads to an estimate that is above the relevant decision-making threshold, that should govern the decision unless affected parties wish to devote more resources to improving the risk characterization. If the risk characterization is sufficient for decision-making purposes, then it will not be necessary to improve it.
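The committee's lognormal example can be made concrete with a short numerical sketch. The emission value and sample size below are illustrative assumptions, not EPA figures: the point estimate is treated as the median of a lognormal distribution with a geometric standard deviation of about 10, and a conservative estimator (here, the 95th percentile) is read off the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative point estimate of emissions (e.g., tons/year) derived
# from an emission factor; not an EPA value.
median_emission = 5.0
gsd = 10.0  # geometric standard deviation, per the committee's example

# Lognormal with the given median and GSD:
# ln(X) ~ Normal(mu = ln(median), sigma = ln(GSD)).
samples = rng.lognormal(mean=np.log(median_emission),
                        sigma=np.log(gsd),
                        size=100_000)

# One GSD either side of the median spans a factor of 100, matching the
# judgment that the estimate is correct to within a factor of 100.
# A conservative estimator picked from the distribution: the 95th percentile.
p95_sampled = np.quantile(samples, 0.95)

# Analytically, the 95th percentile is median * GSD**1.645.
p95_analytic = median_emission * gsd ** 1.645
```

If such a conservatively chosen estimator exceeds the decision-making threshold, under the committee's logic that estimate governs unless affected parties devote resources to a better characterization.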

Institutional Issues In Risk Assessment And Management

EPA's conduct of risk assessment has been evaluated in previous chapters largely from a technical perspective, with the aim of increasing the scientific reliability and credibility of the process. But EPA operates in a decision-making context that imposes pressures on the conduct of risk assessment, and these contextual pressures have led to recurrent problems of scientific credibility, the most important of which were noted in Chapter 2.

Criticisms of EPA's risk assessments take a variety of forms, but many of them focus on three basic decision-making structural and functional problems: unjustified conservatism, often manifested as unwillingness to accept new data or abandon default options; undue reliance on point estimates generated by risk assessment; and a lack of conservatism due to failure to accommodate such issues as synergism, human variability, unusual exposure conditions, and ad hoc departures from established procedures. Although some of those criticisms might have been overstated (and we provide evidence in earlier chapters that they


TABLE 12-3 Example of Procedure for Uncertainty Analysis

Preliminary Steps

1. Conduct generic review of each parameter in each step of risk-assessment process and determine default distribution on the basis of objective probabilities, if possible, or subjective probabilities, if sufficient information is not available. If subjective probabilities cannot be used because of lack of consensus, assume either continuous-uniform or discontinuous-dichotomous distribution between reasonable lower and upper bounds or, if unimodality is reasonable assumption, triangular distribution (between reasonable upper and lower bounds) or lognormal distribution (with reasonable estimates of geometric mean and standard deviation).

Improving Generic Uncertainty Analysis

2. If it is decided that default uncertainty analysis should be improved, conduct review of default probability distribution for each parameter to determine whether default distribution is reasonable. If this distribution is not reasonable, conduct step 3 by either of two methods:

Method 1:

3a. Conduct sensitivity analysis by replacing each component distribution with its corresponding mean.
3b. Select probability distribution for most-sensitive parameters on basis of experience, judgment, and available information from existing data samples, parameter-value ranges, most likely values, or range of most likely values.

Method 2:

3a. Conduct sensitivity analysis by identifying most influential parameters for each model component (sensitivity index = change in model result per unit change in value of parameter).
3b. Determine uncertainty in each parameter (e.g., uncertainty index = ratio of standard deviation of parameter x to mean value of parameter x, or number of orders of magnitude of uncertainty = log of ratio of upper-bound order statistic to corresponding lower-bound order statistic).
3c. Determine model sensitivity-uncertainty index by multiplying sensitivity index for each parameter by its uncertainty index.
3d. Select probability distribution for only influential parameters on basis of experience, judgment, and available information from existing data samples, parameter-value ranges, most likely values, or range of most likely values.

Uncertainty Analysis Flowchart

4. Conduct a Monte Carlo analysis by using probability distributions of parameters as input for simplified versions of each model (e.g., emissions, exposure, and dose-response relationship) to generate a set of synthetic (Monte Carlo) probability distributions for output from each model. Approaches other than Monte Carlo might be equally feasible.


5. For each plausible scientific model (i.e., the default plus any plausible alternatives), conduct a numerical analysis (e.g., Monte Carlo) to determine the probability-distribution function of risk due to uncertainty in the parameters selected in Step 4. Present each distribution separately or combine them into a single representation, clearly indicating which portions of the distribution are derived from fundamental controversy about which model might be correct.
6. Conduct a "reality check" to ensure that resulting risk-estimate distribution makes scientific sense; if not, adjustments may be made. Objective of this analysis is to improve representation of uncertainty. Clearly state that uncertainty representation does not characterize all uncertainty associated with estimated risk.
7. Repeat analysis for each type of risk measurement (e.g., individual risk, population risk, and years of life lost) needed for decision-making.

Risk Management

8. Judge what probability provides sufficient level of confidence relative to regulatory decision needed. For example, risk manager might judge mean of upper 5% of distribution to be point estimate that is appropriately "conservative" within context of regulatory decision. This is simple way to guarantee modest but tangible amount of conservatism with respect to both average and upper tail of uncertainty distribution.

might have been), it is important for EPA to understand the features of its internal organization, decision-making practices, and interactions with other federal agencies that lead to these criticisms of its performance. The agency's prevailing assumptions concerning the appropriate role of risk assessment and its relationship to risk management also should be re-examined.

Stability and Change

Like any other complex organization, EPA is subject to many competing institutional pressures that affect the quality and credibility of its decisions. The agency is expected to use the best possible science in risk assessment; yet assessments must often be carried out under conditions that preclude deliberation or continued study. Problems of intra-agency coordination that have persisted throughout EPA's history create communication gaps between risk assessors and managers. The firefighting mode in which the agency all too often operates hinders the design of effective long-range research programs and even the formulation of the right questions for science to answer. As in all bureaucracies, it often seems safest to take refuge in established approaches, even if these have begun to appear scientifically outdated. External pressures, such as the demands of state agencies for precise guidance, strengthen this tendency.

These overarching managerial problems are faced by any regulatory body that is responsible for rendering consistent decisions based on changing scientific knowledge. Uncertainty, variability, and imperfections in knowledge make it difficult to control environmental risks. To remain accountable to the public under these circumstances, regulatory agencies like EPA must assess uncertain science in accordance with principles that are fully and openly articulated and applied in a predictable and consistent manner from case to case. Risk-assessment guidelines and default assumptions were designed to accomplish those objectives, and they have succeeded to a large extent in making EPA's risk assessments both transparent and predictable.

But an unintended side effect of such explicit decision-making rules is that they can run the risk of becoming rigid over time to the detriment of scientific credibility. Science-policy rules might ensure a valuable degree of consistency from one case to another, but they do so in part by sometimes failing to stay abreast of changing consensus in the scientific community. Some have criticized EPA for allowing bureaucratic considerations of consistency to override good scientific judgment. In trying to ensure that like cases are treated alike, the agency might fail to acknowledge, or even recognize, the scientific reasons why a new case is substantially unlike others in ostensibly the same category. In short, risk-assessment guidelines can be applied in practice like unchangeable rules. That is unfortunate, as articulated earlier in the discussion on guidelines versus requirements.

Since the mid-1970s, numerous reports and proposals have addressed the generic problem of enlisting the best possible science for EPA's decision-making. We note, for example, a January 1992 report, Safeguarding the Future (EPA, 1992f), submitted to the EPA administrator and containing detailed recommendations for strengthening EPA's scientific capabilities. Such reports have stressed the need for high-quality scientific advice, expanded peer review, and adequate incentives for staff scientists—clearly important issues that have attracted attention at the highest levels of EPA's administration, but have not been effectively implemented. The agency's decision-making practices have evolved since the mid-1970s, defining a positive, although gradual, learning curve. There can be little doubt that EPA is aware, at a conceptual level, of steps that can be taken to improve both its in-house scientific capabilities and its collaboration with the independent scientific community.

Management As Guide To Assessment

A more subtle and less widely recognized impediment to good decision-making on risk arises from a rigid adherence to the principle of separating risk assessment from risk management. The call to keep these two functions distinct was originally articulated in response to a widespread perception that EPA was making judgments on the risk posed by a particular substance not on the basis of science, but rather on the basis of its willingness to regulate the substance. The purpose of separation, however, was not to prevent any exercise of policy judgment at all when evaluating science or to prevent risk managers from influencing the type of information that assessors would collect, analyze, or present. Indeed, the Red Book made it clear that judgment (also referred to as risk-assessment policy or science policy) would be required even during the phase of risk assessment. The present committee concludes further that the science-policy judgments that EPA makes in the course of risk assessment would be improved if they were more clearly informed by the agency's priorities and goals in risk management. Protecting the integrity of the risk assessment, while building more productive linkages to make risk assessment more accurate and relevant to risk management, will be essential as the agency proceeds to regulate the residual risks of hazardous air pollutants.

Risk assessment should be an adjunct to the Clean Air Act's primary goal of safeguarding public health, not an end in itself. A legitimate desire for accuracy and objectivity in representing risk can induce such an obsession with numbers that too much energy is expended on representing the results of risk assessment in precise numerical form. Thus, new research might be commissioned without sufficient notice of how marginal its results would be in a given case, or without consideration of newer, less resource-intensive methods of providing the essential inputs.

Moreover, there might be a vast difference between having "the truth" and having enough information to enable a risk manager to choose the best course of action from the options available. The latter criterion is more applicable in a world with resource and time constraints. Determining whether "enough information" exists to decide in turn implies the need to evaluate a full range of decisions. Thus, further improvement of a risk-assessment estimate might or might not be the most desirable course in a given situation, especially if the refinement is not likely to change the decision or if disproportionate resources have been directed to studying the risk at the expense of creating a full set of decision options from which to choose.

Comparisons of Risk

It can be questioned whether risk assessment is sufficiently developed for the particular class of decisions regarding "offsets" or other tradable actions. In general, because of the substantial and varied degrees of model and parameter uncertainties in risk estimates, it is almost impossible to rank relative risks accurately unless the uncertainty in each risk is quantified or otherwise accounted for in the comparison. If the regulatory need for comparison of risks is imperative, one might attempt to compute the uncertainty distribution of the ratio of the two risks and choose from it one or more appropriate summary statistics. For example, one might determine in a given case that there is a 90% chance that chemical A is riskier than chemical B and a 50% chance that it is at least 10 times as risky. Also, if EPA decides to undertake the proposed iterative approach to risk assessment, it will not be possible to apply this kind of ratio comparison to estimates derived from different tiers of analysis. That is because the analyses at each level will be conducted differently and will produce risk estimates of differing accuracy and conservatism. The same might be true of aggregation of risks associated with different exposures.
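The ratio-of-risks idea can be sketched numerically. The two risk distributions below are hypothetical lognormals chosen only to illustrate the summary statistics; none of the parameters come from actual EPA assessments.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical uncertainty distributions for the risks posed by two
# chemicals; the medians and spreads are illustrative only.
risk_a = rng.lognormal(mean=np.log(1e-5), sigma=np.log(5), size=n)
risk_b = rng.lognormal(mean=np.log(2e-6), sigma=np.log(8), size=n)

# Uncertainty distribution of the ratio of the two risks.
ratio = risk_a / risk_b

# Summary statistics of the kind suggested in the text.
p_a_riskier = np.mean(ratio > 1.0)    # chance that A is riskier than B
p_tenfold = np.mean(ratio >= 10.0)    # chance that A is at least 10x as risky
```

Statements such as "a 90% chance that chemical A is riskier" are simply such probabilities read from the ratio distribution.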

Even more difficult is the issue of the relative degrees of reliability in the risk figures being compared. Is it appropriate, for example, to compare actuarial risks with modeled risks? Those and other difficulties suggest that EPA should pay more attention than it now does to the appropriateness of various procedures for risk comparison. A scientifically sound way to do this would be to modify risk-assessment procedures to characterize more specifically the uncertainties in each comparison of risks (some larger, some smaller than the uncertainties in individual risk assessments), and this could be done across tiers.

Risk Management and Research

Improved cooperation between EPA's Office of Air Quality Planning and Standards (OAQPS), which conducts the regulatory work of the air program, and its Office of Research and Development (ORD), which conducts research and revises the risk-assessment guidelines, would be helpful in ensuring that research needs of the risk-management side were met by the research side. For example, the two groups might jointly publish a research agenda on hazardous air pollutants, submit the agenda for public comment and SAB review, publish a final agenda based on these comments, and then report annually on how much progress has been made on the agenda. EPA should have a review and research-management system that catalogs risk-assessment weaknesses as identified by the SAB and other peer-review activities and that helps to direct research within EPA (and to guide strategies in other federal and state agencies and in the private sector) to remedy the weaknesses when the importance of a risk assessment justifies the expenditure of research funds. In many cases, the regulated parties may be willing to fund research that will enable health-protective default options in risk assessment to be replaced by more complex and less conservative alternatives. EPA will need to maintain its own substantial research capability to understand and evaluate advances in risk assessment. In some cases, EPA will want to support targeted risk-assessment research and data collection on specific chemicals that could lead to revisions in risk assessments of such chemicals. Situations might be discovered where current risk-assessment practice is underestimating health risk or where the information base for a chemical is not sufficient to allow regulation to proceed.

Present EPA practice is to remove IRIS listings while cancer potencies or RfCs are under review. This practice is frustrating to non-EPA users, not only because the information becomes inaccessible, but also because EPA has been reluctant to state when such information will be returned to the system. The committee believes that a better practice would be for EPA to retain listings in the database, inform users that it is conducting a review, and perhaps include alternatives that can be used in the interim as the basis for calculated cancer potencies and RfCs. The narrative supporting the information on each chemical in IRIS should inform users about the assumptions underlying each calculation, about sources of data and judgments about uncertainty and variability, and about research under way to improve risk assessment on the chemical in support of future regulatory decisions.

Risk Assessment as a Policy Guide

Allocations of public-health resources reflect, among other things, some estimate of the potential benefits from health improvements achieved, and risk assessment is an important tool for understanding potential public-health impacts. Seen from this perspective, risk assessment should be a principal component of public-health and regulatory programs. Risk-management approaches will differ, perhaps greatly, depending on political choices. But establishing the relative impacts of various resource allocations for achieving risk reduction, by continuing pursuit of comprehensive assessments of risks, should always be an objective.

For example, the committee is concerned that neither Section 112 nor other legislation provides for appropriate control of toxic emissions from mobile and indoor sources. There is strong evidence that public exposure to chemicals (and radiation) in these settings can give rise to higher public-health risk in many cases than outdoor exposure due to stationary-source emissions.

Focusing regulation on the source, rather than on the overall reduction of the pollutant (and its potential risk to public health), is unlikely to be very cost-effective in reducing disease, although it might effectively reduce high individual risks and reduce public concern over involuntary exposures. Given limited funds for both the analysis and control of environmental problems, some believe that EPA should focus on environmental toxicants that pose the greatest public-health threat.

Social and Cultural Factors

Although the principle of maximal risk reduction is of central importance, some social and cultural factors that might introduce different risk-management priorities also need to be considered.

First, it is apparent from many studies that people's perceptions of relative risk do not always match those of technical experts. When it comes to comparing risks, most people evaluate not only the mathematical probability that an adverse outcome will occur—the principal concern of the technical expert—but also other less tangible features of the risk context, most of which are not generally considered by the risk assessor. These other concerns should be expressed and reflected at the stage of risk management.

For example, people generally feel greater anxiety about relatively low-probability events with catastrophic outcomes (such as an airplane crash) than about higher-probability activities that take only a few lives at a time (such as an automobile collision). People are reluctant to accept risks, no matter how small, unless they feel that the risky activity or exposure provides some personal benefit. Risks believed to be imposed by others are less well tolerated than those voluntarily assumed. In a related vein, risks perceived as being of natural origin are less threatening than risks created by other human beings. Risks that scientists do not understand well, and over which they publicly disagree, are more feared than those about which scientific consensus is strong. Buttressing these observations is additional research that helps us to understand why people, and their governments, seem at times much more anxious about, and willing to act against, the risks associated with industrial chemicals than risks that scientists believe are more important from a public-health perspective (Slovic, 1987). We know, for example, that public perceptions of the need for regulation are influenced by such concerns as people's trust in government, their experience with experts' reassurances, and their views about social justice. When public opinion appears to exaggerate the risks associated with industrial products, people's fear might in fact be founded on an understandable mistrust of the institutional context in which those risks are produced, assessed, and eventually controlled.

Summary

Apart from its specific findings and recommendations, the committee's report is dominated by a number of central themes:

1. EPA should retain its conservative, default-based approach to risk assessment for the purposes of screening analysis for standard-setting; however, a number of corrective actions are necessary for this approach to work properly.
2. EPA should rely more on scientific judgment and less on rigid procedures by taking an iterative approach to its work. Such judgment demands more understanding of the relationship between risk assessment and risk management and a creative but disciplined blending of the two.
3. The iterative approach proposed by the committee provides the ability to make improvements in both the models and data used in its analysis. However, in order for this approach to work properly, EPA needs to provide justification for its current defaults and set up a procedure, such as that proposed in the report, that permits departures from the default options.
4. When reporting estimates of risk to decision-makers and the public, EPA should report not only point estimates of risk but also the sources and magnitudes of uncertainty associated with these estimates.


Findings And Recommendations

General findings and recommendations regarding implementation and risk management are presented below.

Tiered vs. Iterative Risk Assessment

EPA proposes to adopt a tiered risk-assessment approach that will begin with a "lookup" table and move to deeper analysis with the amount of conservatism generally decreasing as estimated uncertainty decreases.

Rather than a tiered risk-assessment process, EPA should develop the ability to conduct iterative risk assessments, allowing improvements in the process until the risk, assessed conservatively, is below the applicable decision-making level (e.g., 1 × 10⁻⁶); until further improvements would not significantly change the risk estimate; or until EPA, the source, or the public determines that the stakes are not high enough to warrant further analysis.
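The three stopping conditions above can be expressed as a simple control loop. Everything in this sketch is an illustrative assumption, not EPA policy: the threshold, the convergence tolerance, and the function names are invented for the example.

```python
DECISION_THRESHOLD = 1e-6  # illustrative decision-making level (lifetime risk)
CONVERGENCE = 0.10         # stop when a refinement changes the estimate < 10%

def iterative_assessment(refinements, stakes_warrant_more):
    """Walk through successively refined conservative risk estimates.

    `refinements` yields risk estimates from progressively less
    default-laden analyses; `stakes_warrant_more` is a caller-supplied
    judgment about whether further analysis is worth its cost.
    """
    previous = None
    for estimate in refinements:
        if estimate < DECISION_THRESHOLD:
            return estimate, "below decision-making level"
        if previous is not None and abs(estimate - previous) / previous < CONVERGENCE:
            return estimate, "further refinement would not change the estimate"
        if not stakes_warrant_more(estimate):
            return estimate, "stakes do not warrant further analysis"
        previous = estimate
    return previous, "refinements exhausted"
```

For example, successive estimates of 5e-5, 2e-5, and 1.9e-5 would stop at the third refinement, because the last change falls below the tolerance.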

Verification of Amount of Risk-Assessment Conservatism

In its tiered approach, EPA plans to use exposure models developed and validated for criteria pollutants, but not fully evaluated for the broader group of situations including hazardous air pollutants. In particular, it has not shown that analysis conducted with a simple, single Gaussian-plume approach with the generic worst-case conditions will necessarily be conservative over all situations in which it would be applied.

Until the accuracy and conservatism of the proposed models can be evaluated, EPA should consider beginning at Tier 3, where site-specific data will provide better estimates needed for such key decisions as delisting, priority-setting, and residual-risk decisions.

Full Set of Exposure Models

Even at Tier 3, EPA plans to use a Gaussian-plume model that does not hold over complex terrain. EPA's complex-terrain models focus on tall stacks rather than on the effects of hills or valleys, and they poorly represent the dispersion of emissions from low point sources or area sources.

The committee recommends no specific model, but EPA should look beyond the set of models it now uses to find the best possible models of dispersion of hazardous air pollutants in complex terrain.


IRIS Data Quality

EPA plans to use IRIS as the database for as many as possible of the 189 Section 112 chemicals. The IRIS database has quality problems and is not fully referenced.

EPA should enhance and expand the references in the data files on each chemical and include information on risk-assessment weaknesses for each chemical and the research needed to remedy such weaknesses. In addition, EPA should expand its efforts to ensure that IRIS maintains a high level of data quality. The chemical-specific files in IRIS should include references and brief summaries of EPA health-assessment documents and other major risk assessments of the chemicals carried out by the agency, reviews of these risk assessments by the EPA Science Advisory Board, and the agency's responses to the SAB reviews. Important risk assessments carried out by other government agencies or private parties should also be referenced and summarized.
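One way to picture the expanded chemical file the committee recommends is as a structured record. The field names below are illustrative only and do not reflect IRIS's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ChemicalFile:
    """Hypothetical sketch of an expanded IRIS chemical file of the kind
    the committee recommends; all field names are illustrative."""
    name: str
    cas_number: str
    references: list = field(default_factory=list)        # full citations
    assessment_weaknesses: list = field(default_factory=list)
    research_needs: list = field(default_factory=list)    # to remedy weaknesses
    epa_assessments: list = field(default_factory=list)   # health-assessment summaries
    sab_reviews: list = field(default_factory=list)       # SAB reviews + EPA responses
    external_assessments: list = field(default_factory=list)  # other agencies, private parties
```

Grouping the references, known weaknesses, and review history in one record per chemical is what would let users of IRIS judge the quality of each value rather than take it on faith.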

Toxicity Data Development

Some of the 189 chemicals lack cancer potencies or reference concentrations (RfCs).

If IRIS does not contain a cancer potency or RfC, EPA should develop a procedure for making crude screening estimates. These estimates should generally not be used for regulation, but only as a means of setting research priorities for carrying out the animal studies from which cancer potencies and RfCs could be calculated with EPA standard default methods. EPA should develop a summary of health-risk research needs from a review of the IRIS files on the 189 chemicals. EPA should determine which research is most important, how much of it is likely to be carried out by other parties, and what research should be carried out by EPA and other federal agencies under their mandates to protect public health.
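A crude screening estimate of the kind described might look like the following sketch, in which the surrogate potency (e.g., borrowed from a structural analogue) and the priority cutoffs are invented for illustration:

```python
def screening_priority(exposure_mg_kg_day, surrogate_potency):
    """Crude screening estimate of lifetime cancer risk, suitable only
    for ranking research priorities (not for regulation), as the
    committee recommends. surrogate_potency is per mg/kg-day and might
    come from a structure-activity analogy; the cutoffs are illustrative.
    """
    risk = exposure_mg_kg_day * surrogate_potency  # linear low-dose default
    if risk >= 1e-4:
        return risk, "high priority for testing"
    if risk >= 1e-6:
        return risk, "moderate priority"
    return risk, "low priority"
```

The output would feed the research agenda, flagging which chemicals most need the animal studies from which real potencies and RfCs could be derived.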

Full Data Set for Priority-Setting

EPA often appears to base priorities on the simple availability of data on a particular chemical.

At a minimum, EPA should compile for each of the 189 chemicals an inventory of the existing and relevant chemical, toxicologic, clinical, and epidemiologic literature. For each specific chemical, EPA should have at a minimum a structure-activity evaluation; and for each important mixture, it should complete an analysis of available short-term toxicity tests (such as the Ames test). If review of toxicity information suggests a possible need for regulation to protect human health, it should develop aggregate emission data and estimates of populations potentially exposed.


Iterative Priority-Setting

EPA sometimes appears to base its priorities on a one-time analysis of incomplete and preliminary data.

EPA should take an iterative approach to gathering and evaluating existing evidence, conducting for each of the 189 chemicals a level of risk assessment appropriate to the quality and quantity of available evidence and reflecting the most realistic scientific judgment of potential human health risks. EPA should also maintain continuing oversight of new scientific results so that it can identify the need to re-examine chemicals that it has already assessed.

Full and Complete Documentation of Priority-Setting

EPA does not always clearly communicate the methods and data on which it bases its priority-setting analysis. In addition, emission, exposure, and toxicity information is often not collected in a single database.

Once EPA's preliminary priority-setting analyses are completed for a chemical on the list, a description of the assessment process used, the findings, and the emission, exposure, and toxicity information should be placed in one location in the public domain (e.g., in IRIS).

Guidelines vs. Requirements

EPA and others often interpret the term risk assessment as a specific methodologic approach to extrapolating from sets of human and animal carcinogenicity data, often obtained in intense exposures, to quantitative estimates of carcinogenic risk associated with the (typically) much lower exposures experienced by human populations.

EPA should recognize that the conduct of risk assessment does not require any specific methodologic approach and that it is best seen not as a number or even a document, but as a way to organize knowledge regarding potentially hazardous activities or substances and to facilitate the systematic analysis of the risks that those activities or substances might pose under specified conditions. The limitations of risk assessment thus broadly conceived will be clearly seen as resulting from limitations in our current state of scientific understanding. Therefore, risk-assessment guidelines should be just that—guidelines, not requirements. EPA should give specific long-term attention to ways to improve this process, including changes in guidelines.


Process for Public Review and Comment

EPA does not always provide a method by which industry, environmental groups, or the general public can raise questions regarding the scientific basis of a decision made by EPA during the risk-assessment process.

EPA should provide a process for public review and comment with a requirement that it respond, so that outside parties can be assured that the methods used in risk assessments are scientifically justifiable.

Petitions for Departure from Default Options

EPA does not have a procedural mechanism that allows those outside EPA to petition for departures from default options.

EPA should develop a formal process that allows those outside the agency to petition for departures from default options.

Iterative Uncertainty Analysis

Because EPA often fails to characterize fully the uncertainty in risk assessments, inappropriate decisions and insufficiently or excessively conservative analyses can result.

The committee believes that the uncertainty in a risk estimate can be handled through an iterative process with the following parts: conduct a conservative screening analysis; conduct a default uncertainty analysis; and conduct testing or analysis to develop site-specific probability distributions for each important input. The key factor in deciding to increase the intensiveness of uncertainty analysis should be the extent to which changes in estimates of costs and risks could affect risk-management decisions.
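The third part, propagating site-specific probability distributions through the risk model, is commonly done by Monte Carlo simulation. The distributions below are invented placeholders, not site data:

```python
import random

def monte_carlo_risk(n=10_000, seed=1):
    """Propagate input distributions to a distribution of risk by Monte
    Carlo sampling. The lognormal emission factor and triangular intake
    factor are invented placeholders for the site-specific distributions
    the committee describes; the unit potency is a nominal value."""
    random.seed(seed)
    risks = []
    for _ in range(n):
        emission = random.lognormvariate(0.0, 0.5)  # relative emission factor
        intake = random.triangular(0.5, 1.5, 1.0)   # relative intake factor
        potency = 1e-6                              # nominal unit risk
        risks.append(emission * intake * potency)
    risks.sort()
    return {"median": risks[n // 2], "p95": risks[int(0.95 * n)]}
```

Reporting the median alongside an upper percentile, rather than a single conservative number, is what lets a risk manager see how much conservatism a given estimate embodies.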

Risk Assessment vs. Risk Management

The principle of separation of risk assessment from risk management has led to systematic downplaying of the science-policy judgments embedded in risk assessment. Risk assessment accordingly is sometimes mistakenly perceived as a search for "truth" independent of management concerns.

EPA should increase institutional and intellectual linkages between risk assessment and risk management so as to create better harmony between the science-policy components of risk assessment and the broader policy objectives of risk management. This must be done in a way that fully protects the accuracy, objectivity, and integrity of its risk assessments; the committee does not see these two aims as incompatible. Interagency and public understanding would be served by the preparation and release of a report on the science-policy issues and decisions that affect EPA's risk-assessment and risk-management practices.

Comparisons of Risk

EPA often does not elucidate all relevant considerations of technical accuracy when it compares and ranks risks.

EPA should further develop its methods for risk comparison, taking account of such factors as differing degrees of uncertainty and of conservatism in different categories of risk assessment.

Policy Focus on Stationary Sources

Title III focuses primarily on outdoor stationary sources of hazardous air pollutants and does not consider indoor or mobile sources of those pollutants.

EPA should clearly communicate to Congress that emissions and exposures related to indoor and mobile sources, and thus the aggregate risk they pose to the public, might well exceed those related to stationary sources.

Risk Management and Research

EPA does not appear to use risk assessment adequately as a guide to research and might abandon some important risk-assessment and regulatory efforts prematurely because of data inadequacies.

The conduct of risk assessment reveals major scientific uncertainties in a highly systematic way, so it is an excellent guide to the development of research programs to improve knowledge of risk. EPA should, therefore, not abandon risk assessments when data are inadequate, but should seek to explore the implications for research. Risk-assessment uncertainties can also help to determine the urgency with which such research should be developed. In particular, improved cooperation between EPA's Office of Air Quality Planning and Standards (OAQPS) and its Office of Research and Development (ORD) through such actions as joint publication of a research agenda on hazardous air pollutants would be most helpful.
