Evolution of Risk Analysis at EPA
In examining the quality and utility of Department of Homeland Security (DHS) approaches to risk assessment, the committee decided there would be value in reviewing the practices of other federal agencies that have invested heavily in risk assessment and that now have relatively mature programs. The Environmental Protection Agency (EPA) has a substantial record of performance in this area, but similar activities at agencies such as the Occupational Safety and Health Administration (OSHA), the Food and Drug Administration (FDA), and the Department of Health and Human Services (DHHS) Agency for Toxic Substances and Disease Registry (ATSDR) are also informative.1 In fact, the risk assessment activities of these agencies have much in common, and all draw heavily from a long series of expert reports on risk assessment from the National Academies and other bodies. This appendix contains a brief summary of the essential features of the relatively well established approaches to risk assessment at the EPA, and it also provides a look at how the decision needs of the EPA are satisfied by the approaches taken. Information for this appendix derives from a number of EPA guidelines and policy statements, citations to some of the relevant documents of other federal agencies, and reports of the National Academies, most especially the report released in December 2008 called Science and Decisions: Advancing Risk Assessment. That latter report contains exhaustive documentation of the evolution of these advisory reports and of their implementation over the past 25 years.
This appendix also examines how some of the principles upon which EPA risk assessment approaches are based might be applicable to DHS.
Over a period of several decades, from the 1930s through the 1970s, federal public health and regulatory agencies were given legal authorities to develop scientific information on various agents—chemical, biological, radiological—whose presence in the environment (the workplace, air, water, food, soils, and consumer products) could threaten human health and, further, to take action to limit or eliminate human exposures when health threats were found to be significant. In a few cases, laws require that manufacturers wishing to introduce
certain substances (food additives, pesticides, pharmaceuticals) develop the information needed to evaluate health threats, and they are further required to gain regulatory approvals to market their products. Products requiring premarket approval can be barred from commerce if regulators determine that their safety is questionable. In most cases, however, the agencies are required to develop health-related information, or to use information published in the scientific literature, to assess threats to health and to establish whether the threat is sufficient to support actions to reduce it. This latter model closely approximates the situation at DHS.
Until the mid- to late 1970s, agency approaches to what later came to be called risk analysis were not highly explicit, and they involved no clearly identified and scientifically justified methodology (NRC, 1983). Indeed, scientific and policy controversies of several kinds surfaced in the late 1970s and gave rise to much public concern over the use of scientific information by federal agencies. These concerns prompted a congressionally mandated review by the National Academies, resulting in a report entitled Risk Assessment in the Federal Government: Managing the Process, issued by the National Research Council in 1983. That report, commonly known as “the Red Book,” contained a review and analysis of the scientific and policy controversies that had given rise to it (including allegations that federal risk assessments were often “manipulated” to yield the results desired by decision makers), and it offered a way forward that laid a foundation for risk analysis that continues to this day. Many critics of the 1983 report have focused on the awkwardness of the way it portrayed the relationship of analysis to decision making, a problem corrected in the recent Science and Decisions report (NRC, 2008b). However, the principles for risk analysis set forth in the 1983 report, together with the structure of the risk analysis process and the definitions of key terms it first handed down, remain in place and continue to be relied upon by the EPA and other federal agencies.
Among the several key principles elaborated in the 1983 report, and affirmed in every expert report that has followed, is the need for care in making inferences beyond what has been shown rigorously. Risk-related information collected through various types of scientific investigations (observational and experimental studies) can reveal risks that directly apply only under limited conditions, and the use of such information to assess risks under different conditions requires the imposition of inferences from (or extrapolation beyond) the data. Two examples help to illustrate this problem:
Studies in certain occupational settings in which workers were exposed to high levels of benzene have consistently demonstrated an association between those exposures and excess risks of leukemia. The EPA and other agencies seek to understand whether benzene exposures experienced by the general population, which are typically several orders of magnitude lower than those observed in the workplace, might also pose a risk of leukemia. OSHA similarly seeks to understand whether current occupational exposures, again lower than those found to be associated with excess rates of leukemia, are a health threat. It is nearly impossible to collect risk information directly for the general population, and difficult to collect it at current occupational exposure levels, because the tools of epidemiology are inadequate to these tasks. EPA and OSHA must nevertheless reach some conclusion about general-population and occupational risks and then act on that conclusion if risks are found to be excessive.
Studies in experimental animals, usually performed at exposure levels in great excess of human exposures, must be relied on in many circumstances, because human (epidemiology) data either are not available or are insufficient to assess causation.
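The extrapolation problem underlying both examples can be sketched numerically. The following toy calculation is an illustrative sketch only, not any agency's actual model: all doses and risks in it are hypothetical, and linearity through zero is just one of several possible inference options. It shows how a default linear inference carries an observed high-exposure risk down to an unobserved low exposure:

```python
# Illustrative sketch (not any agency's actual model) of a default
# "inference option": linear extrapolation through zero, which assumes
# excess risk is proportional to dose below the observable range.
# All numbers here are hypothetical.

def linear_extrapolation(observed_dose, observed_excess_risk, target_dose):
    """Estimate excess risk at target_dose, assuming linearity through zero."""
    slope = observed_excess_risk / observed_dose  # excess risk per unit dose
    return slope * target_dose

# Hypothetical occupational finding: 1-in-100 excess leukemia risk at 10 ppm.
# Hypothetical general-population exposure: 0.01 ppm, three orders of
# magnitude lower than the exposures at which risk was actually observed.
population_risk = linear_extrapolation(10.0, 1e-2, 0.01)
print(population_risk)  # roughly 1e-05 under the linearity assumption
```

Nothing in the high-dose data alone dictates the linear form; a sublinear or threshold model fitted to the same observation would yield a very different low-dose estimate, which is precisely why the choice among inference options involves policy as well as science.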
As the 1983 NRC report noted, the EPA (and other agencies) must either adopt some “inference options” for assessing risks under circumstances different from those under which direct risk information is (or can be) collected and measured, or else conclude that nothing at all can be said about the (unmeasured or unmeasurable) risk. The latter conclusion is not a real option because the EPA and its sister agencies could not then fulfill their legal mandate, which is protection of human health.
Of course, some scientific evidence is required to make the inference that health risks can exist under exposure conditions different from those at which they can be measured directly and also to support the case that data developed in experimental animals are useful for evaluating risks to humans. The problem is the lack of scientific evidence and understanding sufficient to determine with accuracy the nature of the inferences that should be used. Indeed, in many cases it is not even possible to determine how inaccurate any given inference might be.
The 1983 NRC report recommended the development by agencies of specific and generally applicable inference options for each of the many types of inferences required to move from limited data to the assessment of health risks. It was recognized that some scientific support could be found for each of the important inferences (or extrapolations), but that the incompleteness of scientific knowledge limited that support. Moreover, in some cases, several inference options might be available and have similarly incomplete scientific bases.
The NRC (1983) report, faced with these conclusions, urged the agencies to develop general guidelines for the conduct of risk assessments. These guidelines would include the scientific basis for risk assessments and would also include the specific inference options that would generically be applied in the conduct of those assessments. It was recognized that the selection of specific inferences from among the options available would involve both scientific and policy choices (the latter different in kind from the policy choices involved in risk management), but that as long as the bases for the choices were made explicit, the agencies would be on solid ground: their assessments would at least be consistent, if not scientifically accurate, and would not easily be manipulated (by, for example, selecting on a case-by-case basis the inference options that yielded
the decision makers’ preferred result).
The 1983 committee and many subsequent committees, including the one that produced Science and Decisions (NRC, 2008b), also recognized that in specific cases (e.g., evaluating the risks associated with a specific chemical) scientific studies might provide evidence that one or more of the generic inferences used by the agencies could be inappropriate. In such circumstances, the agency was encouraged to move from the generic inference to the scientific data available on that specific chemical.
These issues of inference options and policy choices within the risk assessment process might have some applicability to the way DHS approaches its mandate for risk assessment (see below).
The EPA has developed, and periodically revised, extensive guidelines for the conduct of risk analysis, and over the past three decades it has conducted thousands of risk analyses based on them. The agency has also issued regulations of many types based on these risk analyses. It is widely acknowledged that all of these risk analyses contain scientific uncertainties (which vary according to the nature of the data upon which the analyses are based and the number and types of inferences beyond the data that must be made), but they are nevertheless used to support decisions. Although management approaches vary among the different categories of regulated exposures, all regulations are designed to protect human health by ensuring an adequate degree of risk control.
Most EPA risk analyses focus on chemical contaminants of air, food, water, and soils, but some also include microbial pathogens and radiation-emitting agents. In some cases (e.g., those relating to pesticides or certain industrial chemicals), EPA analyses are directed at commercial products to which people might become exposed. The approach to risk analysis for all of the classes of agents and exposure media is the same, and it is based on the guidelines described earlier. Yet, although risk-analysis methodologies are consistent across different classes of agents, the data upon which these analyses are based can vary greatly among them. Further discussion of this issue is useful, because it may assist understanding of the types of problems DHS encounters in dealing with both risk information that has relatively strong support (natural hazards data) and the far-less-certain information pertaining to terrorist threats.
Thus, EPA’s analyses of health risks associated with the so-called criteria air pollutants (nitrogen and sulfur oxides, ozone, lead, particulate matter) are based on large bodies of epidemiological data, providing relatively direct measures of human morbidity and mortality. Risk analyses based on such data require the imposition of relatively few inferences beyond the data. Many other analyses conducted by the agency are based on far less certain data (e.g., data drawn entirely from studies in experimental animals) and cannot be completed without the use of a relatively large number of inferences beyond the data. Analogies can be drawn between these two kinds of EPA risk analysis and, respectively, natural hazards risk analysis and terrorist-related risk analysis. It is perhaps possible to draw from the EPA experience to assist DHS in its stated goal of combining natural hazards and terrorist-related risks within a single methodological framework (see below).
One other aspect of EPA risk analyses needs to be noted. These analyses provide estimates of absolute risk: that is, they are designed to characterize the probabilities of different types of harm associated with exposures to hazardous agents. Risk management decisions seek to reduce risks in accordance with specified, absolute risk criteria for human health protection. Many of the risk analyses thus far conducted by DHS involve risk ranking, based on scales of presumed relative risks, and do not include attempts to provide absolute measures of risk. Thus, faced with two major sources of risk—those from natural hazards and those related to terrorist activities—DHS has thus far chosen to examine each source separately and not to compare the absolute risks from the two sources.
RISK ASSESSMENT AND DECISIONS
The 2008 NRC report Science and Decisions placed heavy emphasis on the need to ensure that risk analyses2 are undertaken only when the decisions they are intended to support (or the problems they are intended to deal with) have been well defined and understood by both decision makers and risk analysts. The committee that authored the 2008 report found that, although it is commonly assumed that one must understand how a risk analysis will be used and what decisions it is meant to impact, those questions are not always addressed by agencies prior to conducting the risk analysis, or they may be approached in a less than systematic or complete way.
Risk analyses can be undertaken at many different levels of complexity and completeness and with varying degrees of care regarding uncertainties. Only by ensuring that the analysis is firmly linked in advance to the specific problem that it is intended to evaluate can the utility of the analysis for ultimate decision making be ensured. “Utility” was regarded in the 2008 report as a critical and highly desirable attribute of risk analyses.
The EPA and other agencies were found by the 2008 study to have made significant progress toward incorporating a “Scoping and Problem Formulation” phase into their practices, to evaluate the purpose of a risk analysis prior to undertaking it. The report strongly urged continuing efforts in this area. It is also clear that this early phase is useful for ensuring that the specific problem to be dealt with is completely delineated and understood by all stakeholders. These important recommendations are applicable in all decision-making contexts involving the use of technical information and analysis, certainly including those that fall within the mandate of DHS.
EPA’S DEVELOPMENT OF RESOURCES TO SUPPORT RISK ANALYSIS AND MANAGEMENT
The EPA, over the past three decades, has devoted much effort to building a capacity for risk analysis directed at supporting the decision needs of its various regulatory programs. The model for this development has been based on the concept, first elaborated in the 1983 NRC report, that information arising from research and other sources is not useful without evaluation and synthesis, the processes that constitute risk analysis. Thus, an internal staff, composed of all the necessary scientific disciplines, is now available to conduct risk analyses on behalf of the agency’s decision makers. The staff is augmented by some degree of contractor support, but the agency has found that a strong internal risk analysis capacity is essential. The internal staff not only conducts risk analyses but also develops and maintains risk analysis guidelines. As noted, these guidelines are essential to ensuring the scientific standing and consistency of agency assessments. Internal EPA experts are devoted to conducting analyses (following the guidelines) and are also involved in the development of new methods for such analyses.
The research efforts of the EPA are intended to provide the data and knowledge necessary for the development of needed risk analyses. As many reports from the National Academies, including the seminal 1983 report, have emphasized, the conduct of risk analyses reveals clearly the gaps in knowledge and data that need to be filled by research. Risk analysis is thus not only a guide to decisions, but also a sound guide to research. The EPA has adopted this concept, and it would seem to be generally applicable to any institutional context in which a research and data development effort is required to support risk analysis. As with any similar efforts undertaken by large, complex institutions, implementation of such risk-based research programs is bound to be imperfect, but it can be strengthened if an internal staff, focused on the conduct and uses of risk analysis, is firmly entrenched in the life of the agency.
Finally, the use of scientific peer review has become critical to ensuring the quality and utility of EPA risk analyses. Scientific peer review and advisory panels are firmly embedded at several different levels within the EPA.