Consistency with National Research Council and Other Reports
The committee was asked to evaluate the consistency of the proposed risk assessment bulletin issued by the Office of Management and Budget (OMB) with the recommendations of previous committees of the National Research Council (NRC) and those of other expert organizations. Some recommendations arose in general studies of risk assessment and its relationship to other activities, whereas others came from studies directed at specific substances or classes of substances. Some studies are cited by OMB as the source of guidance on general principles, and others are cited to document specific statements. The committee has focused on the studies concerning broader principles, although other studies were also useful. The reports reviewed by the committee are primarily those cited by the bulletin in footnotes 4 and 5. In this report, the committee refers to the entire collection as “the cited studies.”
In this chapter, the committee identifies a set of common themes and recommendations that emerge from the cited studies and bear most directly on the bulletin. The content of the bulletin, most especially that pertaining to standards for risk assessment, is discussed in relation to the cited studies, and areas lacking consistency are identified.
Consistency with the cited studies is important for ensuring the quality and continuing advance of risk assessment. Those studies have drawn on the collective skills and experience of the leading practitioners of risk assessment, and the committee believes that any departures should be fully explained and be based on a consensus of scientific experts.
THEMES AND RECOMMENDATIONS FROM CITED STUDIES
The major recommendations that have emerged from nearly 25 years of study of risk assessment have much in common. The seminal NRC study on risk assessment, cited several times by OMB, yielded the 1983 report Risk Assessment in the Federal Government: Managing the Process (the Red Book).
The recommendations set forth in the 1983 Red Book appear to have been widely accepted in the regulatory and public health communities; indeed, all the cited studies on risk assessment that have followed the 1983 report appear to have adopted the principles first presented in it. Those later reports have done much to clarify and solidify thinking about risk assessment and related activities, but all seem to adopt or accept the following:
The process of risk assessment is carried out within a framework in which diverse sets of scientific information are organized and evaluated for specific uses. The first step, generally called hazard identification, involves assembling and evaluating information on the harmful properties of the substance or activity under review. The second step, called dose-response assessment, describes the relationship between exposure to the substance or activity and the nature and extent of resulting harm. The third step, usually termed human exposure assessment, describes the nature and extent of human exposure to the substance or activity. The fourth step, called risk characterization, integrates the information assembled in the first three steps to assess the likelihood that the hazardous properties of the substance or activity will be expressed in humans. Risk characterization generally has both qualitative and quantitative components, and it also includes a description of the uncertainties in the assessment (NRC 1983). The results of risk characterization have many uses that lie outside the bounds of risk assessment.
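The four-step structure can be made concrete with a deliberately simplified sketch. All functions, parameter names, and numbers below are hypothetical illustrations chosen for exposition; they do not represent any agency's actual models or defaults.

```python
# Illustrative sketch of the four-step risk assessment framework (NRC 1983).
# Every model and number here is hypothetical, for exposition only.

def hazard_identification(evidence):
    """Step 1: decide whether the assembled evidence indicates a hazard."""
    return any(study["adverse_effect"] for study in evidence)

def dose_response(dose, slope=0.02):
    """Step 2: relate dose to probability of harm. A linear model is used
    here purely as an example of a default assumption."""
    return min(1.0, slope * dose)

def exposure_assessment(concentration, intake_rate, body_weight):
    """Step 3: estimate the dose actually received (mg/kg-day)."""
    return concentration * intake_rate / body_weight

def risk_characterization(evidence, concentration, intake_rate, body_weight):
    """Step 4: integrate the first three steps into an estimate of risk."""
    if not hazard_identification(evidence):
        return 0.0
    dose = exposure_assessment(concentration, intake_rate, body_weight)
    return dose_response(dose)

# Hypothetical inputs: two studies, one reporting an adverse effect.
evidence = [{"adverse_effect": True}, {"adverse_effect": False}]
risk = risk_characterization(evidence,
                             concentration=0.5,  # mg/L in drinking water
                             intake_rate=2.0,    # L/day
                             body_weight=70.0)   # kg
print(f"estimated individual risk: {risk:.2e}")
```

The sketch makes visible the point emphasized in the cited studies: the final characterization is only as defensible as the models and assumptions (here, the linear dose-response default) adopted in the intermediate steps.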
The same conceptual framework for risk assessment and the four-step analytic process were adopted and promoted in NRC’s Science and Judgment in Risk Assessment (1994), in Understanding Risk: Informing Decisions in a Democratic Society (1996), and in all the other reports cited in footnote 5 in the OMB bulletin.
The Red Book clarified the distinction between risk assessment and risk management, and the same distinction is maintained in the other cited studies. The first recommendation of the Red Book is the following:
We recommend that regulatory agencies take steps to establish and maintain a clear conceptual distinction between assessment of risks and consideration of risk management alternatives; that is, the scientific findings and policy judgments embodied in risk assessments should be explicitly distinguished from the political, economic, and technical considerations that influence the design and choice of regulatory strategies (NRC 1983, p. 7).
Ideally, relevant scientific data and other information are available for any particular risk assessment. Scientific principles and risk assessment practices establish unequivocally that such data are always preferred to the use of the defaults and inference options discussed here and elsewhere in this report. However, virtually all risk assessments are undertaken in the absence of complete knowledge and information. Risk characterization is thus necessarily uncertain. Moreover, in most cases the results of risk assessments are not scientifically testable in the traditional sense. In the absence of relevant knowledge and information, risk assessments often can be completed only if models and assumptions of uncertain scientific standing are adopted. For that reason, the Red Book committee and others pointed to the need for agencies engaged in risk assessments to specify and make explicit the models and assumptions they would use in advance of the conduct of specific risk assessments.
The Red Book used the term inference options to describe alternative models and assumptions that are needed to complete risk assessments in the absence of complete scientific information or knowledge. Inference guideline was defined as “an explicit statement of a predetermined choice among alternative…options” (NRC 1983, p. 4). The predetermined choices of models and assumptions have come to be called defaults. Such standardized defaults are critical if one is to avoid case-by-case manipulations of individual risk assessments to achieve predetermined risk management outcomes. The NRC report Science and Judgment in Risk Assessment (1994) urged the Environmental Protection Agency (EPA)1 to pursue the issue of justifying and specifying the defaults more aggressively in its guidelines for risk assessment. The committee noted that “EPA does not fully explain in its guidelines the basis for each default option” (NRC 1994, p. 7). Selection of defaults has both scientific and policy components, although risk assessment policies are different in kind and distinguishable from those involved in risk management decisions, as indicated in the first recommendation of the Red Book, which clarified the distinction between risk assessment and risk management.
Although recognizing the need for defaults to achieve consistency and to avoid case-by-case manipulations of risk assessments, the Red Book committee and other committees have urged that the agencies incorporate procedures that allow departures from the defaults in specific cases in which a scientific basis for alternative assumptions or models can be found. Flexibility to incorporate new scientific knowledge, when it becomes available, is urged in most expert studies of risk assessment. The Science and Judgment committee examined the question of whether EPA had “clear and consistent principles for…departing from default options” and found the agency wanting on this point (NRC 1994, p. 79).
The cited studies all emphasize the important issue of uncertainty in risk assessment. Some focus on issues related to its evaluation and expression for purposes of informing risk managers and the public. Others focus on the need to describe uncertainties so that the research community is informed about what is needed to improve risk assessments. Most of the studies discuss both qualitative (descriptive) and quantitative aspects of uncertainty in risk assessment, but none seems to offer highly explicit guidance to agencies. Several caution that the level of uncertainty analyses and description should be influenced by the needs of decision-makers in specific cases. The difficult problem of uncertainty analysis is well-described in Understanding Risk (1996):
Much attention has been given to quantitative, analytic procedures for describing uncertainty in risk characterizations. Participants in decisions need to consider both the magnitude of uncertainty and its sources and character: whether it is due to inherent randomness or to lack of knowledge, and whether it is recognized and quantifiable, recognized and indeterminate, or perhaps unrecognized. Unfortunately, the unrecognized sources of uncertainty—surprise and fundamental ignorance about the basic processes that drive risk—are often important sources of uncertainty, and formal analysis may not help if they are too large. Thus, uncertainty analysis should be conducted with care and in conjunction with deliberation and in full awareness of its limitations, especially in the face of unrecognized sources of uncertainty. It is best to focus on uncertainties that matter most to ongoing processes of deliberation and decision. The users of uncertainty analysis should remember that both the analysis and people’s interpretations of it can be strongly affected by the social, cultural, and institutional context of the decision (NRC 1996, p. 5).
These cautionary notes regarding the descriptions of uncertainties in risk assessments are echoed in other cited studies, but none offers explicit guidance on the analytic methods best suited to evaluate and express uncertainties in specific contexts.
Although the cited studies generally do not offer explicit guidance on the conduct of uncertainty analysis, much work has been done in probabilistic risk assessment (PRA) to develop standards that explicitly incorporate such analysis. Perhaps the leading efforts are the PRA standards for nuclear power plants that have been developed by various professional societies. For example, the American National Standards Institute, in conjunction with the American Nuclear Society and the American Society of Mechanical Engineers, has developed risk assessment standards for internal and external initiating events at nuclear power plants (ASME 2002; ANS 2003). OMB does not cite those standards or other such efforts. Whether and to what extent the technical details of those standards apply to other types of risk assessments—those for chemical toxicity, for example—is not clear, but it is possible that some of the methods used in them are applicable.
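One widely used quantitative technique in PRA and related fields is Monte Carlo propagation of input uncertainty through a risk model. The sketch below shows the basic idea; the linear model, the lognormal distributions, and all parameter values are hypothetical assumptions chosen for illustration, not drawn from any of the cited studies.

```python
import math
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def sample_lognormal(median, gsd):
    """Draw from a lognormal distribution specified by its median and
    geometric standard deviation (a common, but here hypothetical, choice)."""
    return median * math.exp(random.gauss(0.0, math.log(gsd)))

def risk_model(slope, dose):
    """Hypothetical linear risk model; both inputs are treated as uncertain."""
    return slope * dose

# Propagate input uncertainty by simple Monte Carlo sampling.
samples = sorted(
    risk_model(sample_lognormal(0.02, 2.0),   # slope, per mg/kg-day
               sample_lognormal(0.014, 1.5))  # dose, mg/kg-day
    for _ in range(10_000)
)

median = samples[len(samples) // 2]
p05 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"median risk ~ {median:.1e}; 90% interval ~ [{p05:.1e}, {p95:.1e}]")
```

Note that such an analysis captures only the recognized, quantifiable uncertainty in the sampled inputs; as the passage from Understanding Risk quoted above cautions, it says nothing about model uncertainty or about unrecognized sources of uncertainty.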
A consistent theme in the cited studies concerns the value of well-described purposes for risk assessments, balanced and clear presentations of all relevant data, and the bases of inferences drawn from the data. Most reports offer general guidance on these matters but do not offer highly explicit instructions. An important theme regarding the risk assessment process is given explicit treatment in Understanding Risk:
The analytic-deliberative process leading to a risk characterization should include early and explicit attention to problem formulation; representation of the spectrum of interested and affected parties at this early stage is imperative. The analytic-deliberative process should be mutual and recursive. Analysis and deliberation are complementary and must be integrated throughout the process leading to risk characterization: deliberation frames analysis, analysis informs deliberation, and the process benefits from feedback between the two (NRC 1996, p. 6).
Some NRC reports on specific substances (perchlorate, methylmercury, and arsenic) were reviewed. Those reports appear to adhere to the general principles set forth above, although they vary in their presentations of risk results and uncertainties. All discuss uncertainties, but two provide only “point” estimates of risk, and the third (on arsenic) provides a relatively limited range of risk estimates based on application of a single dose-response model. Thus, uncertainties are not expressed quantitatively but are discussed qualitatively to varying degrees.
Other themes and recommendations can be found in the many cited studies, but those described in the foregoing are judged to be critical to an understanding and evaluation of the standards proposed in the OMB bulletin.
INFLUENCE OF THE CITED STUDIES
It is difficult to judge the influence of all the many studies on the practices of federal agencies other than EPA. It appears that only EPA has explicitly adopted the general recommendations that have emerged from the reports, and the agency has put much effort into developing guidelines for risk assessment that, at least in principle, are consistent with the themes and recommendations that are described here. (Whether EPA consistently adheres to the guidelines is a different matter and is beyond the scope of the committee’s review.) The need for greater consistency among federal agencies in risk assessment approaches might be satisfied by the development of government-wide guidelines as they have been developed in EPA. Whether the bulletin accomplishes that objective is discussed below.
Although the issue is not explicitly discussed in any of the cited studies, there is no obvious reason why the principles and themes elucidated in them should apply only to risk assessments conducted by federal agencies. Most are directed more broadly at the risk assessment community, and that includes operators of facilities who are seeking licenses and manufacturers submitting risk-related data and assessments to agencies to gain product approvals, licenses, or registrations.
THE OFFICE OF MANAGEMENT AND BUDGET BULLETIN: CONSISTENCY WITH MAJOR THEMES AND RECOMMENDATIONS
The bulletin emphasizes the need for a clear declaration of objectives, for discussion between risk assessors and the users of risk information, and for ensuring that assessments yield results that are faithful to underlying scientific knowledge and are useful for decision-making. In its call for balanced presentations of all relevant data and the inferences drawn from them, the bulletin is consistent with the many expert recommendations that have shaped current risk assessment practice. And as a general matter, the bulletin’s requirements for a thorough characterization of risk and its associated uncertainties and for a level of effort “commensurate with the importance of the risk assessment” (OMB 2006, p. 23) reflect what most expert studies have recommended. In those many general respects, the bulletin is consistent with the themes and recommendations outlined above.
However, the proposed bulletin is inconsistent with those recommendations on a number of important issues. In its call for formal uncertainty analyses, it attempts to move the standards for risk assessment beyond what the cited studies have provided. Furthermore, the bulletin does not read as a guideline but rather as a highly prescriptive mandate. There is therefore a danger that in its present form the bulletin may reduce rather than enhance the quality and objectivity of agency risk assessments. The committee’s principal concerns are the following:
The bulletin does not recognize the importance of what several NRC committees have called policy judgments in risk assessment. As described above, there is a continuing need for defaults to complete risk assessments; without a consistent and justified set of defaults (based on both scientific and policy considerations), there is a danger that risk assessments will be manipulated case by case through the arbitrary selection of models and assumptions that guarantee a predetermined outcome. NRC committees have strongly recommended that agencies throughout the government develop guidelines that justify and specify general defaults. The bulletin’s inattention to this issue might open the door to less well-standardized risk assessment practices.
As described above, most expert studies have recognized that agencies should evaluate new scientific information and knowledge that suggest that particular defaults may no longer be justified and need to be changed. That may happen in the general case (as, for example, in EPA’s recent proposal to change the model for “scaling” animal doses to equivalent human doses) or in the case of specific substances or activities (as, for example, in EPA’s adoption of a nonlinear model for low-dose extrapolation of carcinogenicity data for chloroform [EPA 2001]). The move away from defaults, either general or substance-specific, has been plagued with difficulties in that there are always questions regarding the scientific rigor with which an alternative model or assumption has been established. One interpretation of the bulletin is that it implicitly recognizes that proposals to depart from defaults often result in protracted and contentious scientific debates and that requiring agencies always to report risk results on the basis of alternative models and assumptions might both circumvent the debate and provide more balanced views of risk. If that is what OMB is calling for, it might be better presented in the context of the use of defaults, as described here, with a requirement to consider alternatives to the defaults in specific cases when new data become available. Such a presentation by OMB would be consistent with the body of expert recommendations described.
Alternative models and assumptions based on new scientific data will, like the defaults they may replace, always have some degree of scientific uncertainty. In considering such alternatives, agency risk assessors should not be placed in the position of having to decide “how much evidence is sufficient” to adopt the alternative. Rather, they should attempt to describe the scientific bases of a proposed alternative and describe how certain it is. Deciding whether it is “sufficiently certain” to replace a default or is to be given more weight, equal weight, or less weight than the default may be seen as requiring a combination of scientific and policy considerations that go beyond risk assessment. With this approach, risk assessors do not discard alternative models and assumptions unless they clearly lack substantial scientific merit; rather, they attempt to judge and describe the relative scientific merits. If the bulletin were recast to suggest such consideration of alternatives, it would be more consistent with past expert recommendations.
The bulletin proposes “standards” for the evaluation and description of uncertainties in risk characterization that lack clarity and may foster a reduction in the quality and consistency of risk assessments. Its discussion of “risk ranges” and “central or expected risks” is superficial (see Chapter 4) and mandates analyses that could, if not done with care, yield misleading estimates. The scientific literature on uncertainty analysis has not been translated into explicit and peer-reviewed guidelines except for the standards that have been developed for PRA (see Chapter 4). In an apparent leap beyond what is offered in the previous reports, OMB is proposing approaches that might be reduced to inappropriate statistical analyses (see the next paragraph). Explicit guidelines are needed to support scientifically sound and useful characterization of uncertainties. Some “demonstration projects” on this topic might be called for, and much might be learned from the PRA standards discussed above and more fully in Chapter 4. In many fields of risk assessment, there is much to be learned about this topic; although the many cited studies clearly point to the need for well-conceived uncertainty analyses, none offers agencies much more than general guidance. Indeed, the NRC reports on perchlorate, arsenic, and methylmercury do not contain the type of uncertainty analyses, with an emphasis on central estimates and risk ranges, that the bulletin appears to mandate. The committee also notes that the data needed to conduct the proposed analyses often do not exist.
Several NRC committees have clearly warned that descriptions of “central estimates” of risk, when they are applied to models for high-dose to low-dose extrapolation, have little meaning—the model that best describes the “fit” of the observed dose-response relationship cannot be claimed to describe accurately the dose-response relationship at low doses. If this type of low-dose extrapolation is intended by OMB to yield “central estimates,” the requirement will produce misleading results. If “central estimate” is used only in connection with observed data, it can be highly valuable if properly calculated. The bulletin lacks clarity on this point.
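The committee's point about low-dose extrapolation can be illustrated numerically: two dose-response models that fit the observed high-dose data equally well can yield low-dose risk estimates differing by orders of magnitude, so neither fit identifies a meaningful "central estimate" at environmental doses. The models and numbers below are hypothetical.

```python
# Two hypothetical dose-response models calibrated to the same observed
# high-dose data point: 10% excess risk at a dose of 100 (arbitrary units).

def linear_model(dose):
    """Linear no-threshold extrapolation: risk proportional to dose."""
    return 0.10 * dose / 100.0

def quadratic_model(dose):
    """A sublinear (quadratic) alternative fitted to the same point."""
    return 0.10 * (dose / 100.0) ** 2

# Both models agree exactly in the observed high-dose range...
assert abs(linear_model(100.0) - quadratic_model(100.0)) < 1e-12

# ...but diverge sharply when extrapolated to an environmental dose of 0.1.
low_dose = 0.1
print(f"linear:    {linear_model(low_dose):.1e}")     # 1.0e-04
print(f"quadratic: {quadratic_model(low_dose):.1e}")  # 1.0e-07
```

At the low dose the two estimates differ by a factor of 1,000, although the data cannot distinguish the models; averaging them or labeling either one "central" would convey a precision the underlying science does not support.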
The bulletin says little about the biologic bases of the various models and assumptions that might be used in risk assessments and about how judgments regarding their relative scientific merits are to be encompassed in the expressions of risk and of uncertainty. Perhaps the bulletin intends that such efforts be inherent in the analyses called for, but it could also be read as simply calling for the use of alternative statistical models that have unknown biologic bases. The OMB ambition with regard to uncertainty analyses seems appropriate; but because there are no well-examined and widely accepted guidelines for such analyses outside of some narrow applications of PRA, there is a risk that the bulletin’s requirements will be followed in a rote way and compromise the quality of risk assessments. That outcome also runs the risk of creating even more confusion among the users of risk assessments, and the public they serve, than is now the case. Before steps are taken to mandate the vague proposals set forth by OMB, the committee urges the development of rigorous scientific methods for uncertainty analysis that meet the information needs of decision-makers.
With respect to chemical toxicity, much of the effort of risk assessors is devoted to substances that are thought to act through threshold mechanisms. EPA reference doses for toxicity, for example, are the products of such efforts. A variety of approaches to uncertainty analysis for such measures could be suggested, but which of these would be preferred is unclear. Further study would be needed before an approach could be selected.
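The reference-dose approach for threshold toxicants can be sketched in simplified form: an experimental no-observed-adverse-effect level (NOAEL) is divided by uncertainty factors. The NOAEL value and factor choices below are hypothetical; actual derivations involve case-specific scientific judgment.

```python
# Simplified sketch of a reference dose (RfD) for a threshold toxicant.
# All values are hypothetical, for illustration only.

def reference_dose(noael, uncertainty_factors):
    """RfD = NOAEL divided by the product of uncertainty factors (mg/kg-day)."""
    product = 1
    for uf in uncertainty_factors:
        product *= uf
    return noael / product

# Hypothetical animal NOAEL of 5 mg/kg-day, with two common default
# factors: 10x for animal-to-human extrapolation and 10x for
# variability among humans.
rfd = reference_dose(noael=5.0, uncertainty_factors=[10, 10])
print(f"RfD = {rfd} mg/kg-day")  # prints: RfD = 0.05 mg/kg-day
```

The sketch also shows why uncertainty analysis for such measures is awkward: the tenfold factors are standardized defaults with both scientific and policy components, not statistically estimated parameters with well-defined distributions.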
The committee is not suggesting that risk characterizations ignore uncertainties or omit reasonable depictions of the ranges of risk that might be suggested by the data. Indeed, as stated above, the committee urges the use of defaults and, when possible, alternatives to them. But the bulletin seems to go well beyond these modest approaches and can be read as calling for more fully quantitative expressions of the uncertainties in risk than have been offered in most applications of risk assessment. In the absence of clear guidance regarding the conduct of uncertainty analysis, there is a danger that agencies will produce meaningless and confusing ranges of risk estimates and that the development of risk assessments will be delayed to no clear benefit. The possibility of large inconsistencies in risk assessments between and even within agencies is also increased in the absence of explicit and peer-reviewed guidance on this issue.
The bulletin’s inclusion, in its definition of risk assessment, of agency efforts that are directed only to specific steps of the risk assessment process is inconsistent with the definition preferred by the cited studies. Furthermore, OMB redefines risk assessment to include some activities associated with risk management decision-making. See Chapter 3 for a further discussion of OMB’s definition of risk assessment.
The cited NRC studies have focused primarily on general principles for risk assessment and have left development of specific guidelines to the agencies, whereas the proposed bulletin has attempted to prescribe specific approaches. In this respect, the bulletin is inconsistent with the cited studies.
Given the points elaborated above, the committee finds that the bulletin is inconsistent with the cited studies in important ways. It adopts a new definition of risk assessment and ignores without explanation the important role that policy judgments play in risk assessment. Its call for formal analyses of uncertainties and for undefined “central estimates” may, in the absence of peer-reviewed technical guidance on the evaluation and expression of uncertainties, result in risk characterizations of reduced rather than enhanced quality and consistency. These are serious concerns because any attempt to advance the practice of risk assessment that does not lean heavily on nearly 25 years of expert study of the topic and reflect scientific consensus is likely to produce the opposite effect. The chapters that follow in this report provide further discussion of those concerns.
REFERENCES
ANS (American Nuclear Society). 2003. External-Events PRA Methodology. ANSI/ANS-58.21-2003. American National Standard. American Nuclear Society. December 2003.
ASME (American Society of Mechanical Engineers). 2002. Standard for Probabilistic Risk Assessment for Nuclear Power Plant Applications. ASME RA-S-2002. American Society of Mechanical Engineers. December 20, 2002.
EPA (U.S. Environmental Protection Agency). 2001. Toxicological Review of Chloroform (CAS No. 67-66-3): In Support of Summary Information on the Integrated Risk Information System (IRIS). EPA/635/R-01/001. U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://www.epa.gov/iris/toxreviews/0025-tr.pdf [accessed Oct. 2, 2006].
NRC (National Research Council). 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press.
NRC (National Research Council). 1994. Science and Judgment in Risk Assessment. Washington, DC: National Academy Press.
NRC (National Research Council). 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.
OMB (U.S. Office of Management and Budget). 2006. Proposed Risk Assessment Bulletin. Released January 9, 2006. Washington, DC: Office of Management and Budget, Executive Office of the President [online]. Available: http://www.whitehouse.gov/omb/inforeg/proposed_risk_assessment_bulletin_010906.pdf [accessed Oct. 11, 2006].
PCCRARM (Presidential/Congressional Commission on Risk Assessment and Risk Management). 1997. Risk Assessment and Risk Management in Regulatory Decision-Making, Vol. 2. Washington, DC: U.S. Government Printing Office [online]. Available: http://www.riskworld.com/Nreports/1997/riskrpt/volume2/pdf/v2epa.PDF [accessed Oct. 3, 2006].