OMISSIONS FROM THE BULLETIN
Omissions and incomplete information limit the utility of the proposed Office of Management and Budget (OMB) bulletin as balanced and comprehensive risk assessment guidance. Specifically, while prescribing standards to promote quality in a broad array of analyses, the bulletin is silent on several relevant topics and incomplete on others. As documented in comments from affected agencies to the committee, some omissions are related to substantive aspects of the risk assessment process, others to implementation issues.
The committee identified several risk assessment topics discussed neither in the OMB bulletin nor in this report. Examples include gene-environment interactions, the problem of mixtures, and cumulative exposure. It would be impractical and inappropriate for this committee to attempt to address all risk assessment issues that might be relevant to one or more federal agencies. Guidance on those and other issues, however, is important, and their importance led to the recommendation that OMB encourage federal agencies to develop individual guidelines tailored to their own needs and practices (see Chapter 7).
The proposed bulletin acknowledges the multiplicity of disciplines involved in the risk assessment process, including “engineering” (OMB 2006, p. 5). It also acknowledges the diversity of risk assessments conducted within the federal government, including “failure analysis of
physical structures” (OMB 2006, p. 7), and it is clearly intended to cover such assessments. Despite these introductory statements, the bulletin focuses mainly on biologic systems, with an emphasis on human health risk assessment, and provides little guidance related to physical (engineered) systems.
In fact, the bulletin gives only minimal attention to risk assessments for which the end point is major failure of an engineered system. The vast majority of examples presented (and authorities cited) apply to toxicologic and other human health end points without corresponding attention to the failure of engineered systems. Moreover, it is unclear whether the bulletin’s occasional mention of such failures refers mainly to human health consequences (for example, death or injury from a nuclear power plant accident) or includes the probability and consequences of the engineered failure itself (for example, bridge collapse or toxic release from a chemical plant without estimating the extent of related human health effects).
The bulletin fails to take advantage of the concepts and methods developed through the engineering community’s investment of hundreds of millions of dollars in quantitative risk assessment for physical systems. Specifically, in its references to risk studies, the bulletin does not recognize the extensive and often effective efforts of the private sector in risk assessment of such subjects as offshore oil platforms, chemical plants, nuclear reactors, and waste sites. Those studies have positively influenced risk assessment in the federal government. The incomplete and unbalanced approach to engineering risk assessment (as well as ecologic and other types of risk assessment) belies the bulletin’s stated objective of improving the quality of risk assessment across the federal government.
The bulletin has multiple standards and requirements related to “populations” and “subpopulations” (OMB 2006, Section IV, V), but these standards are incomplete in relation to sensitive subpopulations. Specifically, the only reference to sensitive subpopulations, such as “children or the elderly,” appears not in the bulletin, but in the supplementary information (OMB 2006, p. 19). Moreover, the strong emphasis on central estimates in the standards themselves means that the most vulnerable people in a population—who, almost by definition, lie in the tails
of the probability distribution—might be underrepresented, depending on the characterization of the central estimate.
The bulletin's emphasis on central estimates (OMB 2006, Section V), standards calling for a “range of plausible risk estimates” (OMB 2006, Section IV), and cautions against exaggeration and overstatement (OMB 2006, Section IV) could be viewed as restricting use of data from the tails of the probability distribution on the grounds that such information might generate risk estimates considerably higher than central tendency or general population estimates. If federal agencies interpret (or possibly misinterpret) the bulletin in that way, decision-makers could be deprived of risk-related information on vulnerable segments of the population and the potential impacts of measurable exposures that have not been identified as adverse (OMB 2006, Section V).
Such information on the variability of effects across potentially affected populations—due to differences in sensitivity, exposure, or both—is essential to decision-making. With that in mind, experienced risk assessors characterize uncertainty (OMB 2006, p. 17) and variability (OMB 2006, p. 19) as calling not only for the quantitative estimates required by the bulletin but also for qualitative evaluation of hazard and exposure to identify special populations, such as infants, children, the elderly, subsistence subpopulations, and environmental-justice subpopulations, for which separate risk estimates may be appropriate. However, if implemented literally and in the absence of clarifying language, the bulletin may be interpreted as requiring only quantitative analyses and only for the general population. Both restrictions are clearly contrary to prior NRC guidance (NRC 1994).
EXEMPTIONS, WAIVERS, AND DEFERRALS
Unless an agency determines otherwise, the bulletin expressly excludes from its coverage assessments related to “licensing, approval and registration processes for specific product development activities,” “inspections relating to health, safety, or environment,” and an “individual product label” (OMB 2006, p. 10). Those provisions appear to exempt a broad set of risk assessments, including some Food and Drug Administration (FDA) assessments related to pharmaceuticals, assessments required under the Federal Insecticide, Fungicide, and Rodenticide Act for
registering pesticides, and U.S. Department of Agriculture inspection of products destined for the food supply.1
Without providing reasons, the bulletin excludes from its requirements assessments submitted by manufacturers seeking product approvals or registrations—an exclusion that appears to apply to such substances as pesticides. As noted in Chapter 2, the recommendations set forth in the various expert studies apply generally to all risk assessments, and the committee finds no basis in the bulletin for blanket exclusion of assessments related to product approvals and registrations from standards designed to improve the quality of agency assessments. Responding to a committee question in that regard, OMB explained that the Information Quality Guidelines (67 Fed. Reg. 8460) do not apply to adjudicatory matters. That response is not consistent with the overarching objective of seeking higher-quality risk assessment, and the committee’s concerns remain.
In written comments to the committee, several agencies noted omissions in the provisions related to exemptions, waivers, and deferrals. For example, although the bulletin allows an agency head to waive or defer some or all of the requirements (OMB 2006, Section VIII), the Centers for Disease Control and Prevention (CDC) notes that “the Bulletin provides little or no insight as to how an agency would justify a deferral or waiver, and it is unclear who decides whether an agency’s rationale is ‘compelling’ or whether agencies may be challenged on this issue” (see Appendix E, p. HHS-21). Similarly, EPA notes that the bulletin “does not outline any roles and responsibilities for…resolution of disagreements between agencies and OMB, certifications, waivers, exemptions, and other areas. The document should describe how interactions between OMB and the agencies will work in implementing the Bulletin.… The proposed Bulletin does not describe any criteria for granting a waiver or for providing for exemptions” (see Appendix E, pp. EPA-13-14). FDA points out that the bulletin “omits a ‘time sensitive’ health or safety exception and provides only a weak agency deferral and waiver authority that requires the agency to comply with Bulletin requirements as soon as practicable” (see Appendix E, p. HHS-25). The committee finds that the bulletin’s credibility depends partly on the extent to which affected agencies can expect even-handed and predictable administration of provisions that create exceptions, waivers, or deferrals of general policies. Agency comments suggest that the bulletin provides little confidence in that regard.

1. Agency comments on the exemptions were mixed. For example, the Department of the Interior (DOI) expressed concern that the exemption for single-product toxics labeling “might lead to human health and environmental risks that could be foreseen if the exemption was not in place” (see Appendix E, p. DOI-3). In contrast, EPA not only agreed with the exemptions for the program for registering (licensing) and reregistering pesticides but also urged extending the exemption to risk assessments in support of food tolerances (see Appendix E, p. EPA-16).
THE ROLE OF RISK ASSESSMENT POLICY AND DEFAULT OPTIONS
The bulletin and the supplementary information acknowledge the role of “choice” and “professional judgment” in the risk assessment process (OMB 2006, pp. 3, 19, 20, 21, 25) but omit discussion and guidance on the role of such judgments in the selection of defaults for risk assessment. That omission is particularly striking in view of frequent citations of the 1994 National Research Council (NRC) report Science and Judgment in Risk Assessment, which, as the title indicates, gives special attention to the role of professional judgment in making risk assessment decisions in the absence of relevant experimental or field data—a circumstance common to many risk assessments.
At the outset, the committee stresses that relevant data are always preferred and are to be used when available. However, as described in Chapter 2, default options and inference judgments have a legitimate role in the risk assessment practice of many federal agencies. The 1994 NRC report explains that such default options “are used in the absence of convincing scientific knowledge on which of several competing models and theories is correct.… The choice of such principles goes beyond science and inevitably involves policy choices on how to balance such criteria” (NRC 1994, p. 7, emphasis added). As a result, that report emphasizes two related but distinct components in its recommendation that an agency “clearly state the scientific and the policy basis for each default option” (NRC 1994, p. 8, emphasis added).
Informed in the first instance by the available data and analyses, risk assessment policies can have a strong, sometimes pivotal influence on the choices and judgments identified in the bulletin. Familiar examples include choices regarding use or nonuse of data from animal models, uncertainty defaults (for example, one vs 100 vs 1,000), identification of populations of interest for any particular risk assessment (for example, general population vs sensitive subpopulation vs maximally exposed individuals), and choices among alternative dose-response models based on more or less conservative assumptions.
EPA notes that omission in its comments: “Scientific ‘defaults’ or ‘inference guidelines’ play an important role for EPA in providing a consistent and peer reviewed means of addressing recurring, fundamental issues of science policy in its risk assessments. The proposed Bulletin does not address this aspect of risk assessment practice” (see Appendix E, p. EPA-14). The Fish and Wildlife Service (FWS) offers a specific example: “The Service is concerned that the Bulletin appears to favor ‘central tendencies’ or expected outcomes as the best approach or the best science. It is the view of the Service that the best science is that which is objective, explicit and complete and the end or parts of the distribution that we focus on is guided by policy and social values” (see Appendix E, p. DOI-2).
Policy considerations are particularly important for assessments based on data from the biologic sciences but may also be important for other categories of risk assessment. For example, DOI recommends “expanding the discussion of risk assessments for physical structures to include…expert elicitation (where expert elicitation provides probabilistic valuation integrating data, analysis, experience and professional judgment when statistical data is not readily available)” (see Appendix E, p. DOI-2).
Although the bulletin stresses the importance of describing and analyzing variability and uncertainty, those discussions focus on quantitative factors without recognizing that, in the absence of data, any choice of defaults for use in risk assessment requires policy judgments. The bulletin thus emphasizes completeness and transparency as to the technical aspects of risk assessment but omits any requirements for comparable completeness and transparency regarding an agency’s reasons for selecting from among the available default options.
EXTERNALLY GENERATED RISK ASSESSMENTS
Public participation in and contribution to risk assessment and decision-making are hallmarks of the regulatory process. Such participation takes many forms, from one-page letters urging consideration of a constituent’s view of a risk assessment issue to manuscripts of new, yet-to-be-published studies to peer-reviewed alternative assessments developed by highly regarded academics and other science professionals. By law and practice, the agencies consider the external submissions along with data, information, and analyses developed in or commissioned by the agencies.
Although Congress authorizes designated federal agencies, such as FDA and EPA, to require specific data related to product approvals and registrations, the bulletin does not make its proposed standards applicable to externally generated assessments, such as alternative assessments submitted to federal agencies as comment on risk assessments underlying proposed regulations. However, because externally generated assessments or conclusions from them are routinely submitted for agency consideration and use, a question arises as to the entity—submitter or agency—responsible for ensuring attention to and compliance with the requirements in the bulletin.
The bulletin does not address that question. It is important because of potential impacts on time and resources (staffing and funding). Generally, external assessments are submitted when the risk assessment and rulemaking are under way in line with previously established budgets and schedules. If an agency is responsible for determining whether information received from the public and used as part of a risk assessment complies with OMB requirements, additional time and resources would be required. Alternatively, if the submitter is responsible, the agency would know when the information is received whether it meets the standards and thus whether it can be considered without additional analysis and delay.
In a best-case situation in which the submitter is responsible for ensuring compliance with the bulletin, useful information conforming to the requirements could be immediately woven into a risk assessment (for example, see Appendix E, p. DOD-12). In a worst-case situation in which the agency is responsible, an agency could devote staff and time to conducting an analysis of information that proved to be nonconforming and possibly incur substantial delays in completing the overall assessment.
Recognizing that the bulletin’s requirements apply only within the government, many agencies nevertheless responded to an NRC question on the issue. CDC indicated that “it would be helpful if quantitative assessments conducted by external groups met the same requirements when those assessments are used by a government agency” (see Appendix E, p. HHS-24). The Department of Defense noted that “it would be beneficial if contractors and private industry met the OMB Proposed Bulletin requirements” (see Appendix E, p. DOD-12). EPA noted that “the Agency has relied upon assessments conducted by external groups, including NRC panels, the World Health Organization, the Canadian
government, ATSDR, and CAL-EPA. In general, their conformity with the requirements of the Bulletin, as feasible and appropriate, would be a laudable goal” (see Appendix E, p. EPA-16). Similar comments were received from several other agencies (see Appendix E, pp. FWS-12, OSHA-7, DOT-10, DOD-11-12, CPSC-6, and HUD-2).
Responding to a committee question on the issue, OMB contends that it is the responsibility of the federal government to make certain that such assessments meet relevant standards. The committee does not dispute that externally generated assessments and related risk information incorporated into the agency assessment process should conform to standards related to scientific quality and objectivity. The issue is whether the agency or the submitter is responsible for the initial evaluation of conformity to standards of quality and objectivity. For the same reasons that federal agencies are responsible for conforming to standards in proposing any risk assessment, it seems incumbent on external submitters to evaluate and document as part of their submission—that is, to assure the agencies and the public—that risk assessment information offered for use in decision-making conforms to the same relevant standards. If federal agencies are themselves responsible for the initial evaluation of all public submissions of risk assessment information (as defined in the bulletin) for conformity with the bulletin's standards, the risk assessment process and related regulation development could be brought to a standstill.
RISK COMMUNICATION

The supplementary information says that “it does not address in any detail the important processes of … risk communication” (OMB 2006, p. 3).2 That omission is inconsistent with reports issued by the NRC (1983, 1989, 1996), the Institute of Medicine (IOM 1999), the Presidential/Congressional Commission on Risk Assessment and Risk Management (PCCRARM 1997), the Canadian Standards Association (CSA 1997), the Royal Commission on Environmental Pollution (RCEP 1998), HM Treasury (2005), and others. In the view of those reports, risk assessment is inseparable from risk communication.

2. Note that the supplementary information does address an aspect of risk communication in that it instructs the agencies always to communicate risk qualitatively, to communicate risk quantitatively whenever possible, to give a range of plausible estimates and their associated limitations when communicating risk quantitatively, and, to the extent feasible, to follow the Safe Drinking Water Act “quality standard for the dissemination of public information about risks of adverse health effects.” A requirement is also included that instructs the agencies to compare the risks that are the subject of agency risk assessments with other familiar risks (see Chapter 4).
The bulletin reflects a simple but incomplete view of risk communication as the last step in a competent risk assessment. Once the technical work has been completed, analysts have a duty to inform those with a stake in the results. That sharing is essential to a democratic society, as well as to the credibility of any regulatory process that depends on the consent of the governed. The relevant scientific research has found that citizens are poorly informed about many of the myriad risks currently or potentially in their lives and about the costs and benefits of possible ways to reduce the risks. It has also found that scientifically developed risk communication can often bring citizens to the level of understanding needed for decision-making purposes. By neglecting the obligation for risk communication, the bulletin is incompatible with NRC and other reports in a way that threatens the credibility of the methodology that it seeks to support.
The accepted view of risk communication is, however, that it constitutes an essential element of all stages of risk assessment, not just the last step. As discussed in the 1996 NRC report Understanding Risk, the primary purpose of risk communication is to share information between interested and affected parties with the aim of improving the quality and relevance of risk assessments; the goal is not to persuade, as indicated by OMB (OMB 2006, p. 11). Those affected by the results of an analysis need an opportunity to provide information on issues based on their experience. For example, regarding exposure analysis, residents of a community or workers in an industry can provide information essential for agencies to use in developing exposure scenarios critical to the risk assessment process. Stakeholders from all points on the spectrum of interested parties—other state and federal agencies, advocacy groups from industry, and affected communities—can be expected to offer perspectives on the risk assessment policies under discussion. In the hazard stage of the assessment, are animal studies reliable predictors of human risk? Regarding the dose-response analysis, what constitutes an adequate margin of safety in a particular situation? Agency decisions on such policy issues influence the course of any assessment, and stakeholder input—that is, communication to an agency—is relevant.
Achieving more transparent analyses as emphasized in the bulletin depends partly on effective risk communication. However, without detailed attention to the broader risk communication principles outlined above, the bulletin is unlikely to accomplish its objective.
As developed in the next chapter, the bulletin omits expected analyses of baseline information, cost-benefit considerations associated with implementing the bulletin, and the potential for adverse impacts on the practice of risk assessment in the federal government.
CSA (Canadian Standards Association). 1997. Risk Management: Guidelines for Decision-Makers. CAN/CSA-Q850-97. Toronto, Canada: Canadian Standards Association.
HM Treasury (Her Majesty’s Treasury). 2005. Managing Risks to the Public: Appraisal Guidance. London: HM Treasury. June 2005 [online]. Available: http://www.hm-treasury.gov.uk/media/8AB/54/Managing_risks_to_the_public.pdf [accessed Oct. 18, 2006].
IOM (Institute of Medicine). 1999. Toward Environmental Justice: Research, Education, and Health Policy Needs. Washington, DC: National Academy Press.
NRC (National Research Council). 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press.
NRC (National Research Council). 1989. Improving Risk Communication. Washington, DC: National Academy Press.
NRC (National Research Council). 1994. Science and Judgment in Risk Assessment. Washington, DC: National Academy Press.
NRC (National Research Council). 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.
OMB (U.S. Office of Management and Budget). 2006. Proposed Risk Assessment Bulletin. Released January 9, 2006. Washington, DC: Office of Management and Budget, Executive Office of the President [online]. Available: http://www.whitehouse.gov/omb/inforeg/proposed_risk_assessment_bulletin_010906.pdf [accessed Oct. 11, 2006].
PCCRARM (Presidential/Congressional Commission on Risk Assessment and Risk Management). 1997. Risk Assessment and Risk Management in Regulatory Decision-Making, Vol. 2. Washington, DC: U.S. Government Printing Office [online]. Available: http://www.riskworld.com/Nreports/1997/risk-rpt/volume2/pdf/v2epa.PDF [accessed October 3, 2006].
RCEP (Royal Commission on Environmental Pollution). 1998. Setting Environmental Standards: 21st Report of the Royal Commission on Environmental Pollution. London: The Stationery Office [online]. Available: http://www.rcep.org.uk/standardsreport.htm [accessed Oct. 25, 2006].