
Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget (2007)


Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.

Appendix E
Questions for Federal Agencies from the Committee and Agency Responses to Questions



BACKGROUND INFORMATION ON NRC REVIEW OF THE OMB RISK ASSESSMENT BULLETIN

The National Research Council’s Committee to Review the OMB Risk Assessment Bulletin has been tasked with conducting a scientific review of the proposed Risk Assessment Bulletin released by the Office of Management and Budget (OMB). More specifically, the committee was asked to determine whether the application of the proposed guidance will meet OMB’s stated objective to “enhance the technical quality and objectivity of risk assessments prepared by federal agencies.” The committee will evaluate the Bulletin’s likely impact on risk assessment practices generally, identify possible omissions from the Bulletin, and determine whether there are circumstances that might limit its applicability. To address its charge, the committee asks that agencies assist it by responding to the questions below.

QUESTIONS FOR ALL AGENCIES POTENTIALLY AFFECTED BY THE OMB BULLETIN

General questions about current risk assessment practices

  • Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency's risk assessments?

  • Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.

  • What is your current definition of risk assessment, and what types of products are covered by that definition?

  • About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?

Questions about OMB’s definition of risk assessment and applicability

  • Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?

Questions about type of risk assessment (tiered structure)

  • In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?

  • In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?

Questions about impact of the Bulletin on agency risk assessment practices

  • If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.

  • If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.

  • If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why.

  • One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.

  • Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?


ADDITIONAL QUESTIONS FOR SPECIFIC AGENCIES

DOE

  • What are DOE’s current overall challenges regarding risk assessment? Specifically, please address DOE sites that have to be remediated (e.g., Hanford); DOE facilities (e.g., research and test reactors and processing plants); special projects (e.g., Yucca Mountain); and other sites (e.g., Pantex). How will the OMB Bulletin impact the quality, conduct, and use of risk assessments in these cases?

EPA

  • Regarding pesticides specifically, what risk-assessment activities will be covered by the Bulletin and what risk-assessment activities will be exempted?

  • Does EPA have any examples of the application of the 1996 requirements of the Safe Drinking Water Act, as described on page 13 of the Bulletin? Can any examples be provided to the committee? If none are available, can EPA provide an explanation?

  • Does EPA have a working definition of “expected risk” or “central estimate”? The agency indicated in its 1986 cancer guidelines (51 FR 33992-34003) that central estimates of low-dose risk, based on “best fit” of the observed dose-response relationship, were meaningless—that “fit” in the high-dose region provided no information about “best fit” in the region of extrapolation. The newer cancer guidelines appear to adopt the same thinking. Has the agency changed its view on this point? If so, why?

FDA

  • Dr. Galson indicated at the public meeting that there were problems with the application of OMB requirements to certain types of assessments. Can FDA suggest specific language to exclude those problematic assessments from OMB requirements, rather than just offering examples of those assessments? In other words, how would FDA describe in general terms the types of assessments it would like to see excluded?


QUESTIONS FOR OMB

  • Dr. Graham discussed the recent perchlorate evaluation as an example that would have benefited from this Bulletin. Does the Bulletin support using a “precursor” of an adverse effect or other mechanistic data as the basis of a risk assessment, as was recommended in the National Academies’ perchlorate review?

  • Is it correct that those submitting data and risk assessments to the government to obtain product registrations, approvals, and licenses are excluded from the requirements of the Bulletin?

  • Will the Bulletin require further review by OMB staff of risk assessments that have been peer reviewed in accordance with established peer review procedures and standards, including publication in a reputable peer reviewed journal?

  • Public participants in the risk assessment and rulemaking processes—industry groups, environmental groups, other governmental entities, individual scientists—often provide risk assessments for agency consideration. Will these outside assessments be held to the same standards as agency-generated assessments, that is, to the requirements in the Bulletin?

  • The 1983 NRC report Risk Assessment in the Federal Government: Managing the Process treats “risk assessment” as a term of art that covers four distinct analyses (hazard identification, dose-response assessment, exposure assessment, and risk characterization), each typically based on a number of separate studies and analyses. The OMB Bulletin defines “risk assessment” to apply to “any document” that “could be used for risk assessment purposes, such as an exposure or hazard assessment that might not constitute a complete risk assessment as defined by the National Research Council.” What is the advantage of defining risk assessment in this way?

  • The Bulletin discusses the importance of risk assessors interacting with decision-makers. What safeguards will be built into the process to protect the scientific process from being framed by the decision-maker instead of the science?


Appendix E
Agency Responses to Questions*

  • Consumer Product Safety Commission

  • Department of Defense

  • Department of Energy

  • Department of Health and Human Services

  • Department of Housing and Urban Development

  • Department of the Interior

  • Department of Labor

  • Department of Transportation

  • Environmental Protection Agency

  • National Aeronautics and Space Administration

  • Office of Management and Budget

*Agencies that were sent the committee’s questions but did not provide responses:

  • Department of Homeland Security

  • Nuclear Regulatory Commission

  • Department of Agriculture



Below are responses developed by the U.S. Consumer Product Safety Commission’s (CPSC) staff to the questions posed by the National Research Council in its scientific review of the proposed Risk Assessment Bulletin released by the Office of Management and Budget. (Note: These comments are those of the CPSC staff; they have not been reviewed or approved by, and may not necessarily represent the views of, the Commission.)

General questions about current risk assessment practices

  • Question: Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency's risk assessments?

CPSC Staff Response


In general, the CPSC staff performs risk assessments addressing a variety of hazards, including toxicity, electrical, fire and burn, and mechanical hazards. Depending on staff and agency needs, CPSC staff conducts all manner of analyses, both qualitative and quantitative. Some analyses constitute complete risk assessments, while others deal with one or more individual steps of risk assessment, e.g., hazard identification or exposure assessment.


The toxicological risk assessment practices used by the CPSC staff are described in the CPSC Chronic Hazard Guidelines (57 FR 46626-46653, 1992). The guidelines include sections on cancer, neurotoxicity, reproductive-developmental toxicity, exposure, bioavailability, and acceptable risk. The staff uses either probabilistic methods or sensitivity analysis to assess uncertainty or variability. The approach to evaluating uncertainty and variability is determined by the analyst on a case-by-case basis, based on the purpose of the risk assessment and the availability of data.

  • Question: Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.

CPSC Staff Response


When performing toxicological risk assessments, staff may encounter a variety of technical and scientific challenges, such as the lack of complete toxicity or exposure data, or the lack of methodologies to develop such data. These challenges are addressed on a case-by-case basis, and may include performing exposure assessment studies, such as migration and emissions studies, and developing novel laboratory methods. The staff also nominates chemicals for further toxicological testing by the National Toxicology Program.


Consider, for example, the CPSC staff’s risk assessment of diisononyl phthalate (DINP), which is a plasticizer used in teethers and toys made from polyvinyl chloride. CPSC convened a Chronic Hazard Advisory Panel (CHAP)1 to address the toxicity and potential

1

Convening a CHAP is a statutory mandate before CPSC can regulate products based on chronic toxicity of a substance, 15 U.S.C. 2077 and 2080(b).


risks from DINP, especially the human relevance of rodent tumors induced by peroxisome proliferation. Lack of exposure data for DINP in children’s products led to the conduct of observational studies of children’s mouthing behavior, as well as the development of methods to measure the migration of DINP from certain toys, and laboratory analysis of toys on the market to determine the proportion that contained DINP.

  • Question: What is your current definition of risk assessment, and what types of products are covered by that definition?

CPSC Staff Response


The staff defines risk assessment following the definition of the National Research Council (1983), in which a risk assessment encompasses hazard identification, dose-response assessment, exposure assessment, and risk characterization. Depending on the agency’s needs, the staff may complete one or more of these steps for a particular task, but a risk assessment generally consists of all four steps.


The definition applies to all consumer products under CPSC jurisdiction, and includes a variety of toxicological and physical hazards. However, the CPSC’s Chronic Hazard Guidelines (57 FR 46626-46653, 1992) were developed primarily to address chronic toxicity.

  • Question: About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?

CPSC Staff Response


The length of the risk assessment process is highly variable, depending on the intended use of the assessment, e.g., for screening or priority setting, or regulatory analysis; the needs of the decision maker; factors such as the availability of data and the amount, quality, and complexity of available data; and the need for public comment and peer review. The simplest assessments may be completed in a matter of days, while more involved analyses take months or years, especially if the agency must perform extensive studies to assess exposure or convene a CHAP.

Questions about OMB’s definition of risk assessment and applicability

  • Question: Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?

CPSC Staff Response


Using the definition in the OMB Bulletin, almost every work product prepared by the CPSC staff could be considered a risk assessment. This would include:

  • Injury or fatality reports;

  • The agency budget, which employs “risk-based” decision making;

  • Product Safety Assessments—short-turnaround assessments of specific products;

  • Toxicity reviews; and

  • Routine testing of products, such as toys and fireworks, for compliance with standards.

Work products from the CPSC’s Directorate for Epidemiology might especially be affected by the expanded definition of risk assessment contained in the Bulletin. For the most part, these work products provide information on injuries and fatalities associated with consumer products and, under the Bulletin’s definitions, would be considered either risk assessments or work products that contain data that are used in risk assessments. Examples include hazard sketches (estimates of the number of product-related injuries and descriptions of injury scenarios), estimates of consumer product-related injuries and deaths as part of Product Safety Assessments, and analyses supporting Commission briefing packages that are associated with regulatory activities.


Some of these work products contain estimates of risk in the form of injuries or deaths per unit exposure. Exposure may be defined as products in use or per unit population, possibly subdivided by age group. Exposure-based analyses are more commonly found in staff work products where there are a large number of injuries or deaths. They are less common when there are relatively few casualties and/or valid exposure measures are not available. In those cases, it is likely that most readers would conclude that the risk is small regardless of the exposure measure selected.

Questions about type of risk assessment (tiered structure)

  • Question: In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?

CPSC Staff Response


There is no clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis. Moreover, the importance to the agency of a specific risk assessment is not necessarily determined only by whether it is used to support a regulation. For example, in the staff risk assessment of DINP in children’s products, it was determined that the risk was low and no regulations were pursued. Nonetheless, it was important to perform the best risk assessment possible to be reasonably certain that the products (soft plastic toys) were not hazardous.


The intended use of a staff risk assessment is usually clear at the outset, e.g., responding to public petitions, evaluating the impact of a regulation, or supporting the development of voluntary standards. In the event that staff objectives or agency needs change during the process, adjustments are made.

  • Question: In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?

CPSC Staff Response


There is currently no clear demarcation between “influential risk assessments” and other risk assessments used for regulatory purposes. Additionally, staff believes that the a priori determination of whether a risk assessment is influential is problematic since the impact of the action may not be easily predicted. For example, a determination that something is an “influential risk assessment” may depend upon both the magnitude of the risk and the eventual scope of the regulatory action.


Because of the practical difficulties in distinguishing between influential and noninfluential risk analyses at the outset of a project, and because of the additional resources that would be required to prepare influential risk assessments, it would be useful for OMB to provide clarification on how agencies should make this determination.

Questions about impact of the Bulletin on agency risk assessment practices

  • Question: If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.

CPSC Staff Response


It is unclear whether the provisions of the Bulletin will have a substantial positive effect. As a matter of routine, the CPSC staff strives to perform risk assessments that are scientifically defensible and of the highest quality by using the CPSC’s Chronic Hazard Guidelines that clearly define how risk assessments should be performed and by having significant CPSC staff risk assessments peer-reviewed in accordance with OMB guidelines. The staff believes that it appropriately applies the best practices in risk assessment consistent with agency needs and resources.

  • Question: If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.

CPSC Staff Response


The staff believes a number of provisions in the Bulletin could have a negative effect on the quality, conduct, and use of risk assessments undertaken by the CPSC. Several examples follow.

  1. While many of the proposed requirements seem reasonable, meeting the standards could come at significant cost in terms of time and other resources. For example, while the proposed Bulletin addresses the need to consider resources in Section III: Goals, it is not clear that the flexibility implied in this section is reflected in the language elsewhere in the Bulletin. CPSC staff believes that lack of flexibility would result in unnecessarily applying requirements that will not actually improve assessments in all cases (i.e., a one size fits all approach is likely not possible or desirable). Further, staff expects that during the process of planning a risk assessment, there will be discussions about which Bulletin standards will be applicable. Such discussions will be a priori, i.e., before the risk assessment has been conducted. Because the applicability of Bulletin standards is ultimately made on the basis of the risk findings and potential regulatory action, it is entirely possible that the standards chosen at the design stage and those required subsequently based on the findings (or potential regulatory action implied by the findings) may be different. This can have serious resource implications.

  2. The Bulletin’s general requirement (Section IV, 6) that Executive Summaries should “place the estimates of risk in context/perspective with other risks familiar to the target audience” could have three negative effects. First, staff resources will be needed for the analysis of other risk assessments to determine (a) comparability and (b) validity of the analysis. In some cases, the comparable risk may be in areas outside the expertise of CPSC staff and outside assistance may be necessary. Second, we expect that there will be challenges to the selection of comparable risks, especially when the choice of appropriate comparisons is limited. Third, putting comparative risk information in an Executive Summary, without an explanation of the context in which it was derived, could mislead the reader.

    If this requirement is implemented, it would be useful for OMB to provide more information on how this requirement might be met.

  3. The requirement to revise each risk assessment as new information becomes available could have a negative impact. CPSC staff agrees that some risk assessments remain a source of information years after they are conducted, and such important assessments should be updated as information becomes available. However, many CPSC risk assessments are conducted for specific purposes, e.g., preliminary assessments conducted to support decisions on the disposition of petitions, and may never again be used for informational or regulatory purposes. While the proposed Bulletin states that resources should be considered in meeting this requirement, CPSC staff believes that the flexibility implied in this statement would not necessarily be realized and that scarce resources would be spent on inconsequential, outdated, assessments.

  4. Section VII of the Bulletin says that the agency shall include a certification as part of the risk assessment document, explaining that the agency has complied with the requirements of the Bulletin and the applicable Information Quality Guidelines. This requirement needs clarification since the method of certification, which is unspecified in the Bulletin, could have resource implications.

  • Question: If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why.

CPSC Staff Response


CPSC staff believes that the effect of the proposed Bulletin on the time course of a risk assessment would in part depend on the level of flexibility afforded the assessor. If, for example, the Bulletin requires certain steps that the assessor previously might have determined to be unnecessary, then the time course might be lengthened significantly. This would be especially applicable to many routine work products, such as screening level risk assessments and other tasks not normally considered risk assessments.

  • Question: One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.

CPSC Staff Response


This issue is addressed in the Chronic Hazard Guidelines. CPSC staff considers “all of the available data” in performing risk assessments.

  • Question: Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?

CPSC Staff Response


The CPSC issued Chronic Hazard Guidelines in 1992, in part, to guide manufacturers in complying with the requirements of the Federal Hazardous Substances Act. CPSC staff generally does not use risk assessments performed by outside groups, but sometimes it will consider an external risk assessment if it is applicable and if it provides information that the staff does not have. To the extent that such externally-derived assessments would then be used by staff in performing its work, the staff believes that it would be appropriate that such assessors follow accepted risk assessment practices, including the CPSC Chronic Hazard Guidelines, as well as other requirements of the federal government.


Department of Defense Response to Questions for All Agencies Potentially Affected by the Draft OMB Risk Assessment Bulletin – July 2006

  1. The Department of Defense (DoD) appreciates the opportunity to respond to the questions posed by the National Research Council’s Committee chartered to review the Office of Management and Budget’s (OMB’s) proposed Risk Assessment Bulletin. The Committee was tasked to determine if the proposed guidance will meet OMB’s stated objective to “enhance the technical quality and objectivity of risk assessments prepared by federal agencies.”

  2. A wide variety of risk and hazard assessments are performed by many different offices and organizations across DoD with varying missions ranging from basic research to civil works. These include risk assessments performed for:

    • Developing DoD environment, safety, and occupational health (ESOH) standards.

    • Assessing site-specific human health and ecological risks from environmental contamination.

    • Assessing ESOH risks from operating weapons systems and military platforms (e.g., community noise level from aircraft operations; risks to military personnel from weapons firing).

    • Assessing materials being considered for use in weapons systems and platforms.

    • Assessing the risks of infectious diseases to DoD’s operating forces.

  3. The responses below focus primarily on risk assessments performed in the functional areas of environmental protection, human safety and health and facilities/civil works. Due to time constraints for developing responses and the sensitive or classified nature of certain national defense programs, the responses do not cover such areas as military operations/threat assessments, munitions, or all areas of weapons systems development and acquisition.

DoD Responses to Questions


1. General questions about current risk assessment practices


a. Please provide a brief overview of your current risk assessment practices.

Risk assessment methods and characterization of uncertainty are dependent upon and tailored to the specific purpose or function being assessed. There are some common approaches prescribed within functional areas, but no over-arching approach for all types of risk assessments.


The following provides some examples of the types of risk assessments performed by DoD and the approach used.

Occupational Health Risk Assessments:


DoD develops internal exposure limits for occupational hazards when a regulatory standard is not available, or when DoD determines the regulatory standard does not sufficiently reduce the risk to DoD personnel or operations. In developing such internal standards, a comprehensive health risk assessment would normally be prepared.


Environmental Risk Assessments:


Site-specific risk assessments for releases of hazardous substances, pollutants, and contaminants resulting in environmental contamination are conducted under the Defense Environmental Restoration Program following the process set forth in the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and the Resource Conservation and Recovery Act (RCRA). The majority of the human health assessments conducted by DoD follow the methodology outlined in the Environmental Protection Agency’s (EPA’s) Risk Assessment Guidance for Superfund (RAGS), Volume I, Human Health Evaluation Manual, Parts A through E. The EPA’s Ecological Risk Assessment Guidance for Superfund (ERAGS) is used for conducting ecological risk assessments. The Department is currently developing a methodology to assess the hazards associated with military munitions and explosives of concern in collaboration with EPA.


The Department occasionally conducts risk assessments pursuant to RCRA authorities. For example, at installations that have hazardous waste combustion facilities or activities, RCRA assessments are usually conducted. The human health portion of RCRA assessments follows the methodology outlined in the Human Health Risk Assessment Protocol for Hazardous Waste Combustion Facilities. These assessments are almost exclusively screening in nature, but the results are often used to make permitting decisions.


Health Hazard Assessments:


Health hazard assessments are conducted following a formal approach or standard operating procedure for various programs within the DoD. The assessments are completed by a team of professional subject matter experts (e.g., industrial hygienists, toxicologists, acoustic engineers, physicians, epidemiologists, etc.) as warranted by the specific assessment. The results of these assessments are documented in a formal health hazard assessment report.


A hazard assessment may use multiple inputs to assess the significance of a hazard including:

  • Benchmark system design standards (e.g., military standards, industry standards, consensus standards);

  • Established risk criteria (e.g., Occupational Safety and Health Administration’s Permissible Exposure Limits, American Conference of Governmental Industrial Hygienists Threshold Limit Values, other military unique criteria); or

  • Experience from previous systems, safety assessments, human factor assessments, operational requirement documents, management documents, test documents, user manuals, and field observations.


Examples of the application of health hazard assessments follow:

  • The control of health hazards associated with the life cycle management of new and modified equipment to identify potential hazards early in the life cycle and eliminate hazards in the design phase.

  • The evaluations of materials being considered for various applications, such as use aboard submarines.

Civil Works:


The Army Corps of Engineers (COE) is expanding the use of risk assessment in dam safety, including a screening-level portfolio risk assessment. Currently, the Louisiana Coastal Protection and Restoration (LaCPR) study is proposed to include a multifaceted risk assessment, the incorporation of large uncertainty scenario drivers, and a risk-informed decision process. The National Research Council (NRC) reviewed the Army Corps of Engineers risk assessment approach to flood damage reduction and published its findings in 1999. Generally, the NRC found the approach a substantial improvement but identified some issues for further consideration. The continuing need for risk assessments was reinforced by the events surrounding Hurricane Katrina.


The COE also uses risk assessments in evaluating the appropriate options for the disposal of dredged material during the maintenance and construction of the nation’s waterways. The COE has developed a variety of guidance manuals and procedures for the evaluation and testing of dredged material. Some of the COE work in this area was reviewed previously by the NRC (e.g., Contaminated Sediments in Ports and Waterways: Cleanup Strategies and Technologies, 1997).


b. Specifically, do you conduct probabilistic risk assessment?


Probabilistic risk assessments may be performed within DoD for past or predictive effects on health, although rarely in support of baseline risk assessments conducted for the Defense Environmental Restoration Program. Probabilistic techniques have been explored but dismissed in a number of cases because of a lack of scientifically defensible technical information; a lack of acceptance by the regulatory community; difficulty in communicating the results to the public; and/or significant time, resource, and cost constraints. Probabilistic risk assessments are not always needed to adequately inform decision-makers and stakeholders about the risks and hazards present and should be performed only when necessary to aid decision making.


Markov chain Monte Carlo analysis has been used for chemical-specific risk assessments in conjunction with the development of pharmacokinetic models.
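As a rough illustration of the probabilistic techniques discussed above, the sketch below propagates assumed input distributions through a simple exposure calculation to produce a distribution of hazard quotients rather than a single point estimate. Every distribution, parameter value, and the reference dose are hypothetical, chosen only to make the example self-contained; they do not reflect any actual DoD assessment.

```python
import random
import statistics

def simulate_hazard_quotients(n_trials=10_000, seed=1):
    """Forward Monte Carlo sketch: propagate assumed input distributions
    to a distribution of hazard quotients (HQ = dose / reference dose).
    All distributions and parameter values are illustrative only."""
    rng = random.Random(seed)
    hqs = []
    for _ in range(n_trials):
        conc = rng.lognormvariate(0.0, 0.5)        # soil concentration, mg/kg
        intake = rng.triangular(50, 200, 100)      # soil ingestion, mg/day
        body_weight = rng.normalvariate(70, 10)    # body weight, kg
        dose = conc * intake * 1e-6 / body_weight  # mg/kg-day
        hqs.append(dose / 1e-6)                    # hypothetical RfD, mg/kg-day
    hqs.sort()
    return {"median": statistics.median(hqs),
            "p95": hqs[int(0.95 * n_trials)]}

result = simulate_hazard_quotients()
print(result)  # summary of the simulated HQ distribution
```

The output is a range of HQ estimates (here, the median and 95th percentile), which is the kind of risk characterization a probabilistic assessment supports and a deterministic point estimate does not.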


c. Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency's risk assessments?


Within a given functional area and purpose, there is a common approach to the conduct of DoD risk assessments (e.g., environmental site assessments follow the EPA RAGS and ERAGS guidance addressed above).


There is not a common approach to uncertainty analysis across the diversity of risk assessments that DoD conducts. Typically, uncertainty and variability are addressed either qualitatively or quantitatively. The uncertainty analyses performed in individual risk assessments vary by the type of assessment produced and by time and resource constraints. Levels of effort are not consistent: some uncertainty sections in risk documents are very detailed; others are not. Variability is often addressed by statistical approaches and spatial analyses.


Below are some specific comments related to uncertainty analyses found in DoD risk assessments:

  • Within the uncertainty sections of the assessment, specific areas may be examined (e.g., for ecological risk assessments, area use factors (AUFs) are typically considered).

  • While cancer risks and hazard quotients are generally summed across chemicals and exposure pathways, there is usually no discussion regarding the underlying scientific uncertainty of this approach.

  • It is common practice to direct environmental sampling in a biased manner (e.g., directed to wastewater outfalls). This biased approach is consistent with most regulatory guidance and attempts to ensure human health protection. This practice incorporates a wide margin of safety to account for uncertainty as to the exact exposure point and variability in types of exposure. However, the uncertainty is not captured by current site attribution methods. It is common practice to use this type of biased sampling data in comparison to ambient/background for the purpose of attributing contamination to the entire site.

  • In the face of scientific uncertainty associated with site characterization, it is common practice to use either the maximum detected concentration or, if sufficient data are available, the 95% upper confidence limit (UCL) of the mean concentration as representative of the site. The associated uncertainty and variability are rarely included in the risk characterization, although they are sometimes discussed in a qualitative manner.

  • Although the scientific uncertainty associated with chemical-specific/toxicological risk assessment (e.g., IRIS risk assessment) is carried into each site-specific chemical risk assessment, risk characterizations rarely discuss the uncertainty associated with the safety and uncertainty factors assigned to toxicity criteria found in IRIS.
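To make the 95% UCL practice noted above concrete, the following sketch computes the one-sided Student-t UCL on the mean, the form used in EPA exposure-point-concentration guidance for roughly normal data. The concentration values are invented, and the t critical value is passed in by hand because the Python standard library does not provide t-distribution quantiles.

```python
import math
import statistics

def ucl95_t(samples, t_crit):
    """One-sided 95% upper confidence limit on the mean
    (Student-t UCL, appropriate for roughly normal data).
    t_crit is the one-tailed 0.95 t quantile for n-1 degrees of
    freedom, supplied by the caller."""
    n = len(samples)
    mean = statistics.fmean(samples)
    std_err = statistics.stdev(samples) / math.sqrt(n)
    return mean + t_crit * std_err

# Hypothetical soil concentrations (mg/kg) from 10 samples
concs = [12.0, 8.5, 15.2, 9.9, 22.1, 7.4, 11.8, 30.6, 10.2, 14.0]
ucl = ucl95_t(concs, t_crit=1.833)  # t(0.95, df=9) ≈ 1.833
print(f"mean = {statistics.fmean(concs):.1f}, 95% UCL = {ucl:.1f} mg/kg")
```

Using the UCL rather than the sample mean builds in the margin of safety the text describes, while the spread between the two conveys how much the estimate is driven by sampling uncertainty.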

d. Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.


Listed below are some of the substantial scientific and technical challenges DoD encounters when conducting risk assessments.

  • Assigning Risk Assessment Codes (RACs) for health hazard assessments

    When assigning RACs for life cycle management of new and modified equipment and other safety analyses, variability is introduced because of the subjective, professional judgment used in assigning severity and probability values. While risk assessments may use state-of-the-art techniques, they have inherent limitations based on the capabilities of current technologies to predict ESOH effects (e.g., limitations in laboratory toxicology studies to predict human health effects related to new materials).

  • Consistency and satisfying the various regulatory agencies in regards to transparency

    The degree of transparency that one agency considers minimal or insufficient may be considered adequate by another. Setting a minimum standard for transparency would facilitate more efficient production of risk assessments. In addition, the various federal and state program offices with which DoD interacts often interpret the same guidance documents or regulations differently. Consequently, the risk assessment “target” is constantly moving, making it difficult to produce a risk assessment that meets all regulatory requirements.

  • Effectively communicating complex and highly technical risk assessment information

    Stakeholders unfamiliar with the risk assessment process, or individuals with an emotional attachment to the issue, present a challenge for risk communication. Mandating even more complex risk assessments, such as probabilistic risk assessments, can amplify this challenge. Standardizing the types of risk assessments and more clearly defining when and how each type is to be conducted would be a significant improvement.

  • A lack of scientifically defensible and/or agreed-upon input information

    Toxicity data, especially for the acute portion of risk assessments and for dermal pathways, are absent for many of the chemicals included in our risk assessments. Likewise, fate and transport data are often unavailable, as are scientifically defensible exposure inputs and statistical distributions for those inputs. This absence of information has hindered the use and performance of probabilistic risk assessments. Targeting research to fill these information gaps would allow risk assessors to produce more comprehensive and technically defensible products.

  • Calculating risk for intermittent exposure(s)

    From an applied perspective, exposures being assessed may be intermittent, and the risk assessment model and associated toxicity data are not sufficiently refined to account for intermittent exposures. Consequently, exposures may be averaged over some exposure duration, resulting in an underestimation or overestimation of risk, depending on the chemicals involved. Further development of the existing model, or development of a new model specific to intermittent exposures, would be a good first step toward removing this challenge. Toxicity data representative of intermittent exposures would also need to be developed.

  • Over-estimating risk


    The current approach to ensuring health protection in the face of scientific uncertainty was devised almost 30 years ago. That approach is to multiply a default factor of up to 10 for each of four types of uncertainty assumed to act independently. Uncertainty factors are applied for inter-human variability/sensitivity, animal to human extrapolation, LOAEL to NOAEL extrapolation, and sub-chronic to chronic extrapolation. Today, many health risk assessors believe that multiplying default uncertainty factors overestimates risk. When coupled with the use of non-peer reviewed toxicity values, the approach may lead to significantly overestimated risk values and thus overly conservative cleanup levels.

  • Evaluating the vapor intrusion pathway


    Regulators frequently require DoD to evaluate the vapor intrusion pathway under residential scenarios. This is problematic because (1) the methodology remains technically complex and controversial among risk assessors; (2) residential indoor air is not regulated; and (3) standards for residential indoor air have not been established.

  • Lack of toxicity values for emerging contaminants


    Regulators frequently request that DoD conduct risk assessments on contaminants for which toxicity values have not been established and for which inadequate toxicological information exists.
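The averaging problem described under “Calculating risk for intermittent exposure(s)” above can be sketched with the general chronic-intake form used in EPA’s RAGS guidance: when episodic exposure is smeared across a long averaging time, the resulting average daily dose looks small regardless of how intense the individual episodes were. All parameter values below are hypothetical.

```python
def average_daily_dose(conc, intake_rate, exposure_freq, exposure_dur,
                       body_weight, averaging_time_days):
    """Chronic average daily dose (mg/kg-day), following the general
    ADD = (C * IR * EF * ED) / (BW * AT) form used in RAGS guidance.
    Intermittent exposure is averaged over the full averaging time."""
    return (conc * intake_rate * exposure_freq * exposure_dur) / (
        body_weight * averaging_time_days)

# Hypothetical: drinking-water exposure 30 days/year for 5 years,
# averaged over the full 5-year period. The average hides the peaks.
add = average_daily_dose(conc=5.0,            # mg/L in water
                         intake_rate=2.0,     # L/day
                         exposure_freq=30,    # days/year
                         exposure_dur=5,      # years
                         body_weight=70,      # kg
                         averaging_time_days=5 * 365)
print(f"ADD = {add:.4f} mg/kg-day")  # small average despite episodic peaks
```

For chemicals whose toxicity depends on peak rather than cumulative dose, this averaging is exactly where the under- or overestimation described above arises.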
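The uncertainty-factor arithmetic described under “Over-estimating risk” above can be sketched in a few lines. The point of departure and the factor selections below are hypothetical, but the structure — dividing the POD by the product of up to four default factors of 10 — is the standard derivation the text criticizes.

```python
def reference_dose(point_of_departure, uf_interhuman=10, uf_animal=10,
                   uf_loael=1, uf_subchronic=1):
    """RfD = POD / (product of uncertainty factors).
    Each default factor of up to 10 covers one source of uncertainty
    (inter-human variability, animal-to-human extrapolation,
    LOAEL-to-NOAEL extrapolation, subchronic-to-chronic extrapolation);
    multiplying them is what the text argues can overstate risk."""
    composite_uf = uf_interhuman * uf_animal * uf_loael * uf_subchronic
    return point_of_departure / composite_uf, composite_uf

# Hypothetical NOAEL of 50 mg/kg-day from a subchronic animal study:
rfd, uf = reference_dose(50, uf_interhuman=10, uf_animal=10, uf_subchronic=10)
print(f"composite UF = {uf}, RfD = {rfd} mg/kg-day")
```

Stacking three default factors of 10 drives the composite factor to 1,000, so a modest change in which factors apply shifts the resulting RfD (and any cleanup level derived from it) by an order of magnitude.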

The following is a list of subjects identified by DoD risk assessment professionals as lacking policy or guidance, or consistency in policy or guidance.

  • Consistent and reasonable policies and practices on the use of background data (anthropogenic and naturally-occurring background) and quantifying and accounting for background.

  • Guidance for identifying and characterizing genetic polymorphisms (genotype-environment interactions) and inter-individual differences in susceptibility to toxicants.

  • Consistent policies and practices on evaluating ecological habitats.

  • Guidance for estimating exposure concentrations of contaminants in soil and groundwater in human-health risk assessments.

  • Policy or requirements for defining the extent of site characterization required to inform a risk management decision for a site.

  • Guidance for determining home ranges for receptors being evaluated in ecological risk assessments.

  • Guidance, policy, or requirements for selecting toxicity values from a range of possible values.

  • Guidance for determining the weight-of-evidence in carcinogen assessments.

  • Policy or requirements for the appropriate use of screening concentrations in risk assessments.

  • Guidance for addressing inconsistencies with statistical approaches for use in risk assessments.

  • Guidance or standards for assessing risks of contaminants when analytical limits of detection or analytical capabilities are not available to meet existing public health goals.

e. What is your current definition of risk assessment, and what types of products are covered by that definition?

For different programs and different agencies within DoD, there are slightly different definitions that relate specifically to the type of assessment being performed. Some of the definitions are presented below:


Occupational Health Program:


Risk assessment is defined as a structured process to identify and assess hazards; it is an expression of potential harm, described in terms of hazard severity, accident probability, and exposure to hazard. Sub-definitions follow:

  • Hazard Severity. An assessment of the expected consequence, defined by degree of injury or occupational illness that could occur from exposure to a hazard.

  • Accident Probability. An assessment of the likelihood that, given exposure to a hazard, an accident will result. An accident receives a specific classification based on established criteria.

  • Exposure to Hazard. An expression of personnel exposure that considers the number of persons exposed and the frequency or duration of the exposure.
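The severity and probability elements defined above are typically combined through a matrix lookup to yield a risk assessment code (RAC). The matrix below is a hypothetical illustration rather than any particular DoD component’s table; actual category definitions and cutoffs vary.

```python
# Hypothetical risk assessment code (RAC) matrix: rows are hazard
# severity categories (I = catastrophic ... IV = negligible); columns
# are mishap probability categories (A = frequent ... E = improbable).
# RAC 1 is the highest risk. Actual matrices vary by DoD component.
RAC_MATRIX = {
    "I":   {"A": 1, "B": 1, "C": 2, "D": 3, "E": 4},
    "II":  {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5},
    "III": {"A": 2, "B": 3, "C": 4, "D": 5, "E": 5},
    "IV":  {"A": 3, "B": 4, "C": 5, "D": 5, "E": 5},
}

def assign_rac(severity: str, probability: str) -> int:
    """Look up the risk assessment code for a severity/probability pair."""
    return RAC_MATRIX[severity][probability]

print(assign_rac("I", "B"))   # catastrophic and probable -> RAC 1
print(assign_rac("IV", "E"))  # negligible and improbable -> RAC 5
```

Because both inputs rest on professional judgment, two assessors can reasonably land in adjacent cells — the source of the variability in RAC assignment discussed earlier.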

Environmental Program:


Risk assessment is the collection and evaluation of scientific information for the purpose of determining potential adverse health impacts to human and/or ecological populations from exposure to substances (chemical or biological) released into the environment.


Health Hazard Assessment Program:


Risk assessment is an organized process used to describe and estimate the likelihood of adverse health outcomes from occupational or environmental exposures to hazards. It consists of four steps: hazard identification, toxicity assessment, exposure assessment, and risk characterization.


In the Defense Environmental Restoration Program, a site-specific risk assessment is used in risk management decisions to determine the extent of risks at a site and the need for response actions.


Health hazard assessment (HHA) is a methodical evaluation of the consequences of exposure to a hazard(s), with particular focus on potential adverse human effects. The HHA process may incorporate hazard identification, characterization, assessment, and communication. It may be used to support a regulatory program or policy position and meet one or more of the following criteria:

  • Focus on significant emerging issues

  • Support major regulatory decisions or policy/guidance of major impact

  • Establish a significant precedent, model, or methodology

  • Have significant inter-agency implications

  • Consider an innovative approach for a previously defined problem, process, or methodology

  • Satisfy a statutory or other legal mandate for peer review

Civil Works


The COE does not have risk "terms of reference" or overall risk assessment standards. As the COE explores an appropriate approach to implementing the OMB Bulletin, the necessary Engineering Regulations will be revised in accordance with the requirements of Section IV of the Bulletin.


f. About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?


The length of time to produce a risk assessment varies greatly depending on the complexity of the subject and the type of risk assessment. Health hazard assessments, as addressed in this response, typically take 30 to 90 days from receipt of a complete package for review. Human health risk assessments for Defense Environmental Restoration Program sites can vary from months for simple sites to five years or more for complex sites.


The time needed to produce a risk assessment depends greatly on the amount of information available at the initiation of the risk assessment and/or the specific requirements for conducting the assessment. The time required can be significant in situations where (1) no sampling has been performed, (2) risk communication is just beginning, (3) toxicological information does not exist or has to be developed, and/or (4) the exposure/health effects are not known or well understood. In urgent situations, there may be a need to provide as accurate an estimate of risk as possible in a very short timeframe. In these cases, a risk estimate may be made in as little as a few hours.


2. Questions about OMB’s definition of risk assessment and applicability


Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?


The term “risk assessment” is broad, and the OMB Bulletin correctly recognizes that it can involve many different methodologies in the varied disciplines that use the assessment of risks as a decision-making tool. However, we do not believe the Bulletin will significantly change what products we consider risk assessments at this time.


The applicability of the OMB Bulletin requirements to some DoD activities and projects is somewhat unclear. For example, the second paragraph of Section II states, “[t]his Bulletin does not apply to risk assessments that arise in the course of individual agency adjudications or permit proceedings…” Additional confusion arises from the sentence, “[t]his Bulletin also shall not apply to risk assessments performed with respect to inspections relating to health, safety, or environment.” Therefore, it is possible that the Bulletin would not be applicable to some inspection work products.


3. Questions about type of risk assessment (tiered structure)


a. In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?


Typically, there is a clear distinction between risk assessments used for regulatory analysis and those that are not (i.e., those used for internal DoD purposes). Many of the environmental risk assessments are site-specific and are performed to meet statutory (e.g., CERCLA) and regulatory requirements. In contrast, chemical-specific toxicological risk assessments are done to determine reference doses or concentrations; these typically have the potential to affect the state of the science, and the published values may be used by other agencies for regulatory purposes. These are typically done by DoD for military-specific chemicals.


Other risk assessments may be done to answer military-specific, force protection, or threat assessment questions.


b. In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?


There is no clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes.


4. Questions about impact of the Bulletin on agency risk assessment practices


a. If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.


The general framework provided by the Bulletin will be useful to DoD in improving the scientific rigor of its risk assessment procedures. Below are some specific improvements that will likely be realized:

  • Increased transparency of the science and assumptions in the risk assessment.

  • Improving the scientific defensibility of risk assessments as a result of the provisions listed in Section IV: “General Risk Assessment and Reporting Standards.”

  • Defining the central tendency (CT) as an “expected effect” and the requirement to express risk as a range should produce more realistic risk management decisions. However, it would be beneficial if the OMB Bulletin provided examples of when it may be appropriate to regulate using the expected effect vice the most conservative estimate.

  • A more comprehensive characterization of the sources of uncertainty, via the use of quantitative approaches, will be included in the risk assessments performed. We consider this extremely important and beneficial for chemical-specific risk assessments (it may not be as necessary for more routine, site-specific risk assessments). Perhaps more important is the recognition and use of this uncertainty information in risk management decisions.

  • More detailed discussion of the full range of uncertainty generated by the modeling of data (the strengths and weaknesses associated with various assumptions and models) will be produced; such discussion is frequently lacking in health risk assessments. The modeling assumptions include those associated with dose-response curves and the point of departure (POD), as well as dose ranges and associated likelihood estimates for identified human health outcomes.

  • More detailed discussions of variability (the range of risks reflecting true differences among members of the population due to, for example, differences in susceptibility) and uncertainty (the range of plausible risk estimates arising because of limitations in knowledge) will have a positive effect on the outcome of the risk assessment. Failure to characterize variability and uncertainty thoroughly can convey a false sense of precision in the conclusions of the risk assessment.

  • For cancer health risk estimates, quantitative estimates of the POD corresponding to central, upper-bound, and lower-bound estimates; the use of different plausible POD values; different plausible mathematical functions fit to the observed epidemiological data, where available, and different assumptions for estimating historical exposures among human subjects (epidemiological data), when applicable, should significantly improve the risk assessments.

  • For non-cancer health risk estimates in chemical-specific risk assessments, characterizing the uncertainty associated with fitting a dose-response relationship to the available data and with selecting a POD should likewise improve the assessments. Where applicable, it should be acknowledged when the available information is insufficient to support a meaningful point estimate.
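As a minimal sketch of how the choice among central, upper-bound, and lower-bound POD estimates affects a cancer risk estimate, the following applies linear low-dose extrapolation (slope equals the benchmark response divided by the POD) to three hypothetical POD values; all numbers are invented for illustration.

```python
def linear_slope_factor(bmr, pod):
    """Cancer slope factor from linear low-dose extrapolation:
    slope = benchmark response / point of departure, in (mg/kg-day)^-1.
    Illustrative only; the choice of POD drives the final estimate."""
    return bmr / pod

# Hypothetical benchmark dose modeling output for a 10% extra risk (BMR):
pods = {"lower-bound (BMDL)": 2.0, "central (BMD)": 3.5, "upper-bound": 5.0}
for label, pod in pods.items():
    slope = linear_slope_factor(0.10, pod)
    print(f"{label:>20}: slope = {slope:.3f} per mg/kg-day")
```

Reporting the slope factor for all three PODs, rather than only the lower bound, is the kind of range-based characterization the Bulletin calls for.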

b. If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.


With adherence to the provisions listed in Section V: “Special Standards for Influential Risk Assessments” and in Section IV: “General Risk Assessment and Reporting Standards,” the performance of risk assessments will be more labor and resource intensive.


Additional labor will be required to:

  • Collect the necessary information and data to characterize risk as outlined in the OMB Bulletin.

  • Negotiate with regulatory authorities about the scope of the risk assessment. When deciding specific inputs, there will now be a wider range of choices, rather than one or two choices.

  • Communicate the results to people unfamiliar with the risk assessment process, due to the increased complexity of the risk characterization portion and the increase in the amount of material requiring explanation.

  • Obtain the increased level of expertise needed to perform quantitative uncertainty analysis for completing a risk assessment. Finding such expertise in a timely fashion may present challenges.

  • Review products, given the increased time associated with more complex risk assessments.

c. If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why?


The time required will vary depending on the organization and the type of risk assessment being conducted. No change is anticipated for some risk assessments, while a significant increase in time may be required for others. Some organizations within DoD believe that adherence to the provisions listed in Section V: Special Standards for Influential Risk Assessments and in Section IV: General Risk Assessment and Reporting Standards may impact the ability to meet critical and/or regulatory prescribed deadlines unless the allowable timeframes are extended to accommodate the expanded assessments.


d. One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.


A requirement to give weight to both positive and negative studies in light of each study’s technical quality would generally be a beneficial change. The key point in the question is “in light of each study’s technical quality.” DoD upholds the principles of scientific objectivity and consideration of all peer-reviewed literature, with an emphasis on appropriate and technically relevant study design for the research.


The ability to select site-specific exposure assumptions and toxicity parameters based on the latest science, vice the default values required by some regulatory agencies, would be very beneficial. Risk assessors should have the option to evaluate the various studies and to justify in the risk assessment any deviation from the standard default value(s). Currently, some agencies are reluctant to allow the use of site-specific exposure assumptions.


The EPA’s Final Cancer Guidelines state that well-conducted human studies that fail to detect a statistically significant positive association may have value and should be judged on their own merit. However, it may be difficult to have EPA consider negative studies of “equal weight” with positive studies, particularly since the Cancer Guidelines also contain a default assumption that, when cancer effects are not found in an exposed human population, this information by itself is generally not sufficient to conclude that the chemical poses no carcinogenic hazard to potentially exposed human populations.


Deciding whether to give weight to both positive and negative studies in site-specific risk assessments could be determined by the complexity of the risk assessment necessary for a scientifically sound decision and the benefits, if any, of conducting such an evaluation, since this may significantly increase the time and resources needed to conduct the assessment. If the requirements of the risk assessment include the development of parameters for use in the risk assessment, both positive and negative studies are likely to be used. If parameter development is not required, one may choose to use default parameters.


e. Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?


Products produced by external groups are occasionally used and frequently reviewed by DoD. Risk assessments from external groups are often used when there is a lack of existing regulatory guidance. Contractors frequently conduct human health and ecological risk assessments as part of the DoD Installation Restoration Program. DoD also considers risk assessments published in open scientific literature when examining chemicals for which no regulatory standards exist. Although it would result in an increased contract requirement, it would be beneficial if contractors and private industry met the OMB Proposed Bulletin requirements. Potential benefits include:

  • More consistent DoD risk assessments,

  • More rapid quality assurance/quality control (QA/QC) review(s),

  • Increased transparency when using products prepared by others, and

  • Better information on which a risk manager can base a decision.

The use by federal agencies of risk assessments submitted by external organizations, such as consultants and private industry, may increase the pace of such risk assessments and increase the number of toxicity benchmarks available by relieving EPA of the burden of developing all toxicity benchmarks. The use of credible and scientifically defensible risk assessments by external groups would allow EPA to focus on those chemicals of national importance.


Assuming a “zero-sum” game in most programs, the aforementioned requirements may result in additional costs per assessment and thus fewer assessments may be conducted. The value of additional information and analysis would have to be considered along with the importance and impact of the assessment and the effects on overall programs.


DEPARTMENT OF ENERGY

Response to Questions By the National Research Council Regarding OMB’s Proposed Risk Assessment Bulletin
July 26, 2006

National Research Council’s general questions about current risk assessment practices:

  • Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency's risk assessments?

  • Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.

  • What is your current definition of risk assessment, and what types of products are covered by that definition?

  • About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?

DOE Response:


We will address the third bullet first -- What is your current definition of risk assessment, and what types of products are covered by that definition?


In addressing these questions, we do not make a distinction between risk assessments performed by management and operating (M&O) contractors and the Department of Energy (Department or DOE) itself. There is no single definition of “risk assessment”; the term has numerous meanings and uses throughout DOE operations. There are project and budget risk assessments, whose purpose is to assess the risk of specific engineering options and the funding risks associated with proceeding with a project. There are accident-related risk assessments, whose goal is to assess the probability of a given event and its consequences in order to plan mitigating actions or design requirements. There are health and environmental risk assessments, whose goal is to assess the potential risk to the public, the environment, or the workforce from various DOE actions or alternative actions to support decision-making. There are risk assessments that relate to regulatory decisions. These can overlap; for example, a health risk assessment may be part of a project risk assessment. Common environment, health, and safety related risk assessments conducted by DOE include:

  • Risk/dose1 assessments used in optimization (As Low As Reasonably Achievable, ALARA) studies to support radiation control decisions, which can include:

    • Selection of control equipment to minimize releases to the environment

    • Selection of operating procedures that protect workers and the public

    • Development of authorized limits for control and release of property

  • Risk/dose assessments in National Environmental Policy Act (NEPA) documents such as Environmental Impact Statements to support DOE programmatic or project decisions

  • Risk/dose assessments that support safety analysis reports (SARs) to support nuclear safety and facility safety planning

  • Risk/dose assessments in the form of performance assessments and composite analyses to support waste management authorizations.

In responding to these questions, we are assuming that the National Research Council’s primary interest, and the focus of the Office of Management and Budget’s (OMB) risk guidelines, is health and environmental related risk assessment consistent with the OMB definition in its proposed Risk Assessment Bulletin. We note that most of the risk assessments conducted by DOE would likely not be “influential risk assessments” as defined in OMB’s proposed Bulletin.


Regarding the first bullet -- Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency's risk assessments?


Probabilistic risk assessment is sometimes employed by DOE in areas such as performance assessments for waste management facilities, safety analyses, and analyses to support real property release limits (e.g., cleanup standards). Historically, however, deterministic assessments have been more frequently used. Except where required by regulation or statute (e.g., 40 CFR Part 191 requires certain probabilistic assessments to demonstrate compliance of high-level waste disposal repositories with the standard), the Department allows the analyst (risk assessor) the flexibility of using either deterministic or probabilistic approaches. In either case, analyses usually include an evaluation of uncertainty and variability (or parameter and assumption sensitivity) in the analytical results of the risk assessment. In some cases, these may be addressed through qualitative evaluations or estimates of doses or risks under bounding conditions.


DOE provides guidance and tools to support such assessments and, to the extent possible, to standardize them for the specific type of assessment. For example, DOE developed and maintains the RESRAD family of codes for conducting dose and risk assessments to support radiological decontamination and cleanup decisions for lands, structures and other property (http://web.ead.anl.gov/resrad/home2/). These codes and models provide the ability to conduct either deterministic assessments, with sensitivity and uncertainty assessment capability, or probabilistic assessments.

1

It is noted that because one of the major regulatory functions of the Department is radiological protection and nuclear safety, in its assessments, radiation dose is frequently used instead of “health risk.” We consider dose to be a surrogate for risk and hence, in responding to the questions, we use risk assessment and dose assessment interchangeably.


There may also be specific requirements for assessing bounding risk. For example, in the development of authorized limits for release of property, DOE requires doses to be assessed for likely and expected uses, with contingency analyses for the worst plausible use (i.e., the use causing the highest potential human exposure given a plausible use) of the property to be released. In the case of low-level waste (LLW) disposal site performance assessments, the primary performance standard assumes undisturbed performance of the closed site; however, additional assessments are required to determine risks caused by intrusion into the site. DOE provides guidance for most areas to help standardize risk assessments, but given their varied purposes, a risk assessment for one activity will not necessarily be the same as for another. In addition, consistent with the proposed OMB Bulletin, DOE recommends that the resources expended on a risk or dose assessment be commensurate with its importance, taking into consideration the nature of the potential hazard, the available data, and the needs of the decision. DOE is currently drafting policy and guidance on risk methodology consistent with OMB Information Quality Guidelines in response to a request from the Defense Nuclear Facilities Safety Board.


Regarding the second bullet -- Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.


The primary challenge for DOE when conducting risk assessments usually relates to maintaining consistency among risk assessments and explaining differences when they are warranted. In trying to make assessments representative of real risks, it is frequently difficult to avoid assessing worst-case conditions, particularly where these have been used by others in the past. There is always one more scenario, or one more approach, that someone feels deserves assessment, and it is frequently a challenge to balance the desire to evaluate some new option against the need to make a timely decision and complete the action. Integrating or taking into consideration the newest science is also a challenge. It is generally difficult to move away from a particular practice or data set used in the past, particularly when the new approach may be less conservative (e.g., a situation where the linear no-threshold model of risk may be in question). However, as noted in the responses to the questions on “risk assessment practices” below, for the most part DOE does not deal with such issues because we base our risk assessments on risk and dose factors developed by other agencies and organizations.


One of the technical difficulties is the paucity of data. For example, DOE facilities have very low accident rates, which create large uncertainties in the determination of accident likelihood, especially for high-consequence events. Unlike commercial nuclear plants, DOE facilities vary significantly in design and hazard, making it challenging to develop models of system behavior and to predict outcomes.


Regarding the fourth bullet -- About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?


The time it takes DOE to complete a “risk assessment” varies greatly and is typically commensurate with the scope, complexity and controversy associated with a project. The range may be from a few months to several years for extremely complex or controversial projects.


National Research Council questions about OMB’s definition of risk assessment and applicability:

  • Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?

DOE Response: No. As noted above, the term has been used more broadly at DOE than in the definition of “risk assessment” in the proposed OMB Bulletin, but all products that meet the OMB definition of risk assessments have always been considered risk assessments by DOE.


National Research Council questions about type of risk assessment (tiered structure):

  • In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?

DOE Response: Yes, the analyst and reviewers know if the risk assessment being developed is to comply with a specific regulation, to support development of a regulation or just to support decision-making with regard to design development, alternative selection or impact assessment.

  • In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?

DOE Response: Yes, the proposed OMB definition seems clear, and the proposed Bulletin contains adequate discussion to use as a basis for such determinations. More specifically, the proposed Bulletin states that the term “influential” should be interpreted consistently with OMB’s government-wide Information Quality Guidelines and the agencies’ guidelines. DOE has found the term “influential scientific information,” which establishes the same standard, to be clear and workable in practice. The term “influential risk assessment” in the proposed OMB Bulletin has not been used by DOE in the past, but in the future, it will be clear to DOE program offices when an assessment is an influential risk assessment.


National Research Council questions about impact of the Bulletin on agency risk assessment practices:

  • If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.

  • If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.

DOE Response: We anticipate no substantial effects on agency risk assessments. DOE requirements and guidance for risk assessments are generally consistent with the OMB proposed guidance and hence, risk assessments should not be greatly influenced. As previously noted, most DOE risk assessments would not be classified as “influential risk assessments” under the OMB definition.

  • If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why?

DOE Response: The only issue identified by DOE relates to the peer review process, which Section III of the proposed Risk Assessment Bulletin includes as one of the goals of risk assessment. Although DOE risk assessments undergo peer review, the explicit requirements in OMB’s Bulletin for Peer Review could, under some circumstances, reduce flexibility and add cost and time to a project if additional mechanisms for peer review need to be employed.

  • One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.

DOE Response: For the most part, this is not relevant to DOE risk assessments. Risk factors and dose factors, which typically are the most controversial elements of a risk assessment and for which there are differing scientific views and studies, are not generated by the Department but rather by other organizations or agencies. For example, for radiation dose and risk assessment DOE uses the recommendations of the Environmental Protection Agency (EPA) (e.g., Federal Guidance Reports No. 11, 12, and 13), the National Academies (e.g., the BEIR reports), the International Commission on Radiological Protection (ICRP), and the National Council on Radiation Protection and Measurements (NCRP).


Similarly, for toxic chemicals, DOE uses risk estimates from EPA (e.g., the IRIS database and HEAST tables). DOE does conduct research as input to its “influential risk assessments,” such as DOE epidemiological studies (e.g., of atomic bomb survivors and DOE workers) and the DOE Office of Science low-dose radiation research program (Link to Low-dose Radiation Research Program homepage). However, DOE has not historically conducted the independent studies to consolidate these assessments, but rather has depended on others, such as EPA and the National Academies, to use the data and studies developed by DOE. DOE encourages the development of mechanisms to rapidly and routinely update the science, such as that described above, underlying the risk assessment process.

  • Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?

DOE Response: As noted in the responses to other questions, DOE depends almost exclusively on the risk assessments prepared by others (primarily other Federal agencies or national and international standards organizations) for the toxicity estimates, carcinogenicity estimates and other risk and dose factor information. These values are used in DOE assessments of risk from its operations such as its NEPA documents, performance assessments and so forth. For risk assessments that we conduct such as those supporting the development of cleanup decisions, we frequently support and as appropriate use independent reviews. Therefore, it would be useful if these outside groups followed the OMB Bulletin.


ADDITIONAL QUESTIONS BY THE NATIONAL RESEARCH COUNCIL FOR DOE

  • What are DOE’s current overall challenges regarding risk assessment? Specifically, please address DOE sites that have to be remediated (e.g., Hanford); DOE facilities (e.g., research and test reactors and processing plants); special projects (e.g., Yucca Mountain); and other sites (e.g., Pantex). How will the OMB Bulletin impact the quality, conduct, and use of risk assessments in these cases?

DOE Response: DOE’s overall challenges regarding risk assessments conducted for activities such as site remediation are related to the need to obtain from the regulatory agencies, such as EPA, the best and most up to date scientific data for input into the risk assessment models (e.g., carcinogenicity data for specific chemicals of concern) and the best data for the default assumptions.


Unless the Bulletin results in changes to other agencies’ regulations or guidance, it is not likely to significantly affect the quality, conduct, or use of risk assessments in these cases. If there is a significant impact on agencies such as EPA -- which requires the risk assessments for regulatory compliance, issues the guidance on how to prepare them, maintains many of the databases of “approved” input data for the models (such as carcinogenicity values), and provides many of the default assumptions -- then DOE in turn will be affected. In addition, the reporting requirements could prove onerous without improving the overall quality of the risk assessments.


DOE encourages the development of mechanisms to rapidly and routinely update the science underlying the risk assessment process. This is of particular importance to DOE in the context of risk assessment for potential radiation exposure. The Office of Science supports a research program specifically directed at low-dose radiation effects. Integration of scientific advances such as those resulting from the low-dose program into the risk assessment process is necessary to insure that decisions incorporate the latest science.


The statement on page 10 of the proposed Bulletin regarding exemption is very important, but, as we explain below, should be revised: “This Bulletin does not apply to risk assessments that arise in the course of individual agency adjudications or permit proceedings, unless the agency determines that: (1) compliance with the Bulletin is practical and appropriate and (2) the risk assessment is scientifically or technically novel or likely to have precedent-setting influence on future adjudications and/or permit proceedings. This exclusion is intended to cover, among other things, licensing, approval and registration processes for specific product development activities. This Bulletin also shall not apply to risk assessments performed with respect to inspections relating to health, safety, or environment.”


The qualifications in (1) and (2) in that paragraph should be removed: if the purpose of a risk assessment is to seek a permit or a license, such as from the Nuclear Regulatory Commission (NRC) or EPA, then those agencies ought to specify the scope and content of the assessment, not DOE. To suggest otherwise (as these two points do by giving DOE managers a chance to make that judgment) sets up a potential for dual, and potentially contradictory, guidance.


The paragraph also needs to exempt, explicitly, all risk and safety assessments being performed preparatory to the safety assessment that is to become part of a permitting or licensing process. Preparatory assessments ought to address the requirements of the regulator(s), not DOE requirements.


A case in point regarding this potential for dual or contradictory guidance concerns population dose/risk calculations: the proposed Bulletin mentions population dose and risk in many places. Particularly troublesome from a DOE Yucca Mountain project perspective, with calculations for up to a million years, is Section IV, item 4, on page 16 (as also reflected on page 24, item 6.d):


“4) When estimates of individual risk are developed, estimates of population risk should also be developed. Estimates of population risk are necessary to compare the overall costs and benefits of regulatory alternatives.”


For long-term potential dose/risk calculations, the international consensus is shifting away from population doses. The International Commission on Radiological Protection (and hence also the NRC and EPA, and even the National Academy of Sciences in its 1995 report on the bases for Yucca Mountain repository standards) acknowledges that, for long-term performance or risk assessments, future population estimates are highly speculative, making the usefulness of population dose estimates for far-future safety evaluations questionable. Such estimates are not required for Yucca Mountain repository performance assessments by either EPA or NRC.


In a case where it is appropriate for DOE to assess a population risk, that risk should not be estimated beyond a few hundred years in the future. Similarly, exposures of only the most exposed groups should be assessed (e.g., for emissions or releases from a facility). Exposures beyond a few tens of miles (DOE uses 50 miles as a guideline) from the release point add little to the comparison of alternatives but greatly increase uncertainty and complexity of the analyses.


A Yucca Mountain total system performance assessment is part of an important decision process, so would qualify as an “influential risk assessment,” but given the explicit exemption suggested above, the “special standards” for influential risk assessments would not apply to either its pre-licensing or licensing assessments.


APPENDIX

Sources of Guidance Related to DOE Dose or Risk Assessment


DEPARTMENT OF HEALTH & HUMAN SERVICES

Office of the Secretary

Washington, D.C. 20201

Ellen Mantus, Ph.D.

Study Director

Committee to Review the OMB Risk Assessment Bulletin

Board on Environmental Studies and Toxicology

The National Academies

500 Fifth Street, N.W., Washington, D.C. 20001

Dear Dr. Mantus:

We are pleased to forward HHS responses to the questions posed to federal agencies by the National Academy of Sciences Committee to Review the OMB Risk Assessment Bulletin. HHS is one of the federal agency sponsors of the Committee, and our responses are intended to assist the Committee in its deliberations.

As we indicated during the Committee’s recent workshop and public meeting, and expand upon in our enclosed responses, HHS supports the goals of the draft Bulletin: to improve scientific information and encourage the application of sound methodological practices and standards to relevant and appropriate scientific products and situations. The ability of the draft Bulletin to support accomplishment of those goals in a manner that is commensurate with scientific objectives and available science, while not overburdening agencies, will depend upon a number of factors, including its scope, definition, applicability, and requirements for federal agencies, as well as its implementation and interpretation.

Our attached responses to NAS questions reflect potential concerns in several areas. First, concerns exist that the scope and definition of risk assessment proposed in the draft Bulletin are broad and could encompass agency scientific, safety, and health information products and activities that were not previously considered formal risk assessments in the sense of the National Academy of Sciences’ earlier definition, and that do not fit such a model. Second, the draft Bulletin includes technical choices and decisions that one could make in developing a risk assessment, but these practices appear to be expressed as uniform requirements that could inadvertently preclude other appropriate scientific choices and approaches more commensurate with the intended scientific and health objectives and the available science. Third, depending on scope and applicability, it will be important to ensure that the Bulletin’s requirements do not impede our ability to develop and communicate urgent and time-sensitive health and safety information.

We hope that the attached responses will assist the Committee in its deliberations and we would be happy to provide additional information.

Sincerely,

Ann C. Agnew

Executive Secretary to the Department


U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES BACKGROUND INFORMATION ON THE NATIONAL RESEARCH COUNCIL’S REVIEW OF THE OMB RISK ASSESSMENT BULLETIN1

I. QUESTIONS FOR ALL AGENCIES POTENTIALLY AFFECTED BY THE OMB BULLETIN


A. General questions about current risk assessment practices


1. QUESTION: Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency’s risk assessments?


Of the thirteen Department of Health and Human Services (HHS) agencies and offices, only two, the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC), conduct risk assessments as the term generally is defined and understood by the NAS in its 1983 report, Risk Assessment in the Federal Government: Managing the Process. Based on the NAS definition, a risk assessment is a formal and quantitative assessment that includes: (1) hazard identification, (2) hazard characterization or dose-response assessment, (3) exposure assessment, and (4) risk characterization.


As noted below, both FDA and CDC use very similar conceptual approaches to risk assessment, although the different contexts (e.g., food, environmental, and occupational) necessitate differences in these agencies’ approaches. The National Institutes of Health (NIH) does not conduct risk assessments as the term is defined by the 1983 NAS report. Although NIH carries out components of risk assessments, none of the component products involves quantitative risk assessment (as discussed in Section 1.c).


Scientific assessments are designed and scaled to fit the public health problem at hand. The types of complex risk assessments contemplated in the Bulletin are not always appropriate given the magnitude and type of problem, affected population characteristics, available data, time constraints, or resources. As described more fully in our responses, HHS agencies and offices produce, evaluate, and synthesize a great deal of scientific information, including a number of types of health and safety assessments that do not follow the NAS paradigm and are not intended to result in the formal development of a risk estimate. In other words, unlike risk assessments, they do not develop a statement about the probability that populations or individuals with the exposure of concern will be harmed and to what degree.

1. The page numbers cited in our answers refer to the pages of the draft Bulletin as released by OMB for peer review and public comment.


a. FDA

FDA’s approach to the conduct of scientific assessments is specific to and dependent upon the scope and purpose of the particular need. FDA’s efforts include probabilistic quantitative risk assessments, safety assessments and qualitative risk assessments.


Probabilistic risk assessment. FDA conducts quantitative, probabilistic risk assessments, such as the one on Listeria monocytogenes in ready-to-eat foods, available at http://www.foodsafety.gov/~dms/lmr2-toc.html. Factors considered in determining whether a probabilistic quantitative risk assessment is appropriate include complexity, data availability, time frame, staff resources, and the level of certainty needed. With respect to the work of one FDA center, the Center for Food Safety and Applied Nutrition, the procedures used in the selection, commissioning, and conduct of “major” risk assessments (which would generally cover most probabilistic risk assessments of significant complexity) are available at http://www.cfsan.fda.gov/~dms/rafw-toc.html.


Uncertainty and variability. It may be helpful to clarify how FDA uses these terms. Uncertainty is typically thought to arise from a lack of data or information. Multiple sources of uncertainty are often considered to be relevant to scientific evaluations and techniques are available to account for or measure some of these uncertainties. Variability reflects the fact that all systems or populations have inherent, biological heterogeneity that is not reducible through further measurement or study. Sufficient knowledge is needed to account for both variability and uncertainty, but a key difference between them is that uncertainty reflects incomplete knowledge about a system or population that can be reduced with additional study.


State-of-the-art food safety risk assessment models, such as the Listeria monocytogenes risk assessment for ready-to-eat foods, use techniques that separately address uncertainty and biological variability. In other risk assessments, FDA identifies sources of uncertainty without clearly distinguishing between variability and uncertainty, because the level of precision needed from the risk assessment does not warrant this separation.
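The separate treatment of uncertainty and variability described above is commonly implemented as a nested (“two-dimensional”) Monte Carlo simulation: an outer loop samples uncertain model parameters, and an inner loop samples variability across a heterogeneous population. The following is a minimal illustrative sketch only; the distributions, parameters, and exponential dose-response form are hypothetical and are not drawn from any FDA model.

```python
import math
import random
import statistics

random.seed(1)

N_UNCERTAINTY = 200   # outer draws: uncertain model parameters (reducible)
N_VARIABILITY = 500   # inner draws: heterogeneity across individuals (irreducible)

population_mean_risks = []
for _ in range(N_UNCERTAINTY):
    # Uncertain quantity: a dose-response slope known only imprecisely
    # (the lognormal uncertainty distribution here is hypothetical).
    slope = random.lognormvariate(-9.0, 0.5)

    # Inner loop: inter-individual variability in exposure.
    individual_risks = []
    for _ in range(N_VARIABILITY):
        dose = random.lognormvariate(2.0, 1.0)    # variable exposure level
        risk = 1.0 - math.exp(-slope * dose)      # simple exponential model
        individual_risks.append(risk)
    population_mean_risks.append(statistics.mean(individual_risks))

# Spread across outer draws reflects *uncertainty*, which more data could
# narrow; spread within each inner loop reflects *variability*, which cannot
# be reduced by further study of the same population.
ranked = sorted(population_mean_risks)
lo, hi = ranked[10], ranked[-11]                  # approximate 90% interval
print(f"mean population risk: {statistics.mean(population_mean_risks):.2e}")
print(f"~90% uncertainty interval: [{lo:.2e}, {hi:.2e}]")
```

The design point the sketch makes is the one in the text: when the two loops are kept separate, the resulting interval describes how much the answer could change with better data, rather than mixing that question together with natural population heterogeneity.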


b. CDC


Formal, quantitative risk assessments that are consistent with the NAS 1983 definition are mainly conducted by CDC’s National Institute for Occupational Safety and Health (NIOSH) and the National Center for Environmental Health/Agency for Toxic Substances and Disease Registry (NCEH/ATSDR) in the areas of occupational and environmental health. CDC’s current risk assessment practices are congruent with the practices defined by the NAS. The methods used are data-driven and tailored to the research question under investigation. Typically, workplace exposures of concern are evaluated for their potential to cause harm to exposed individuals. Exposures could include chemicals, physical agents, energy, or other hazards in the workplace.


Either human or animal data, or both, are used for quantitative risk assessment, though the preference is to use human data whenever possible. The data are evaluated for applicability to dose-response analysis, availability of human studies, mechanistic information, etc. Dose-response analyses are conducted. Particular emphasis is given to evaluation of alternative models and sensitivity analyses, in order to assess the potential impact of varying data sets or modeling assumptions on the estimated risks.


Probabilistic risk assessment. CDC typically does not conduct probabilistic risk assessments by assigning probability distributions to inputs of a risk equation to generate probability distributions of risk. However, probabilistic methods are used when appropriate and useful, for example, when attempting to characterize uncertainty in exposure or in methodological studies assessing the statistical properties of proposed risk assessment methods. Full probabilistic risk assessments could potentially be developed in the future, given sufficient information on the distributions of exposures, metabolic enzymes, genetic susceptibilities, etc., in the population. However, these data are typically not available for occupational exposures. To date, CDC risk assessment practice has emphasized evaluation of alternative models and assumptions and sensitivity analyses of those models as methods for describing uncertainty, rather than probabilistic modeling.


Uncertainty and variability.

Uncertainty analysis is an essential component of every formal, quantitative risk assessment conducted by CDC. Uncertainty may be evaluated through analysis of different data sets, different endpoints in animal studies, different hypotheses about the precise mechanism of action, differing assumptions for extrapolation from animals to humans, lack of knowledge of current and historical exposures in human studies, and/or differing assumptions regarding statistical models for dose-response or metabolism. CDC risk assessment practice is to explore and describe these uncertainties, to the extent possible, and to evaluate the influence of these various factors on the risk estimates. Current risk assessment research at CDC also assesses the utility of model averaging techniques as a method for addressing model uncertainty in a meaningful and quantitative fashion.


CDC evaluates and describes uncertainty by analysis of alternative models and assumptions, via sensitivity analyses to describe the quantitative impact of alternative assumptions on estimates of risk, and by methods that consider variability and uncertainty of the models in an integrated framework (e.g., meta-analyses, Bayesian approaches). The end result is generally to report a range of plausible risk estimates that encompasses both uncertainty and variability.
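The practice of reporting a range of plausible risk estimates spanned by alternative models can be sketched in a few lines. The candidate models and their parameters below are hypothetical, chosen only to illustrate the idea of evaluating every defensible dose-response form at the exposure of interest and reporting the resulting spread rather than a single number.

```python
import math

# Hypothetical alternative dose-response models, each mapping an exposure
# level to lifetime excess risk. Parameters are illustrative only.
MODELS = {
    "linear":      lambda d: min(1.0, 2.0e-3 * d),
    "exponential": lambda d: 1.0 - math.exp(-2.0e-3 * d),
    "quadratic":   lambda d: min(1.0, 4.0e-5 * d * d),
}

def risk_range(dose):
    """Evaluate every candidate model at the exposure of interest and
    report the range of plausible risk estimates spanned by the set."""
    estimates = {name: model(dose) for name, model in MODELS.items()}
    return min(estimates.values()), max(estimates.values()), estimates

low, high, by_model = risk_range(10.0)
print(f"plausible excess risk at dose 10: {low:.1e} to {high:.1e}")
```

Comparing `by_model` entries is the elementary form of the sensitivity analysis described in the text: when the range is narrow, the risk estimate is robust to model choice; when it is wide, model uncertainty itself becomes a finding worth reporting.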


c. NIH

NIH does not conduct risk assessments as the term is defined in the 1983 NAS report. However, NIH does carry out components of risk assessments. Since none of the component products involve quantitative risk assessments, NIH does not conduct formal probabilistic analyses, uncertainty analyses or calculations of


variance per se. The National Toxicology Program (NTP) Report on Carcinogens (RoC) provides some indication of the strength of the evidence in its use of two categories of carcinogenic hazards (“known” or “reasonably anticipated to be”) and a category of “not listed” when an agent does not meet the criteria for the other two categories. Other NIH products may address uncertainty in their recommendations. Peer-reviewed publications for discovery research typically include a discussion of uncertainty and formal statistical analyses to support a particular conclusion.


2. QUESTION: Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.


a. FDA

Most challenges that FDA faces in conducting risk assessments are related to funding or resource scarcity rather than substantial scientific or technical issues. Part of FDA’s approach to commissioning risk assessments is to consider fully the specific scientific, technical, and informational challenges that are likely to be encountered and the feasibility of overcoming those challenges. The logistics of supporting risk assessment activities remain difficult and involve issues such as availability of staff expertise and availability of funding. Technical challenges include data gaps, lack of access to proprietary information, and the need to develop experimental protocols or models to generate needed data.


b. CDC

Historically at CDC, emphasis has been placed on the use of human data in formal, quantitative risk assessment; however, it is frequently difficult to obtain characterizations of exposures suitable for dose-response analysis. This problem is particularly acute in the case of carcinogens where, because of long disease latencies, the exposures of greatest concern may be decades prior to the conduct of the study, and records of exposure may be sparse to non-existent. In the case of animal studies, data is generally available on both exposure and response, but uncertainty exists as to how to model responses in the low-dose region and how to extrapolate the results from animals to humans.


c. NIH

The NIH does not currently conduct risk assessments as the term is set forth in the 1983 NAS report. However, if the OMB Bulletin definition were to be implemented as currently drafted and strictly applied across government, there may be many additional activities that will need to adhere to the procedures set forth in the Bulletin. Applying those procedures to activities that are not risk assessments now will pose significant challenges. For example, there may be challenges related to understanding active ingredients (e.g., herbal supplements) and providing clear descriptions of key concepts for emerging fields (e.g., gene therapy).


3. QUESTION: What is your current definition of risk assessment, and what types of products are covered by that definition?


Currently, HHS does not have a written definition of “risk assessment” that applies throughout the Department, but, like most other government agencies worldwide, we rely on the definition of risk assessment put forth by the NAS in its 1983 report, Risk Assessment in the Federal Government: Managing the Process. Based on this definition, FDA and CDC are the only agencies within HHS that currently conduct risk assessments.


a. FDA

FDA believes that “risk assessment” refers to a scientific or technical document that assembles and synthesizes scientific information and arrives at a qualitative or quantitative estimation of the extent to which a potential hazard exists and/or the extent of possible risk to human health, safety, or the environment. A complete risk assessment includes the examination of known or potential adverse health effects resulting from human exposure to a hazard. The generally accepted four-part risk assessment paradigm includes: (1) hazard identification; (2) hazard characterization or dose-response assessment; (3) exposure assessment; and (4) risk characterization. Under this definition, we would include only those FDA activities in which there is a formal development of a risk estimate expressed in a scientific or technical document.


b. CDC

CDC distinguishes between scientific assessments, for the purpose of hazard identification, and formal, quantitative risk assessments, for the purpose of characterizing risks numerically. Hazard identification studies often are focused narrowly on whether exposure to a given hazard is associated with significant injury or disease. That is, the response may be measured quantitatively, while the exposure is not. Such studies are valuable in identifying hazards, but do not lend themselves to quantitative exposure-response analysis and full risk characterization. Formal, quantitative risk assessments generally follow the NAS definition of exposure assessment, hazard identification, exposure-response analysis, and risk characterization. CDC has reserved the term “risk assessment” for quantitative analyses which follow the NAS paradigm; i.e., to full quantitative risk assessments, rather than applying the term to the sub-components of a full risk assessment.


CDC formal, quantitative risk assessments range from relatively brief documents – i.e., when only a very limited amount of data are available and only a simplistic analysis can be performed – to extensive analyses, when large amounts of data are available and extensive sensitivity analyses are conducted. For example, an epidemiological study may report only an average exposure and a summary measure of excess risk in an occupational cohort. Without access to the individual data from the study, only a simple linear analysis can be performed. A toxicologically-based risk assessment might also be very simplistic, if there is only one data set available and little controversy on how to extrapolate it to humans, e.g.,


for a site-of-contact irritant effect. In every case, an effort is made to develop both a central estimate of risk and statistical confidence limits, and to explore alternative models if the data are adequate to do so. On the other hand, an epidemiologically-based risk assessment using individual data from a large cohort, or a toxicologically-based risk assessment with multiple studies and/or multiple endpoints, could involve numerous models and an extensive exploration of the sensitivity of the final risk estimate to the various modeling assumptions. CDC formal, quantitative risk assessments are peer-reviewed and published in the scientific literature.


c. NIH

The NIH has no formal definition for risk assessment but regards the definition of risk assessment in the 1983 NAS report Risk Assessment in the Federal Government: Managing the Process as the authoritative source for the definition of risk assessment. Under this definition, none of the analytical reports and other products prepared by the NIH is considered a risk assessment.


4. QUESTION: About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?


a. FDA

The time required depends upon the complexity and nature of the risk assessment (and on how that term is defined). Factors that affect time to completion include the scope of data and analysis, the extent of public participation, and the nature of peer review.


b. CDC

Formal, quantitative risk assessments at CDC range from several months for very simple analyses to a year or more for very complex and extensive analyses. Additional time is required for publication of the analyses in official CDC publications or peer reviewed journals. However, there are activities conducted on very short timelines that would be considered risk assessments under the Bulletin’s proposed definition, for example, hazard assessments involving infectious exposures that are conducted under emergency situations.


c. NIH

The NIH does not currently conduct risk assessments as the term is defined in the 1983 NAS report. However, if the OMB Bulletin definition of risk assessment were to be implemented, it would take NIH longer to develop its products. The procedures of the Bulletin would add a number of steps but the specific length of time needed for each product would also depend on the complexity of the science and the complexity of the product’s message. In general, most guidance documents and information bulletins can be prepared in less than 18 months. The RoC takes approximately 2.5 years for each agent under review.


II. Questions about OMB’s definition of risk assessment and applicability


5. QUESTION: Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?


The Bulletin (page 9) states that “Documents that address some but not all aspects of risk assessment are covered by this Bulletin,” as is information that could be used for risk assessment purposes. Among the Bulletin’s specific examples of products that fall under the Bulletin are NTP substance profiles, ATSDR toxicological profiles, FDA tolerance values, and NIOSH current intelligence bulletins, criteria documents, and risk assessments. The Bulletin also refers to the “Surgeon General’s Report on Smoking and Health” and [NIH] “alerts to the public about the risks of taking long-term estrogen therapy” as risk assessments (page 5). However, under HHS’s current understanding of risk assessment, only a minority of ATSDR toxicological profiles and NIOSH current intelligence bulletins and criteria documents are considered risk assessments. All the other HHS information products that are cited are not considered to be risk assessments. For example, FDA tolerance values are considered to be determinations of an acceptable level of hazard associated with a particular exposure.


Under the Bulletin’s proposed definition of risk assessment, several HHS agencies, including NIH, CDC, and FDA, would be most significantly affected. However, other HHS agencies and offices (e.g., the Agency for Healthcare Research and Quality (AHRQ), the Substance Abuse and Mental Health Services Administration (SAMHSA), and the Office of Public Health and Science (OPHS)) would be affected as well. Because almost any health information product contains some discussion of safety or risk, such products could be interpreted to fall under the Bulletin’s definition of risk assessment.


For example, a major program area for AHRQ is the development of information on effectiveness and comparative effectiveness of health care interventions. To the extent that the effectiveness of an intervention is the demonstration that the benefits outweigh the risks, and that patient safety involves risks associated with the intervention, the Bulletin’s broad definition of risk assessment could be construed to apply. Thus, the Bulletin could impose major new requirements on AHRQ and other HHS agency information products.


a. FDA

First, the definition of risk assessment currently in the draft Bulletin could effectively include almost any scientific analysis or review conducted by FDA. Second, the current definition may be broad enough to encompass peer-reviewed scientific journal articles cited in support of a regulation or a public health advisory. FDA does not currently consider these types of documents to be risk assessments.


Third, some exposure or hazard assessments might be interpreted under the Bulletin to be “risk assessments” because of uncertainties created by the definition of “risk assessment” when read in conjunction with Section II. 1, “applicability,” which states that the Bulletin’s standards apply to “all agency risk assessments available to the public.” The Bulletin (at page 8) states that the definition of risk assessment applies to documents that “could be used for risk assessment purposes, such as an exposure or hazard assessment that might not constitute a complete risk assessment” (emphases added). The same difficulty reappears on page 9 which states that “Documents that address some but not all aspects of risk assessment are covered by this Bulletin.”


Fourth, the definition of “risk assessment” arguably applies to a wide array of documents or presentations. For example, a “PowerPoint” presentation could become a “risk assessment” if the printed slides are construed to be a “scientific or technical document.” A speech discussing risk assessment on a specific matter could similarly be argued to be a “risk assessment” if the speech is printed or reduced to a transcript, thereby creating a “scientific or technical document.”


Finally, while many FDA product approval activities may be excluded because they are “individual agency adjudications or permit proceedings,” the draft Risk Assessment Bulletin would, under a broad interpretation, continue to apply to other comparable FDA activities which, either by law or regulation, pertain to a class of products (as in the case of some risk classifications for devices). The draft Bulletin, at page 10, 2nd paragraph, expressly states that it does cover “risk assessments performed with respect to classes of products.” To make clear that this broader interpretation is not correct, to avoid costly disruptions to FDA activities relating to classes of products, and to avoid dissimilar treatment for agency adjudications that pertain to a class of products, the Bulletin should explicitly exclude agency adjudications or permit proceedings even where a class of products is involved. This approach is sensible where similar products raise the same issue of risk and where that issue relates to an agency adjudication or permit proceeding.


b. CDC

Risk assessment activities as defined by the NAS definition constitute a moderate portion of CDC’s overall activities and are largely conducted by NIOSH and NCEH/ATSDR. Under the definition of risk assessment proposed in the OMB Bulletin, much of the scientific work conducted by CDC could be subject to the Bulletin. This broader definition of risk assessment would now include epidemiologic or correlational studies, hazard reviews, site-specific studies, risk assessments reported in journal articles, and summaries of the scientific literature. The purpose of most of CDC’s scientific work is to describe patterns of disease, illness, and health behaviors. Although the intent and methodologies are not congruent with the NAS definition of risk assessments, such products could be interpreted to be covered by the Bulletin. This expansion of the NAS definition of risk assessment to include component parts of a risk assessment substantially


broadens the scope of CDC work products covered by the provisions of this draft Bulletin.


Under the NAS definition, few CDC documents focus exclusively on risk assessment. In many cases, CDC risk assessors collect the relevant exposure and hazard data, conduct the appropriate quantitative exposure-response and uncertainty analyses, and provide the risk assessment as one component of a broader health or safety document. Much of the core work of NIOSH and NCEH/ATSDR involves exposure and hazard assessments in large epidemiologic and toxicologic studies and smaller health hazard evaluations. The purpose of most of this work is to assemble and synthesize scientific information to determine whether a potential hazard to human health exists. In addition, CDC formal, quantitative risk assessments are sometimes conducted in response to requests from other agencies, such as OSHA, MSHA, or EPA. The risk assessment product, in that case, may be a published journal article, technical report, testimony, or comments to the requesting agency.


The draft Bulletin defines risk assessment as “a scientific and/or technical document that assembles and synthesizes scientific information to determine whether a potential hazard exists and/or the extent of possible risk to human health, safety or the environment.” The preamble specifies that “for the purposes of this Bulletin, this definition applies to documents that could be used for risk assessment purposes, such as exposure or hazard assessment that might not constitute a complete risk assessment as defined by the National Research Council.” CDC publishes Alerts, Hazard Reviews, Fact sheets, Information Circulars, Workplace Solutions, and other informational documents, many of which contain hazard identification, exposure assessment and occasionally quantitative risk assessment. These documents are not currently considered to be risk assessments, using the NAS definition, unless they contain full quantitative risk assessments. For example, NIOSH Criteria Documents frequently contain risk assessment information used to develop Recommended Exposure Limits (RELs) or other recommended standard provisions to protect worker health and safety. Although CDC has not considered all Criteria Documents to be risk assessments as defined by NAS, many do contain quantitative risk assessments. Current Intelligence Bulletins (CIBs), on the other hand, are more variable. Some CIBs contain quantitative risk assessments, but many offer only hazard assessment or exposure assessment information. Therefore, calling all CIBs and Criteria Documents risk assessments would be a new requirement for CDC.


Other examples of the CDC activities that would now be covered under the Bulletin’s proposed definition are the epidemiologic investigations that involve the analysis of available scientific information to determine the extent of risk to human health:

  1. Investigation into anthrax deaths and illnesses resulting from intentionally contaminated mail. Waiting for literature reviews of all available papers and peer-reviewed studies and development of the range of all scientific opinions and the


likelihood of all plausible alternative assumptions would have delayed recommendations for addressing the threat.

  2. Investigation into a recent outbreak of mumps illness and the development of new policies on vaccination of healthcare workers. In this outbreak, emergency input from the Advisory Committee on Immunization Practices (ACIP) and the Healthcare Infection Control Practices Advisory Committee (HICPAC) was obtained to assist in the development of new policy based on preliminary epidemiologic information obtained during the investigation of cases of disease. The changes in policy were then quickly published in the MMWR as real-time recommendations to control an ongoing outbreak.

  3. Investigation of the agent, risk factors, and epidemiology of SARS. This was a previously unrecognized disease, so there was no literature specific to this agent. However, as with the anthrax investigation, waiting for the development of all scientific opinions and addressing the range of all scientific opinions and plausible explanations would have delayed publication of interim analyses and the establishment of control measures.

Another example of work products that comprise a significant portion of CDC scientific activities is the “synthesis of scientific evidence,” which might now be included under the OMB definition of risk assessment but not under the NAS definition. Surgeon General Reports (SGRs) are an example of such activities, where the purpose is to provide clear and definitive conclusions about the strength of the science on the relationships between exposure to potential hazards, such as tobacco smoke, and health effects. SGRs have not been characterized as risk assessments. An SGR on tobacco use is cited in the Bulletin as a type of risk assessment that would be included under the OMB definition, and the Bulletin characterizes the widely adopted model of evidence review as “an actuarial analysis of real-world human data,” which does not reflect the methodology used for SGRs. The evidence review methods were established for the 1964 report and refined over the past twenty-nine reports. The evidence review methodology used by the SGRs has been widely cited as the gold standard for considerations of potential causality based primarily upon epidemiological data. This approach does not use probabilistic risk assessment but applies the well-defined rules of causality first listed in the 1964 report and more fully redefined in Chapter 1 of the 2004 SGR. Expert judgment regarding the consistency, strength, specificity, coherence, biological plausibility, and dose-response gradient of the evidence is obtained through a structured and systematic peer-review and senior scientific review process. As stated in many past SGRs, and as explicitly stated in Chapter 1 of the 2004 report, the SGR evidence review process separates the evidence review and determinations of causality from implications or policy recommendations. Chapter 1 of the 2004 report reviewed this methodology and the criteria for causality conclusions.


c. NIH

If the definition of risk assessment described in the OMB Bulletin were implemented as written and applied, the following reports and other products developed by the NIH could be considered risk assessments:


  • NTP Report on Carcinogens. The Report on Carcinogens (RoC) is an informational scientific and public health document, first mandated by Congress in 1978, that identifies and discusses agents, substances, mixtures, or exposure circumstances that may pose a hazard to human health by virtue of their carcinogenicity. The RoC is published biennially and has a formal preparation process that includes scientific peer review and multiple opportunities for public comment. This document is intended for hazard identification only (as clearly noted in its Introduction). However, under OMB’s proposed definition of risk assessment, the Bulletin specifically identifies this report as an “influential risk assessment” because it addresses “some but not all aspects of risk assessment” (page 9).


  • Guidance to Medical Professionals. NIH routinely develops and disseminates guidance documents intended to aid medical professionals in providing health care in the United States. These guidance documents address critical issues associated with patient care and professional safety, such as the proper methods for handling potentially hazardous biological material (e.g., human blood) and methods for reducing nosocomial infections. NIH staff generally prepares these documents after consultation with a broad array of medical professionals. The documents could fall under the Bulletin’s definition of risk assessment because they imply a hazard or risk if the guidance they provide is not followed.

  • Guidance to the Research Community. In a similar manner, the NIH also develops and disseminates guidance documents for laboratory researchers regarding the safe conduct of basic and clinical research. These guidance documents usually apply to emerging therapies (e.g., gene-based therapies), new technologies (e.g., use of recombinant DNA products), or the management of clinical trials. As with guidance to medical professionals, NIH staff generally prepares these documents after broad consultation. They may be subject to the Bulletin under its current definition of risk assessment because their guidance could imply or identify a hazard or risk.

  • Discovery Research with Direct Potential Impact on Risk Assessments. The Intramural Research Programs of the various NIH Institutes all conduct discovery research that may, on a case-by-case basis, have direct bearing on a risk assessment being considered by another federal agency. Of special note in this category would be clinical trials, epidemiology studies, and toxicology studies, which are large and generally carry considerable weight in agencies’ risk assessments. Even though these studies are intended for peer-reviewed journal articles or the NTP Technical Report series, the Bulletin’s new definition of risk assessment could include studies of this type.

  • Health Information Documents. These documents, generally prepared by NIH staff after outside consultation, are intended as guidance to the general public on health-related information. Most likely to fall under the broad definition of risk assessment in the Bulletin would be health alerts (e.g., on the safety of commonly used herbal supplements or of long-term estrogen use) or information on personal choices to lead a healthier lifestyle (e.g., modifying diet to control diabetes). Because these documents imply potential hazards or risks, they are included in the definition in the Bulletin.


III. Questions about type of risk assessment (tiered structure)


6. QUESTION: In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?


a. FDA

FDA understands the term “regulatory analysis” as OMB does—i.e., as meaning an assessment of the potential costs and benefits of a regulatory action pursuant to E.O. 12866 and OMB Circular A-4. For risk assessments as FDA currently uses the term (i.e., those that result in a formal development of a risk estimate), there is no clear demarcation between risk assessments done to support economic analyses and those done for other Agency purposes. In general, any economic analysis is done after the risk assessment is completed, and does not influence the risk assessment, except to the extent there are specific variables that economists need to include to perform a sound economic analysis.


b. CDC

Formal risk assessments conducted by NIOSH and NCEH/ATSDR are not specifically developed to support regulatory analysis (cost-benefit or cost-effectiveness analyses), nor is any distinction made in conducting risk assessments that may end up as part of another agency’s regulatory analysis. There are units within the agency with delegated regulatory authority, such as the Division of Global Migration and Quarantine (DGMQ), which has regulatory authority over migrating populations (immigrants, refugees, travelers, cargo, and animals). Specifically, DGMQ has regulatory authority (through delegation from the Secretary of HHS) to prevent the introduction, transmission, and spread of communicable diseases in the US, under 42 C.F.R. Part 70. Because of this, DGMQ regularly assesses findings for regulatory analysis and needs to conduct risk assessments to determine the best way to meet legal and regulatory responsibilities. Sometimes DGMQ needs to execute an action quickly, sometimes within minutes (e.g., a quarantine order if a planeload of passengers is sitting at an airport with an ill passenger on board), and there would not be time to comply with the proposed Bulletin. Even embargos usually need to be enacted rather quickly, within hours to days. Findings and assessments performed by DGMQ can lead to regulatory changes, in essence making DGMQ part of the regulatory decision-making process.


c. NIH

The NIH does not currently conduct risk assessments as the term is defined in the 1983 NAS report. In addition, the NIH is not a regulatory agency. Although some of the NIH scientific products previously discussed might be used by regulatory agencies for regulatory analysis, the NIH has no control over their use, and it is not evident at the outset of any NIH research whether a product will be used for regulatory analysis.


7. QUESTION: In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?


a. FDA

It is not always clear at the outset of a risk assessment that it will be influential. How FDA conducts and reviews a risk assessment depends on the nature of the risk assessment, FDA’s estimate of its significance (influential or “highly influential,” as those terms are defined by OMB), and the risk management decisions the risk assessment will support.


The distinction between “influential risk assessment” and other risk assessments derives from the Information Quality Act and the OMB and agency guidelines pursuant to it. FDA’s guidelines are posted as part of the HHS Information Quality website at http://aspe.hhs.gov/infoquality/Guidelines/fda.shtml#viic. Under FDA’s guidelines, scientific information is considered “influential” if it is:


disseminated information that results from or is used in support of agency actions that are expected to have an annual effect on the economy of $100 million or more or will adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local or tribal governments or communities.


As FDA’s Guidelines note, “the definition applies to ‘information’ itself, not to decisions that the information may support.”


b. CDC

NIOSH quantitative risk assessments are sometimes used by OSHA and MSHA to support rulemaking. NIOSH makes no methodological or scientific distinctions between “influential risk assessments” and other risk assessments. NIOSH risk assessments are grounded in the best available science and rely on the most robust methods available. NIOSH uses the same approach whether or not its assessments are used for, or are intended to inform, regulatory action. All NIOSH risk assessments are conducted with the goal of evaluating potential hazards or risks in order to protect workers’ health and safety.


IV. Questions about impact of the Bulletin on agency risk assessment practices


8. QUESTION: If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.


a. FDA

FDA has been a leader in conducting risk assessments that meet high standards for accuracy, transparency, and stakeholder involvement. Thus, the draft Bulletin will reinforce FDA’s commitment to high quality risk assessments.


b. CDC

The Bulletin provides many standards to which most agencies already adhere as good practice, at least where it specifies criteria to be applied to quantitative risk assessments. CDC’s formal, quantitative risk assessments reflect a philosophy and commitment to the same tenets described in the draft Bulletin, but these measures already represent CDC’s current practices. Specifically, CDC occupational risk assessments clearly state the informational needs driving the risk assessment and the objectives of the assessment. They also summarize the scope of the risk assessment, delineating the population of concern, and, when appropriate and necessary for the purpose of the risk assessment, they consider confounding factors. CDC typically reports multiple risk estimates, including central estimates and appropriate upper or lower bounds. Great effort is made to produce objective, scientifically defensible, accurate, reproducible, and transparent risk assessments.


Since CDC’s formal, quantitative risk assessments are frequently used by other agencies (OSHA and MSHA) to support regulatory actions, the risk assessments must not only meet the standards of scientific peer review but also be defensible in rulemaking hearings. Transparency and reproducibility help ensure that other scientists can determine the objectivity, scientific soundness, and accuracy of CDC risk assessments, and they enhance the credibility of CDC’s formal, quantitative risk assessments. CDC also strives to explain clearly the basis for all critical assumptions in its quantitative risk assessments, particularly when alternative assumptions might be used. When appropriate, CDC compares its results with other published risk assessments in the scientific literature, characterizes uncertainty and variability, and explains how choices of studies or effects influence the formal, quantitative risk assessment.


9. QUESTION: If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.


As drafted, the Bulletin’s definition of risk assessment could be interpreted to cover a wide range of health information disseminated by HHS. Most importantly, this may result in delays in the release of critical scientific data such as public alerts about serious risks to public health and patient safety. Further, the Bulletin’s proposed scope and requirements will involve substantial time, effort, and costs.


Given the Bulletin’s proposed definition of risk assessment, specific examples where provisions of the Bulletin may have substantial negative effects across HHS include:

  • Because most documents are presumed to be available to the public under the Freedom of Information Act (FOIA) (unless an exemption applies), making the Bulletin applicable to risk assessments that are available to the public (discussed on page 9, last paragraph, and also in Section II.1 on page 23) may result in agency resources being spent on satisfying the Bulletin even for analyses that are preliminary, not relied on, or rejected by the agency.

  • The Bulletin (on pages 13–14) appears to conflict with previous OMB Bulletins as to whether risk assessments must meet the provisions of the Safe Drinking Water Act (SDWA) or whether agencies can adapt SDWA provisions as appropriate and as OMB currently allows under the Information Quality Guidelines.

  • Detailed requirements concerning statements about assumptions and placing risks in context and perspective with other risks would create additional requirements on risk assessors and may require judgment outside their area of expertise.

  • The requirement (at Section V.2 (pages 17 and 25)) directing agencies to find and examine previously conducted risk assessments “from qualified scientific organizations” and to “compare these risk assessments to the agency’s risk assessment” will impose additional requirements to evaluate the scientific rigor and aims of the comparison studies and the expertise and objectivity of their contributors.

  • The requirement at pages 20 and 25 that influential risk assessments identify the nature, difficulty, feasibility, cost, and time associated with undertaking research to close data gaps and remedy limitations goes well beyond the likely expertise of the risk assessors themselves, will require additional resources, and will not improve the quality of the risk assessment itself.

a. FDA

If a broad definition of risk assessment is used, application of the Bulletin may result in substantial increases in the time and resources needed to conduct scientific analyses that are not currently considered to be risk assessments. Considering the limited resources currently available, this would decrease resources for groups performing non-risk-assessment scientific reviews and create a disincentive for conducting thorough scientific analyses of a non-risk-assessment nature. Since many scientific reviews we conduct do not develop formal risk estimates, the reviews cannot specify:


  • Multiple estimates of risk;

  • Appropriate upper-bound or lower-bound estimates of risk;

  • Each significant uncertainty in the risk assessment process and studies that would assist in resolving the uncertainty; and

  • Peer-reviewed studies which support or fail to support the risk estimates.


The issue of scope is also of concern if, as some have suggested to the NAS, the Bulletin expands to include the activities of “individual agency adjudications or permit proceedings.” For example, FDA conducts approximately 100 food contact substance notification reviews annually. If these reviews fell within the definition of “risk assessment” and had to comply with the standards of the Bulletin, it would be very difficult for us to meet statutory timelines for response.


More specific examples where the Bulletin may have substantial negative effect include:

  • The definition of “risk assessment” (Section I.3), when read in conjunction with Section II. 1 (“Applicability”), could transform documents into “risk assessments” under the Bulletin when neither the agency nor the document’s authors intended the document to be a true risk assessment.

  • The section on updating risk assessments (Section VI on pages 21 and 25) may impose an additional burden on agencies where risk assessments are used to support regulations. Outside parties may attempt to use the update provision as a mechanism to challenge existing rules – not the underlying risk assessment. However, regulations that impose a significant economic impact on a substantial number of small entities are already subject to periodic review under the Regulatory Flexibility Act, and FDA believes that mechanism, which provides for a more comprehensive review at ten-year intervals, is more appropriate and better satisfies the need for periodic review.

b. CDC

The largest and most direct negative impact of the draft Bulletin is its proposed broad definition of risk assessment. By utilizing a definition of risk assessment that includes not only quantitative risk assessment but also all the individual studies that may eventually lead to a risk assessment (e.g., exposure assessment, epidemiology studies, toxicology studies, and hazard identification and evaluation), the Bulletin creates confusion in the scientific community by holding a broad array of supporting science to the same standards and requirements as a full quantitative risk assessment. Such standards cannot be applied practically to studies that identify hazards or provide the groundwork for more formal quantitative risk assessment. The Bulletin’s definition of risk assessment dilutes the effectiveness of such standards and could severely limit the number of supporting studies available for quantitative risk assessment by directing their authors to conduct repetitive and unnecessary analyses. These additional activities would slow the pace of research and require extra resources. Unlike formal, quantitative risk assessments, public health assessments and health consultations incorporate subjective evaluation and best professional judgment. Many of the requirements outlined in the OMB Bulletin are not applicable to this qualitative decision-making process.


Particular examples where this provision may not be appropriate include occupational and environmental health hazard evaluations, public health emergency outbreak investigations (e.g., Epi-Aids), and epidemiologic studies. Health hazard evaluations are a public health practice activity designed to solve problems at worksites and to prevent occupational disease among workers at those sites. Specifically, they are investigations to learn whether workers are experiencing work-related health effects or are exposed to hazardous materials or harmful conditions. CDC scientists conducting health hazard evaluations perform exposure and health assessments, often making judgments based on professional expertise and expert knowledge of industry practices. Occupational health hazard evaluations are designed to be rapid evaluations providing employers, employees, and employee representatives relevant information related to the prevention of occupational exposures and work-related health effects. Applying the standards in this draft Bulletin would be unworkable for health hazard evaluations because of the need to provide results and recommendations in a timely manner.


Another core function of CDC that does not meet the NAS definition, but would be subject to and likely impacted by the standards of this draft Bulletin, is industry-wide exposure assessment and epidemiologic studies. CDC scientists conduct industry-wide field studies to determine the incidence and prevalence of acute and chronic disease in a working population or their offspring and to determine the nature and extent of exposure to potentially hazardous agents in the work environment. The epidemiologic studies conducted by CDC do not establish definitive risk levels but provide supporting evidence for the presence or lack of an association between exposure and adverse outcome. The broad application of this draft Bulletin to such industry-wide epidemiological field studies would be problematic because collecting data for formal risk characterizations in these studies would be impractical and resource prohibitive.


CDC already operates by many of the “standards” as best practices for formal risk assessments, and by best practices appropriate to other scientific activities, making it difficult to assess the impact of other provisions of the Bulletin, particularly for formal, quantitative risk assessments. However, characterization of every possible uncertainty and extensive evaluation of the effect of each assumption and derived parameter on the final risk estimate could result in a confusing, less straightforward document that is entirely consistent with the language in this draft Bulletin but would not serve the public or the risk assessment community well.


The Bulletin seems to equate all types of epidemiologic studies with risk assessments, regardless of whether numerical estimates or a quantitative model is included in the epidemiological evaluation. The broad application of the Bulletin to epidemiologic studies would be problematic and impractical. Epidemiologic studies do not establish risk levels. For example, on page 6, under “Actuarial analysis of real-world human data,” the relevance of etiologic epidemiologic methods is mischaracterized as “such assessments by combining actuarial analyses with biologic theory and medical expertise.” The activities of defining cohorts, conducting exposure assessment, ascertaining outcomes, and properly analyzing the association of exposure and outcome while controlling for confounding and evaluating effect modification in epidemiologic studies provide supporting evidence for the presence or lack of an association between exposure and illness.


This issue is especially of concern for emerging hazards, where a complete package of toxicologic and epidemiologic data may not yet exist. The draft Bulletin notes that “risk assessments of infectious agents pose special challenges since the rate of diffusion of an infectious agent may play a critical role in determining the occurrence and severity of an epidemic” and that “Scientific understanding of both biology and human behavior are critical to performing accurate risk assessments for infectious agents.” However, in addition to the fact that the source of an infectious agent is not static but instead increases and spreads as time elapses (especially for diseases with very short incubation periods of hours to days), there are several other important factors that the Bulletin does not take into consideration:

  1. Federal investigations into risks of infectious diseases often involve those diseases with high or undefined morbidity and/or mortality.

  2. The risk of developing signs and symptoms of disease can occur after even one exposure to undetectable “levels” of the infectious agent.

  3. The need to stop exposures from occurring, and thus limit illness and death, requires rapid analysis of the source of the infectious agent and of the risk factors for exposure and development of disease, with rapid public health action taken on the basis of incomplete and evolving data.

  4. The public expects and demands that provisional recommendations on means to stop or mitigate the spread of infectious disease be made available as quickly as possible. The goal of scientific completeness must be balanced with the cost of inaction.

  5. Recommendations may evolve over time as more information is obtained, but control efforts cannot wait for completion and certification of each step required by this Bulletin.

  6. Infectious disease risk assessments typically are published later as scientific articles in peer-reviewed journals and/or are presented at scientific forums. It is in these venues that scientists present background data from literature reviews and their methodology, report the range of their results, and discuss in detail any uncertainties and comparisons with the results of others.

  7. Surveillance for infectious diseases and the publication of surveillance data, while used to understand “risk,” are conducted using established case definitions, which serve to enhance objectivity. Interrupting this process for certification would jeopardize timely data collection and potentially undermine a critical public health activity.

  8. Immediate data collection on adverse events must occur during an “event.”

An important part of the health assessment process is our ability to communicate the findings of our activities to the public. We have learned that the public is often confused by the complexities of health science and wants clear, understandable, and accurate answers. The Bulletin directs all risk assessments to include an evaluation of alternative models, provide a range of risk estimates using different exposure assumptions, discuss sources of uncertainty, conduct a sensitivity analysis, and so on. The inclusion of such information would make it more difficult to develop a clear, user-friendly, unambiguous message for the public.


The Bulletin fails to allow for additional emerging statistical methodologies beyond the expert panel method. Additionally, the requirement to address only clearly adverse human health effects may have unintended public health consequences because of the increasing importance of biomarkers.


It is unclear whether this draft Bulletin is intended to apply to journal articles. If so, the requirements would be impractical. It would be helpful if the final Bulletin included a specific exclusion for journal articles, similar to the language in the Peer Review Bulletin.


c. NIH

Given the Bulletin’s proposed definition of risk assessment, some activities carried out by the NIH would be significantly impacted. Risk assessments should be considered special documents, since their entire focus is the integration of materials into a common document that predicts uncertain risks. In some cases it is possible to provide sufficient information to judge the uncertainty of the eventual predictions; in others it is not. This is not an issue for each single piece of evidence entering a risk assessment, and these individual parts should not be subject to guidelines solely intended for risk assessment documents. Of special concern to the NIH are guidelines developed by Institutes within the NIH to inform the public, and studies conducted by NIH researchers to identify factors affecting human health that play a pivotal role in risk assessments (such as epidemiology and toxicology studies). In the opinion of the NIH, these do not, on their own, constitute risk assessments and should be explicitly excluded from the Bulletin.


If these types of activities are considered risk assessments and subject to the requirements of the Bulletin, it will have serious ramifications for the ability of the NIH to provide the basic scientific evidence needed by other agencies to develop scientifically sound risk assessments. More importantly, if the research and scientific activities supported by the NIH are required to undergo the full process suggested by the Bulletin for risk assessments – even though they are not risk assessments – the NIH would face serious challenges in providing the public with timely scientific research to inform public health decisions.


10. QUESTION: If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why.


a. FDA

It is difficult to answer this question given the uncertainty surrounding the draft’s current definition of “risk assessment.” Following the Bulletin’s procedures will likely not affect the time required for many complex probabilistic risk assessments because, for those assessments, we currently follow practices that are consistent with the Bulletin. However, the Bulletin could affect the time course for production of other types of scientific review that, although we do not consider them to be “risk assessments,” might nonetheless be considered as falling under the proposed Bulletin.


Additionally, particularly for agencies that sometimes deal in controversial issues, the draft Bulletin may result in challenges as to whether an agency interpreted the Bulletin correctly. For example, for influential risk assessments, the draft Bulletin would compel an agency to respond to “significant” comments and would presume that “scientific” comments are “significant” (see Section V.9). It also expects agencies to prepare a “response-to-comment” document that “should provide an explicit rationale for why the agency has not adopted the position suggested by the commenter” and “why the agency position is preferable.” Section V.9 may adversely affect risk assessment practices at FDA by diverting resources to respond to “significant” comments or to resolving disputes as to whether a comment was “scientific” or “significant.” There may be instances where parties (particularly competitors) may disagree over the “science” to be applied, whether a comment is “significant,” or even whether conventional scientific concepts are applicable or recognizable. In the latter case, individuals or firms advocating the use of “unconventional” or “alternative” therapies may have their products fall within the broad, statutory definitions of “drug” or “device” in the Federal Food, Drug, and Cosmetic Act and would argue that individuals trained in “conventional” science or medicine are either biased or not qualified to evaluate the merits of their products. In such situations, the “science” to be used or considered could become an issue under the Bulletin. One way to avoid such disputes would be to respond to all comments, but this would place even more demands on limited agency resources for little or no apparent benefit.


As another example, the draft Bulletin, at Section VIII (pages 22 and 26), “Deferral and Waiver,” allows an agency head to waive or defer “some or all of the requirements…where warranted by compelling rationale.” If an agency defers meeting a requirement, the draft Bulletin states that the agency shall comply with the risk assessment requirements “as soon as practicable.” However, the Bulletin provides little or no insight as to how an agency would justify a deferral or waiver, and it is unclear who decides whether an agency’s rationale is “compelling” or whether agencies may be challenged on this issue.


b. CDC

If the Bulletin were to apply only to formal, quantitative risk assessments as defined by NAS, and if the goals and “standards” in the draft Bulletin were treated as “best practices,” implementation would not substantially slow the pace of producing risk assessments. However, if a more prescriptive implementation were followed and separate peer reviews were required for the risk assessment outside the context of the document for which it was intended, the result could be a substantial slowing of the risk assessment process.


One recurring message from the public and other agencies is the importance of releasing information in a timely manner. This is especially important for public health assessments and health consultations. If the public’s health is at risk, messages must be disseminated as quickly as possible so that appropriate corrective actions can be implemented. Current CDC scientific activities that would now be considered risk assessments under the proposed OMB definition would be subject to the extensive set of requirements for formal, quantitative risk assessments, posing challenges to the timeliness of information release. It would take considerable time and effort for health assessors to address all of the requirements of this Bulletin, and compliance could therefore significantly delay the release of our documents, with potential adverse consequences for public health.


Additionally, the Bulletin implies that in certain circumstances agencies should await further research to attain “scientific completeness” before conducting a risk assessment. The goal of scientific completeness must be balanced against the cost of inaction. Agencies must retain the ability to conduct the best possible risk assessment with the best available data, including appropriate uncertainty caveats, to address urgent public health needs.


For supporting studies not previously considered risk assessments, the new and time-consuming requirements could substantially slow delivery and use when formal, quantitative risk assessments are conducted, with a substantial impact first on the quantitative risk assessor and then on the decision maker.


If public health assessments and health consultations were included in the scope of the Bulletin, application of all aspects of the Bulletin to site-specific activities would be challenging and resource intensive and could potentially have a deleterious impact on public health, because these activities are often performed under circumstances requiring rapid communication of results to affected parties.


Implementation of the proposed Bulletin also could result in substantial delay in the release of Surgeon General’s Reports (SGRs). Over the history of the twenty-nine SGRs on Smoking and Health, for example, the time required to complete the reports has varied from less than a year to over 5 years. The rapid expansion of the scientific peer-reviewed literature in recent years has added to the technical and scientific challenge of completing the SGRs in a timely fashion. The time from completion of the review volume to publication of the final SGR now runs from 12 to 18 months. If SGRs were subject to the requirements described in the Bulletin, the current challenge of timely dissemination would be exacerbated. The language in the Bulletin implies that the review process for such work products should be subject to public comment from all parties. Adding a requirement for public participation and comment to this process would likely generate a large volume of comments, which would affect the timeliness of the reports without improving their scientific quality.


c. NIH

The NIH does not currently conduct or use risk assessments according to the definition set forth in the 1983 NAS report. However, the NIH is concerned that, if strictly applied in its current form, the Bulletin may have a negative impact on the conduct of NIH research programs. The products outlined previously (in Answer 5c) that might be subject to the procedures of the Bulletin have historically been developed as independent, science-driven, public health documents or as research activities. If these activities are required to undergo the full process outlined in the Bulletin for risk assessments, the NIH will face serious challenges in providing the public with timely scientific research to inform public health decisions. The provisions in the Bulletin are likely to blur the distinction between independent scientific review and research funded by the NIH and the use of that information in making public health policy decisions.


11. QUESTION: One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.


HHS scientific products are currently based upon a complete and thorough review of the science giving weight to both positive and negative studies.


a. FDA

FDA already considers both positive and negative studies, as well as their quality, when it reviews the full range of scientific research. Before beginning a risk assessment, it is appropriate to establish criteria for including or excluding studies from the assessment. If needed, weighting factors based on the quality of the study data can also be established or determined before the risk assessment is conducted.


Consequently, FDA does not believe the draft Bulletin’s reporting standards on scientific objectivity will necessitate any significant changes to existing FDA practices.


b. CDC

CDC typically evaluates all relevant studies of acceptable quality in its quantitative risk assessments. Negative studies are evaluated along with positive studies in forming a risk assessment strategy. Specifically, negative studies are assessed in light of the quantitative risk assessment to determine whether they are consistent with it and to flag any inconsistencies in the database that might warrant explanation. The SGRs have always considered all available peer-reviewed literature, weighing all positive and negative studies; this consideration of the consistency of the data has always been central to the SGR evidence review process. Another example arises when a positive rat cancer bioassay is compared with one or two negative epidemiology studies. When quantitative information on dose is available, it is possible to compare the animal and human studies and determine whether the epidemiology studies had sufficient power to detect the dose response observed in animals. If they did, and the results are incompatible, that might suggest looking more closely at mechanistic information or species susceptibility. If the epidemiology studies did not have sufficient power, or the statistical analyses suggest no inconsistency, greater confidence can be placed in using the animal data to predict human response, even in light of the nominally “opposite” human results.
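The power check described above, asking whether a negative epidemiology study was large enough to detect the effect predicted from animal data, can be sketched in a few lines. This is an illustrative sketch only, not CDC’s actual procedure, and all numbers (the predicted relative risk of 2.0, the standard error of 0.5) are hypothetical:

```python
# Illustrative sketch -- not CDC's actual method. It asks whether an
# epidemiology study of a given size had sufficient statistical power to
# detect the relative risk predicted from an animal dose-response model.
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_to_detect(rr_predicted, se_log_rr, z_alpha=1.645):
    """Approximate power of a one-sided test (5% level by default) to
    detect a predicted relative risk, given the standard error of
    log(RR) implied by the study's size (normal approximation)."""
    z = math.log(rr_predicted) / se_log_rr - z_alpha
    return normal_cdf(z)

# Hypothetical case: animal data predict RR = 2.0 at the observed exposure;
# the epidemiology study's size implies se(log RR) = 0.5.
power = power_to_detect(2.0, 0.5)
if power < 0.8:
    print(f"Power {power:.2f}: a negative result is weak evidence against the animal prediction")
else:
    print(f"Power {power:.2f}: a negative result genuinely conflicts with the animal prediction")
```

Under these hypothetical inputs the study is underpowered, so a negative human result would not contradict the positive bioassay; a larger study (smaller standard error) would shift the conclusion.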


However, if the Bulletin is interpreted to mean that an agency should give some sort of quantitative weight to each study based on an evaluation of its technical quality, this is not currently done at CDC, nor is it clear how it could be objectively implemented. Weighting based on perceived technical quality would require substantial professional judgment, and carrying that quantification into some sort of adjustment of the final risk numbers is not technically feasible at this time.


12. QUESTION: Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?


a. FDA

FDA, on occasion, does rely on risk assessments conducted by external groups, including contractors. We also may receive documents or submissions during notice-and-comment rulemaking or in adversarial proceedings that could be considered “risk assessments” as defined by the draft Bulletin. As the FDA does with any scientific information, we would evaluate any such risk assessments to determine whether they may appropriately be considered in our decisionmaking pursuant to the Information Quality Act and other legal authorities.


b. CDC

Although CDC does not typically contract its quantitative risk assessments, it does evaluate others’ risk assessments of the same hazard and compare them to CDC assessments. If the draft Bulletin were to be applied to government agency risk assessments, it would be helpful if quantitative risk assessments conducted by external groups also met the same requirements when those assessments are used by a government agency.

ADDITIONAL NAS QUESTION FOR FDA

12. QUESTION: Dr. Galson indicated at the public meeting that there were problems with the application of OMB requirements to certain types of assessments. Can FDA suggest specific language to exclude those problematic assessments from OMB requirements, rather than just offering examples of those assessments? In other words, how would FDA describe in general terms the types of assessments it would like to see excluded?


Risk assessment is a tool for addressing public health problems and should be scaled to fit the public health problem at hand. Risk assessments conducted according to the proposed Bulletin’s definition and requirements may not always be appropriate, depending on the magnitude of the problem, time constraints, resources, affected-population characteristics, or available data. Rather than describing in general terms the types of assessments we would like to see excluded, we suggest three significant revisions to the text of the draft Bulletin to clarify and better define what should be included.


First, we propose that the Bulletin’s definition of “risk assessment” be revised so that it is consistent with the definition of that term in the domestic and international risk assessment communities. OMB’s draft definition is broad because it defines “risk assessment” as “a scientific and/or technical document that assembles and synthesizes scientific information to determine whether a potential hazard exists and/or the extent of possible risk to human health, safety or the environment” (page 8). We suggest that “risk assessment” should instead refer to a “scientific and/or technical document that assembles and synthesizes scientific information and arrives at a qualitative or quantitative estimation of the extent to which a potential hazard exists and/or the extent of possible risk to human health, safety, or the environment.” A risk assessment does not, itself, determine the presence of a hazard (as the Bulletin’s proposed definition implies). Furthermore, not all syntheses of scientific information are risk assessments. This change would remove from the purview of the Bulletin several types of activities that we believe are not appropriately brought under it.


Second, we recommend that the Bulletin define or interpret “scientific information” as it relates to the definition of “risk assessment.” The definition of “scientific information” should be relatively narrow, encompassing “factual inputs, data, and models.” While ordinarily there may be a common understanding of the meaning of “scientific information,” our recommendation results from the existence of a broad definition of the term in the OMB Peer Review Bulletin (70 FR 2664 (January 14, 2005)). There, “scientific information” is defined as “factual inputs, data, models, analyses, technical information, or scientific assessments based on the behavioral and social sciences, public health and medical sciences, life and earth sciences, engineering, or physical sciences” and includes any communication or representation of “knowledge, such as facts or data….” Clarifying the definition of “scientific information” in this particular context would help avoid application of the risk assessment Bulletin to FDA documents and other federal documents that are not appropriately considered “risk assessments” either by their authors or by the scientific community.


Third, we recommend that OMB revise the Bulletin to be more consistent with the OMB Peer Review Bulletin in terms of scope and exemptions. The Peer Review Bulletin, for example, applied to “influential scientific information” and imposed additional requirements on “highly influential scientific assessments,” and its requirements applied to information that an agency disseminated to the public. In contrast, the draft risk assessment bulletin would apply to all risk assessments regardless of whether they are disseminated2 or influential. In terms of exemptions, the Peer Review Bulletin contained an express exemption for “[a] health or safety dissemination where the agency determines that the dissemination is time-sensitive….” The Draft Risk Assessment Bulletin, however, omits a “time-sensitive” health or safety exception and provides only a weak agency deferral and waiver authority that requires the agency to comply with the Bulletin’s requirements as soon as practicable. The Draft Risk Assessment Bulletin also omits exceptions for regulatory impact analyses and for negotiations involving international trade or treaties where compliance with the Bulletin “would interfere with the need for secrecy or promptness;” such express exemptions existed in the Peer Review Bulletin. As with the suggestions noted above, this clarification would help protect against unintended application of the Bulletin to less relevant activities.

2

The draft risk assessment bulletin reaches more documents because Section II.1 uses the phrase “all agency risk assessments available to the public…” (emphasis added) and because the bulletin explains that “available to the public” includes documents that are made available to the public or required to be disclosed under the Freedom of Information Act (FOIA). Note, however, that the OMB Peer Review Bulletin and the Information Quality Act use a different construct: the information must be “disseminated” to the public. It would be easier for agencies, OMB, and interested parties to use the same construct as the OMB Peer Review Bulletin and the Information Quality Act, so that peer review and risk assessment standards apply to information “disseminated” to the public. A single interpretation would be easier to implement, easier to understand, and would avoid disputes as to whether one or both bulletins applied to an agency document. We also believe that the draft risk assessment bulletin’s “available to the public…under the Freedom of Information Act” construct is impractical because, in general, the presumption is that everything is available under FOIA unless an exemption applies. Consequently, even a preliminary risk assessment that the agency does not rely on and has rejected would be subject to the risk assessment bulletin because it could be “available to the public” under FOIA.



DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT RESPONSE TO THE BACKGROUND INFORMATION ON NRC REVIEW OF THE OFFICE OF MANAGEMENT AND BUDGET RISK ASSESSMENT BULLETIN
QUESTIONS FOR ALL AGENCIES POTENTIALLY AFFECTED BY THE OMB BULLETIN
July 26, 2006

General questions about current risk assessment practices

  • Current risk assessment practices.

    • HUD does not conduct probabilistic risk assessments, but rather uses data to focus on the central tendency of the data, or the central estimate, typically means or medians. This is largely because the data are not amenable to aggressive statistical manipulation.

    • HUD addresses uncertainty analysis where the data are amenable to the required statistical analysis.

    • HUD currently addresses uncertainty and variability in risk assessments by describing the confidence level of the mean (either arithmetic or geometric).

  • Substantial scientific or technical challenges of risk assessments.

    • There is substantial variability in housing stock. These variables include: construction methods and materials; age; maintenance, repair, and renovations; climate and meteorological impacts; design and operation of plumbing, electrical, heating, ventilation, and air conditioning systems; ownership, occupancy, and uses; socio-economic factors; and state and local building codes and enforcement.

    • Most existing housing risk-related research not sponsored by HUD or other Federal agencies is limited in scope and not readily applicable to national impacts.

    • Congressional authority and appropriations may limit the scope of research to support the risk assessment.

    • Privacy concerns and the general information collection requirements associated with gathering the necessary data are often restrictive and/or cumbersome.

    • Because HUD does not conduct many risk assessments, it cannot support full-time-equivalent staff for the analyses. It is therefore necessary to seek outside support to complete the requisite research and analysis for the risk assessment, and there are a limited number of qualified contractors available.

  • HUD’s current definition of risk assessment, and associated products.


HUD Response to National Research Council Questions

Review of the OMB Risk Assessment Bulletin

  • HUD relies on OMB Circular A-4 for its risk assessments.

  • HUD also addresses Congressional and Executive requirements, such as:

    • the Small Business Regulatory Enforcement Fairness Act (SBREFA);

    • the Regulatory Flexibility Act;

    • Unfunded Mandates Reform Act (UMRA);

    • Executive Order 13132, entitled Federalism (64 FR 43255, August 10, 1999);

    • Executive Order 13175, entitled Consultation and Coordination with Indian Tribal Governments (59 FR 22951, November 6, 2000);

    • Executive Order 13045, entitled Protection of Children from Environmental Health Risks and Safety Risks (62 FR 19885, April 23, 1997);

    • National Technology Transfer and Advancement Act of 1995 ("NTTAA"); and

    • Executive Order 12898, entitled Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations (59 FR 7629, February 16, 1994).

  • Risk assessments time frames.

    • The period to complete an original risk assessment is usually two years from the time the need for the assessment is identified until a final work product is available for public comment. In some cases, this can be shortened to about six months where an original risk assessment can be adapted for amendments to existing regulations.

Questions about OMB’s definition of risk assessment and applicability

  • New risk assessment not previously considered if HUD uses the proposed OMB Bulletin definition.

    • Because inspections and adjudications are explicitly not covered by the proposed bulletin, HUD believes that no additional programs will require risk assessments.

    • HUD supports these exclusions, although additional clarification of the definitions may be helpful.

Questions about type of risk assessment (tiered structure)

  • Demarcation between HUD risk assessments used for regulatory analysis and other analyses.

    • Environmental and health issues which may warrant risk assessments for regulatory analysis are limited to a few programs. Analysis for most programs is limited to risks associated with significant economic impact.


  • HUD current demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes.

    • As for the case discussed in the previous answer, environmental and health issues which may warrant risk assessments for regulatory analysis are limited to a few programs. Analysis for most programs is limited to risks associated with significant economic impact.

Questions about impact of the Bulletin on agency risk assessment practices

  • Provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by HUD.

    • HUD anticipates that meeting the additional cost and time requirements when using risk assessments will improve the quality, conduct, and use of risk assessments.

  • Provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by HUD.

    • HUD believes the cost and time effects will not have substantial negative effects.

  • Effect on the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker) if HUD followed the procedures described in the Bulletin.

    • HUD believes that the time course will have to be extended to ensure the procedures are properly followed.

  • Please give an example of how HUD would implement the Bulletin’s requirement for scientific objectivity by “giving weight to both positive and negative studies in light of each study’s technical quality.”

    • HUD has always considered positive, negative, and inconclusive studies in its regulatory risk analysis.

    • To enhance transparency, HUD would ask authors to identify the sources of funding for all research studies. This would aid in evaluating what weight to give each study, whether positive or negative.

  • HUD’s use of risk assessments conducted by external groups.

    • HUD welcomes the submission of risk assessments by external groups, and meeting the requirements proposed in the OMB Bulletin would give added weight to their consideration. HUD will still look for peer review to ensure, among other considerations, that the study was not biased by financial support from a stakeholder that had taken a strong position before the study was initiated.


ADDITIONAL ISSUES FOR HUD

  • Consideration of baseline conditions when evaluating alternative mitigation options for regulatory analysis may be inappropriate where there is a statutory requirement.

  • A definition of, or guidelines for determining, what constitutes a significant comment would be useful for consistency.

  • Organizational structure, administrative procedures and statutory authority may affect the deferral and waiver process.


U.S. DEPARTMENT OF THE INTERIOR COORDINATED AGENCY RESPONSE ON OMB’S PROPOSED RISK ASSESSMENT BULLETIN
Prepared by: Office of Policy Analysis (PPA)

Introduction

The Department of the Interior (DOI) has reviewed the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs (OIRA) proposed risk assessment bulletin, along with the accompanying set of questions developed by the National Research Council (NRC) of the National Academy of Sciences.


The DOI’s comments are provided below. The attached comments represent DOI’s best effort to compile information from all Interior agencies and are not necessarily comprehensive, given the short time frame allowed for a response. Comments on the proposed Bulletin and answers to the questions posed by the NRC could be developed in greater detail given additional time.


Many DOI activities appear to be outside the scope of the proposed Bulletin, given the exemptions described in the section titled “Requirements of This Bulletin.”


July 31, 2006

Draft DOI Response to OMB Risk Assessment Bulletin


AGENCY RESPONSES TO THE PROPOSED BULLETIN


Bureau of Reclamation (BOR)


General Comments:


The Bulletin focuses on risk assessment with regard to human health, safety, and the environment; the failure analysis of physical structures is addressed only to a limited extent. Consider expanding the discussion of risk assessments for physical structures to include the integration of scientific data, simulation and analysis data, failure analysis, and expert elicitation (where expert elicitation provides a probabilistic evaluation integrating data, analysis, experience, and professional judgment when statistical data are not readily available).


Consider adding a Department of Homeland Security module to the Bulletin to address security risk assessments and the implications of transparency and communication, given the sensitive nature of disseminating security risk information.


Specific Comments:


Page 10, Section III: Goals, 3. Goals Related to Effort Expended. Add: The level of effort to be expended should also consider the likelihood that an additional increment of data/analysis would alter the conclusion or decision to be made.

Page 12, Section IV: General Risk Assessment and Reporting Standards, 2. Standards Relating to Scope, third paragraph, modify first sentence as shown in italics: The third step in framing the scope of the risk assessment entails identifying the affected entities, the population to which the hazard applies, and those impacted economically by decision making.


Fish and Wildlife Service (FWS)


General Comments:


The Service is concerned that the Bulletin appears to favor “central tendencies,” or expected outcomes, as the best approach or the best science. It is the view of the Service that the best science is that which is objective, explicit, and complete, and that the ends or parts of a distribution on which we focus are guided by policy and social values. However, we do agree with the Bulletin that risk assessment cannot be designed independently of context; thus, other norms besides “central tendencies” or middle/most-likely risk values could be just as important. As a result, the Service believes there is no absolute standard for treating uncertainty. We agree that there is a need to avoid being selective when gathering or processing information, but it is equally important to report results in the same context.



The Bulletin contains exemptions for single-product toxics labeling, and the Service is concerned that this might lead to human health and environmental risks that could have been foreseen had the exemption not been in place.


There are a number of places in the Bulletin where terms or phrases are not clearly defined. For example, in Section II, 2b, ii, the phrase “scientifically or technically novel or likely to have precedent-setting influence on future adjudications and/or permit proceedings” could use more explanation.


Specific Comments:


Page 1, Summary – OMB and the Office of Science and Technology Policy (OSTP) describe this Bulletin as “technical guidance,” yet some sections, including the critical implementation section at the end (the formally titled “Risk Assessment Bulletin” section), create the impression that these are requirements.


Page 3 – The statement is made that “Federal agencies should implement the technical guidance provided in this Bulletin, recognizing that the purposes and types of risk assessments vary.” (Italics added. See comment on page 1 above, and pages 23-24, below). Similarly, the statement “The technical guidance provided here addresses the development of the underlying documents that may help inform risk management and communication, but the scope of this document does not encompass how federal agencies should manage or communicate risk” highlights the fact that the Bulletin is advisory and does not constitute a set of absolute requirements. The Service recommends that the Bulletin be edited to clearly indicate that it is guidance to Federal agencies.


Page 4, paragraph 2 – The use of a “screening level assessment” to determine that a potential risk does not exist, and therefore is not of concern, appears to allow agencies to forgo a full assessment of potential risks and will likely result in incomplete or erroneous conclusions about the actual risks to human health or the environment.


Page 5, paragraph 4 – The comments in this paragraph of the Bulletin regarding the “high doses used in experiments” versus “the [presumed] low doses typically found in the environment” suggest that the Bulletin may assume that toxic elements and compounds are always present at low levels in the environment. A single example (e.g., mercury and its prevalence in the human environment and accumulation in a variety of seafood “doses”) shows that such an assumption is potentially incorrect.


Page 8, Title – The title of the section “The Requirements of This Bulletin” is inconsistent with other sections indicating that the Bulletin is technical guidance.


Page 9, Section II: Applicability. The paragraph states that “a rule of reason should prevail in the appropriate application of the standards in this Bulletin.” The example given is a screening-level risk assessment, which would be exempt from the standard of “neither minimizing nor exaggerating the nature and magnitude of risk.” The paragraph goes on to say that quantitative risk assessments should provide a range of risk estimates.


Many screening-level risk assessments are quantitative, in that numbers for both exposure and thresholds of harm are compared, typically by taking ratios. The Service believes that calculating a range of risk estimates defeats the purpose of a screening-level assessment. This apparent contradiction should be resolved.
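The ratio comparison the Service describes, an exposure estimate divided by a threshold of harm, is the core of a screening-level quantitative assessment. A minimal sketch follows; the function name and all numeric values are hypothetical illustrations, not drawn from any actual assessment:

```python
# Screening-level risk check: compare an exposure estimate to a
# threshold of harm by taking their ratio (a "hazard quotient").
# All values are hypothetical, for illustration only.

def hazard_quotient(exposure, threshold):
    """Ratio of estimated exposure to the effect threshold."""
    return exposure / threshold

# Example: estimated daily dose vs. a no-effect benchmark (mg/kg-day)
hq = hazard_quotient(exposure=0.02, threshold=0.1)
print(f"HQ = {hq:.2f}")
# A quotient at or above 1 would flag the need for further assessment;
# below 1, the screening result alone ends the inquiry.
if hq < 1:
    print("Below screening threshold: no further assessment triggered")
```

Note that the single ratio is already quantitative, which is the Service's point: requiring a full range of risk estimates on top of it would defeat the screening purpose.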


Page 11, Section IV: General Risk Assessment and Reporting Standards. The section begins by stating that risk assessments must “meet the three key attributes of utility, objectivity, and integrity” in IQA guidelines. Objectivity is defined in IQA guidelines in terms of accuracy, clarity, completeness, and lack of bias. While all of these attributes are desirable and positive, lack of bias raises special considerations in risk assessments. Bias may be relatively easy to overcome in the scientific exercise of measurement, data analysis, and presentation. Risk assessment, however, is what is done after the raw data are condensed; it is essentially speculation. Bias is difficult to control in this situation, requiring those who will use the risk assessment to agree beforehand on how it will be done.


Page 14, “Standards Related to Objectivity”, paragraph 2. The statements referring to “…the best available, peer reviewed science and supporting studies conducted in accordance with sound and objective scientific practices” modify the definition and concept of “best available scientific and commercial information” under the Endangered Species Act. Is that what the Bulletin intended to do?


Page 14, section IV. 4. This section implies that risk is directly measured as part of a risk assessment, making the application of objectivity standards straightforward. This is clear in the first paragraph: “When determining whether a potential hazard exists, weight should be given to both positive and negative studies.” In the second paragraph, there are references to peer-reviewed science and data collected by accepted or best available methods. In reality, risk is seldom directly measured and objectivity standards are difficult to apply to the risk characterization. The objectivity standards may be applied to the data that are used as input to risk assessments. However, the risk assessments themselves typically require assumptions about input data, modeling of these data, or comparisons among input data to estimate risk. These activities are arbitrary or speculative in nature rather than strictly scientific. This distinction needs to be made in the guidelines. Otherwise, most risk assessments may be held to objectivity standards that can only be reasonably applied to input data.


Page 16, item 5, re: central estimates and expected risks – The Service believes that reliance on a central estimate alone in determining expected risks is problematic, even in “non-influential” (per the Bulletin’s definition) risk assessments, because short-term, high-range values can have devastating effects even when an average does not. We recommend that this section of the guidance receive further, scientifically focused attention to ensure that it provides a more accurate discussion of “expected risk.”


Page 16, section 7 1). The use of the term “baseline risk” is confusing and should be clarified. Typically, baseline is used for conditions as they currently exist. The reason “anticipated countermeasures” should be understood to “capture the baseline risk” needs to be explained.


Page 18, last paragraph – The statement that “When model uncertainty is substantial, the central or expected estimate may be a weighted average of the results from alternative models” appears to be sound guidance as long as agencies have latitude to take other approaches if circumstances demand.
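The weighted-average treatment of model uncertainty quoted above can be illustrated with a minimal sketch; the risk estimates and the weights assigned to each alternative model are hypothetical:

```python
# When model uncertainty is substantial, the Bulletin suggests the
# central estimate may be a weighted average of results from
# alternative models. All numbers below are hypothetical.

def weighted_central_estimate(estimates, weights):
    """Weight each model's risk estimate by its judged plausibility."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

model_estimates = [1.2e-4, 3.5e-4, 0.8e-4]   # risk per exposed individual
model_weights   = [0.5, 0.3, 0.2]            # judged plausibility of each model
print(weighted_central_estimate(model_estimates, model_weights))
```

The latitude the comment asks for would amount to letting an agency set the weights (or abandon averaging entirely) when circumstances make one model clearly more credible.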


Page 23. The actual bulletin begins here, while the bulk of the preceding material was “supplementary information.” In Section II, Applicability, the “rule of reason” described in the supplementary information is missing. Although the phrase “to the extent appropriate” precedes the phrase “all agency risk assessments…shall comply…,” it is overshadowed by all the requirements listed in the remaining pages of the bulletin. While requirements for transparency and similar attributes have obvious utility, requirements for procedures like population risk estimates and the use of probability distributions do not. Because some agencies have a long history of performing risk assessments with stakeholders, OMB should consult with them on the likely consequences of an overly prescriptive approach to risk assessment. For example, the need to reach consensus on the conduct of the assessment among risk assessors, risk managers, and other stakeholders may need to take precedence over some procedures required by the guidelines. Otherwise, arriving at meaningful decisions may be much more difficult. Such advice should at least be as prominent in the bulletin as the prescriptive measures.


Minerals Management Service (MMS)


For the purposes of this Bulletin, the term “risk assessment” refers to a document that assembles and synthesizes scientific information to determine whether a potential hazard exists and/or the extent of possible risk to human health, safety or the environment. The purpose of this Bulletin is to enhance the technical quality and objectivity of risk assessments prepared by federal agencies by establishing uniform, minimum standards.


MMS’ two largest programs are Minerals Revenue Management (MRM) and Offshore Mineral Management (OMM). MMS’ mission includes 1) the protection of lives, resources and property while managing offshore mineral resources and 2) ensuring industry compliance with revenue mandates, receipt of fair market value on the oil and gas produced, and the timely disbursement of revenues to localities, tribes and the Treasury. The “royalty side” of the MMS operations has more difficulty incorporating the details from the Bulletin, and therefore looks to additional sources for guidance in this area.


MRM does face various risks, and attempts to prepare risk assessments. A working definition of risk is “future events or conditions that may or may not occur that will positively or negatively affect agency objectives.” A risk assessment is the identification of these future events or conditions along with an estimate of their impacts and the likelihood of occurrence. MMS appreciates the need for an integrated approach such as this, and continues to incorporate best practices across government and industry. However, since the Bulletin focuses on health, safety, and the environment, its impact on risk assessment practices is minimal (or none) for MRM, and unclear for OMM. For OMM, the procedures in the Bulletin may affect the time course for risk assessments conducted internally, but should not affect those risk assessments that are contracted out, since the procedures described in the Bulletin could be included in the contract.


Note: The administrative areas within MMS incorporate the risk-based principles of OMB Circular A-123 in conducting internal control assessments.


Agency Responses to the NRC Questions


General questions about current risk assessment practices

  • Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency's risk assessments?

BOR: Reclamation conducts probabilistic risk assessments to evaluate dam safety issues at approximately 250 Reclamation-owned dams. These risk assessments are used for prioritization of workload, evaluation of the need for risk reduction measures, selection of preferred alternatives, and verification that completed risk reduction measures were effective in reducing risk. Reclamation uses event tree-based models to evaluate risk, and integrates uncertainty and variability in the analyses by having technical staff estimate probability distributions for each event in the tree. Distributions of risk are then computed through Monte Carlo simulation of the event trees.
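The event-tree Monte Carlo procedure Reclamation describes can be sketched as follows. Every distribution, event, and parameter range below is a hypothetical illustration, not an actual Reclamation elicitation:

```python
# Sketch of event-tree risk analysis with Monte Carlo simulation:
# each event's probability is drawn from a distribution elicited from
# technical staff, and a distribution of risk is built by repeated
# sampling. All events and parameter ranges are hypothetical.
import random

def simulate_event_tree(n_trials=100_000, seed=42):
    rng = random.Random(seed)
    risks = []
    for _ in range(n_trials):
        # Elicited ranges represented as triangular distributions
        p_flood  = rng.triangular(0.001, 0.01, 0.004)  # loading event
        p_breach = rng.triangular(0.05, 0.30, 0.10)    # breach given flood
        risks.append(p_flood * p_breach)               # one failure path
    risks.sort()
    return {"mean": sum(risks) / n_trials,
            "p90": risks[int(0.9 * n_trials)]}

print(simulate_event_tree())
```

A real model would chain many more nodes (detection, intervention, consequences) and combine multiple failure paths, but the structure, sampling node probabilities and propagating them through the tree, is the same.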


FWS: Risk assessment practices are used by several different Service programs. The Fire Coordination staff use risk assessment practices to make predictions about equipment and staffing needs for both wildland fire suppression and controlled burns. Safety and Health staff use risk assessment practices to evaluate certain work activities to determine the potential risks these activities pose to human health. The Aquatic Nuisance Species (ANS) program uses risk assessment practices to evaluate the potential for ANS to invade a specific water body. The Endangered Species Program is engaged in assessing risks to Federally listed threatened and endangered species. These analyses are more along the lines of classical strategic decision-making as opposed to classical risk assessment analyses. Classical risk assessment analyses are principally conducted by the Environmental Contaminants (EC) and Natural Resource Damage Assessment and Restoration (NRDAR) Programs. These programs frequently rely on agencies like the Environmental Protection Agency (EPA) and the Department of Defense to conduct a significant portion of the assessments needed.


Many of the risk assessments performed by the Service are qualitative or deterministic. Probabilistic risk assessments are sometimes a component of risk assessment as related to the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) and as guided by the EPA. In general, the Service does not conduct probabilistic risk assessments very frequently.



Generally, for contaminant-related issues, the Service uses EPA-recommended approaches.


At this point, the Service cannot provide a complete listing of all approaches used in each program or region; however, the two excerpts provided below offer some insight into approaches used by Service personnel:


[excerpt from the regional response of one Service region reflecting the approach taken by staff from the Contaminants Program]


Our uncertainty/variability analyses summarize the assumptions made for each element of the assessment, evaluate the validity of those assumptions and the strengths and weaknesses of the analyses, and attempt to quantify – to the greatest practicable extent – the uncertainties associated with each risk we identify. In our deterministic environmental risk assessments, we discuss uncertainty related to the selection and quantification of constituents of potential ecological concern (CPECs), receptor selection, exposure estimation, effects estimation, and risk characterization. We also identify and thoroughly discuss in our uncertainty analyses significant data gaps that may have hindered or prevented the full determination of potential risk.


[excerpt from the regional response of one Service region reflecting the approach taken by staff from the Contaminants Program]


Uncertainty is addressed primarily through the use of standard uncertainty factors; variability is addressed through the use of ranges and measures of central tendency.


MMS: MMS, in some cases, relies on the regulatory review process and participation of the scientific community to identify the need for risk assessments. Reviews and analyses of offshore operational data are also done in-house to support reviews of Outer Continental Shelf operations. MMS also conducts probabilistic risk assessment for meeting or failing to meet the fair market value requirement for Royalty-in-Kind.

  • Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.

BOR: The most significant challenge for Reclamation is the treatment of low-probability, but high-consequence events. Ensuring the safety of dams through probabilistic risk assessment requires assurance that the dams will safely perform their intended purpose even under extreme events not likely to have been experienced in recent history. While available data are insufficient for a statistically-based estimate of event probabilities, Reclamation has conducted significant investigations to develop tools for inferring estimates of hydrologic and seismic event probabilities through a variety of scientific and engineering processes.



FWS: One of the most substantial scientific/technical challenges the Service faces is related to the complexity of the systems that are the focus of risk assessments and the limited data that are available to evaluate risk. When conducting risk assessments, the Service is usually interested in evaluating risk to a variety of species as a result of exposure to one or more chemicals or trace elements. In most cases, toxicity data for the species of interest do not exist, and this challenge is compounded when data are lacking for the most sensitive life stage. Similarly, the Service is challenged by the extrapolation of laboratory studies to wild populations and the extrapolation of studies of one species to another. In general, a lack of data on wild populations and species in decline poses difficulties in risk assessment.


MMS: MMS may contract out a risk assessment on new and/or emerging technologies. Another challenge is to anticipate future events, such as hurricanes and their severity, and evaluate their likelihood.

  • What is your current definition of risk assessment, and what types of products are covered by that definition?

BOR: For Reclamation, risk assessment activities include identification of potential risks, data collection and information analysis for computing risks, assembling a team of technical experts to develop risk models and report estimated risks, and decision-making regarding Reclamation actions to be taken to address the risk. Reclamation work products include risk analysis reports, decision documents, and workload priorities.


FWS: A process to provide a qualitative and/or quantitative appraisal of the likelihood that adverse effects are occurring or may occur in plants and animals (other than humans) as a result of exposure to one or more stressors, which are defined as any physical, chemical, or biological entity that can induce an adverse ecological response. Stressors can be biological (e.g. invasive exotic species) and physical (e.g. mechanical destruction of habitat), as well as chemical (i.e. hazardous substances).


MMS: A risk assessment is the identification of future events or conditions along with an estimate of their impacts and the likelihood of occurrence in program operations. Risk management is the creation and implementation of strategies to minimize the impacts or likelihood.

  • About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?

BOR: Assuming the preliminary technical analysis has been performed, the time to prepare a risk analysis report varies from a couple of weeks to several months. The shorter time frame is associated with screening-type studies performed to determine the value of conducting further detailed studies to estimate risk. The longer time frame is associated with assembling technical experts in a team to conduct risk analysis and compile recommendations for decision makers regarding the need for and selection of actions to reduce risk.


FWS: Risk assessments are intended to be time-efficient, cost-effective analyses that facilitate defensible appraisals of the significant effects of stressors on natural resources at spatial and temporal scales relevant to the Federal statutes, regulations, and policies that the Service is charged to uphold. The scale, complexity, protocols, data needs, and investigational methods used in a given assessment are determined by circumstances at or associated with the site being investigated, in conjunction with fiscal and staffing constraints. Thus, the timelines vary considerably.


MMS: Varies; contracted risk assessments may take up to 2 years from problem identification to delivery.


Time to produce internal risk assessments depends on the complexity of the subject.


Questions about OMB’s definition of risk assessment and applicability

  • Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?

BOR: Based on the OMB Bulletin, there are no work products in the Dam Safety Program that would now be considered a risk assessment that were not previously considered risk assessments.


FWS: OMB’s definition appears to more broadly define risk assessment, or risk assessment-like processes, than what the Service has historically called risk assessment. For example, some of the processes and procedures within the Endangered Species Program most likely will fall under OMB’s definitions.


MMS: No, regarding human health, safety, and environment issues as addressed in the Bulletin.


Questions about type of risk assessment (tiered structure)

  • In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?

BOR: Reclamation is not a regulatory agency; therefore, none of its risk assessments have a regulatory purpose. However, Reclamation clearly understands the importance of defining the purpose and scope of a risk assessment prior to initiating work on the risk model. This is addressed in Reclamation’s methodology for dam safety risk assessment.


FWS: While some work may be more directly linked to policy, most of the ecological risk assessments the Service undertakes—including most other risk assessments or risk assessment-type work (following the definition of risk assessment in the policy bulletin)—have a link to statutes and/or regulations.


MMS: It varies. In general, MMS does not make a distinction. However, risk assessments and analyses that are initiated will likely relate in one manner or another to our regulatory authority granted in the Outer Continental Shelf Lands Act.

  • In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?

BOR: Since Reclamation is not a regulatory agency, and the risk assessments conducted by Reclamation are used for decisions at specific facilities, there is no need for the Bureau to distinguish between “influential” and other types of risk assessment.


FWS: The Service believes the necessary guidance from the OMB Peer Review Bulletin and the Service’s own Information Quality Act guidelines provide the necessary framework to make the demarcation clear.


MMS: No.


Questions about impact of the Bulletin on agency risk assessment practices

  • If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.

BOR: Reclamation believes that government-wide guidance regarding risk assessment offers the opportunity for greater consistency among a variety of technical applications where risk assessment can assist in decision making, provided that some of the key challenges facing the technical staff implementing these methods are addressed.


FWS: The Service believes that the presence of a risk assessment bulletin will stimulate discussion within the agency at all levels, and will increase awareness of appropriate guiding principles when risk assessments are produced.


MMS: Since the Bulletin deals with health, safety, and the environment, it would not affect how certain risk assessments are done. It is unclear if the provisions in the Bulletin would have a substantial positive or negative effect on future risk assessments. A Risk Management instruction guide will help answer this question.

  • If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.

BOR: Given Reclamation’s current commitment to probabilistic risk analysis for Dam Safety decision making, Reclamation sees no negative impacts in that area. It is less clear whether or not there would be an expectation to extend the application of probabilistic risk assessment to other areas of Reclamation’s programs.


FWS: If it is the intent of the Bulletin to drive more risk assessment work to be probabilistic, the Service anticipates that it would negatively affect the Service as a result of anticipated increased costs and time required to complete probabilistic risk assessments. If influential risk assessments carry with them more requirements, then costs will rise, as more staff time is required to complete them.


MMS: Since the Bulletin deals with health, safety, and the environment, it would not affect how certain risk assessments are done. It is unclear if the provisions in the Bulletin would have a substantial positive or negative effect on future risk assessments. A Risk Management instruction guide will help answer this question.

  • If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why?

BOR: Reclamation foresees no substantial effects in the area of Dam Safety.


FWS: It might increase the time and cost depending on the application of the “influential risk assessment” concept, and if the Bulletin’s guidance is in reality a set of “requirements.”


MMS: In certain areas of MMS, it could affect the risk assessments being contracted out since the procedures described in the Bulletin could be included in the contracted assessments. In other MMS areas, the Bulletin’s procedures may extend the timeline for the risk assessments conducted by MMS staff.

  • One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.

BOR: Reclamation’s methodology specifically calls for teams conducting risk assessments to evaluate information both in support of and contrary to a given premise in estimating the likelihood of an event. This information can include scientific data, theoretical analysis, and engineering judgment. This methodology acknowledges that there may be multiple sources of data, and that some sources of data may provide conflicting interpretations of the likelihood of an event.


FWS: The Service currently does not have formal direction on such a technique. However, a useful approach might include the following: 1) identify applicable studies, primarily from peer-reviewed scientific journals; 2) evaluate the applicability and technical quality of each study in relation to the objectives of the risk assessment; 3) rank each study based on its quality and how well it supports the objectives of the assessment; 4) use the most highly ranked studies to develop values such as toxicity reference values, home ranges, assimilation efficiencies, etc.
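One hypothetical way the four-step weighting approach described by FWS might be carried out; all study names, scores, and toxicity reference values below are invented for illustration:

```python
# Sketch of study weighting: score each candidate study on quality and
# relevance, rank by combined score, and derive a toxicity reference
# value (TRV) from the top-ranked studies. All data are hypothetical.

studies = [
    {"name": "Study A", "quality": 0.9, "relevance": 0.8, "trv": 1.5},
    {"name": "Study B", "quality": 0.6, "relevance": 0.9, "trv": 2.0},
    {"name": "Study C", "quality": 0.4, "relevance": 0.5, "trv": 3.1},
]

# Steps 2-3: combine the evaluations into one score, then rank
for s in studies:
    s["score"] = s["quality"] * s["relevance"]
ranked = sorted(studies, key=lambda s: s["score"], reverse=True)

# Step 4: derive a value from the most highly ranked studies
top = ranked[:2]
trv = sum(s["trv"] for s in top) / len(top)
print([s["name"] for s in ranked], trv)
```

In practice the scoring rubric, the number of studies retained, and the aggregation rule would all be documented choices made by the assessors, which is exactly where the "weight to positive and negative studies" standard would bite.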


MMS: MMS cannot provide examples at this time.

  • Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?

BOR: Reclamation only uses risk assessments conducted by external groups to the extent that they are contracted to expand program accomplishment. Reclamation requires these risk assessments to meet the same standards as risk assessments performed internally by Reclamation staff. Therefore the benefits of an OMB risk assessment bulletin would be only those addressed by previous questions.


FWS: The Service, at times, uses risk assessments developed by external groups but not without evaluating them carefully first. Improving risk assessments is a laudable goal, whether for Federal agencies or the private sector. The Service suggests that consultants and industry be urged to follow the same guidelines as those provided by OMB.


MMS: MMS contracts for specific risk assessments. It cannot be determined at this time whether this Bulletin would be helpful.


U.S. DEPARTMENT OF LABOR

Responses to Questions from the National Research Council’s Committee to Review the Proposed OMB Risk Assessment Bulletin

The Department of Labor (DOL) appreciates the opportunity to respond to questions from the National Research Council’s Committee to review the proposed OMB Risk Assessment Bulletin. Within DOL, analyses of safety and health risks are performed by both the Occupational Safety and Health Administration (OSHA) and the Mine Safety and Health Administration (MSHA). Both agencies use similar approaches and must meet similar statutory and other legal obligations in conducting such assessments. For clarity, our responses to these questions primarily reference OSHA but generally apply in an analogous manner to MSHA.


General questions about current risk assessment practices

  • Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency's risk assessments?

Risk assessments are generally performed in connection with promulgating safety and health rules; as such, risk analyses disseminated by these agencies are subject to statutory requirements governing regulatory decision making as well as the public rulemaking process, during which the risk analyses undergo rigorous scientific and technical review by scientific experts and the interested public. OSHA’s analyses of workplace risks are disseminated to the public as a component of Federal Register notices of proposed and final rules.


In promulgating safety and health standards, OSHA uses the best available information to evaluate the risk associated with exposures to workplace hazards, to determine whether this risk is severe enough to warrant regulatory action, and to determine whether a new or revised rule will substantially reduce this risk. OSHA makes these findings, referred to as the "significant risk determination", based on the requirements of the Occupational Safety and Health Act, the Supreme Court's interpretation of the Act in the "benzene" decision of 1980 (Industrial Union Department, AFL-CIO v. American Petroleum Institute, 448 U.S. 607), and other court decisions. To make its determinations of the significance of the risk, OSHA relies on analyses of scientific and statistical information and data that describe the nature of the hazard associated with employee exposures in the workplace, and derive estimates of lifetime risk assuming that employees are exposed to the hazard over their working life (usually taken to be 45 years). This corresponds to the first two components of the risk assessment paradigm described by the National Research Council (NRC) in 1983, i.e., hazard identification and exposure-response analysis.
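As a rough illustration of the lifetime-risk framing OSHA describes, the sketch below computes excess risk from exposure at a given level over a 45-year working life under a simple linear exposure-response model. The slope and exposure values are hypothetical and are not drawn from any OSHA assessment:

```python
# Illustrative lifetime-risk calculation: excess risk is assumed
# linear in cumulative exposure accrued over a 45-year working life.
# The slope and concentration values are hypothetical.

WORKING_LIFE_YEARS = 45

def lifetime_excess_risk(concentration, slope_per_unit_year):
    """Excess lifetime risk per worker, linear in cumulative exposure."""
    cumulative_exposure = concentration * WORKING_LIFE_YEARS
    return slope_per_unit_year * cumulative_exposure

# Risk per 1,000 workers exposed at 0.1 mg/m^3 (hypothetical slope)
risk = lifetime_excess_risk(concentration=0.1, slope_per_unit_year=2e-4)
print(f"{risk * 1000:.1f} excess cases per 1,000 workers")
```

A real OSHA assessment would fit the exposure-response relationship to epidemiological or animal data rather than assume linearity, but the significant-risk determination rests on exactly this kind of per-working-lifetime estimate.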


OSHA generally relies on the risk approaches, practices, policies, and assumptions used by the agency in previous regulatory actions unless there is a convincing scientific rationale to adopt an alternative. Unlike EPA, OSHA does not have formal risk assessment guidelines.


For health risks, OSHA most often has relied on epidemiological data, but will estimate risk from animal data where adequate human data are not available or where it is useful to compare risk estimates derived from both human and animal data. OSHA generally uses widely accepted approaches to estimate risk in the range of exposures of interest to the agency (e.g., at the current exposure limit and at exposure levels being considered to set new or revised limits). Because risk assessment is used to support findings of the significance of risk, OSHA finds it most useful to quantitatively estimate risk by extrapolating from the observed range of exposure to the range of interest; as such, approaches such as the use of uncertainty factors or EPA’s margin-of-exposure approach are less useful for OSHA’s regulatory purposes.


OSHA typically addresses model uncertainty by comparing results from alternative models that are compatible with scientific evidence on mode of action. OSHA usually conducts sensitivity analysis where there are adequate data to support it and where it is useful to facilitate regulatory decision making. Most often, OSHA bases its regulatory decisions on a range of central estimates of risk derived from the best supported models. The key assumptions and uncertainties in the assessment are identified and their impact discussed. OSHA has not generally derived quantitative uncertainty distributions for its risk estimates.


OSHA addresses risks to vulnerable and/or susceptible employee populations in its quantitative risk assessments when there is scientific evidence to support potential differences in risk. Quantitative variability in risk is characterized when the appropriate data and models are available; for example, OSHA accounted for biological variability in using a physiologically-based pharmacokinetic model to estimate cancer risk in its methylene chloride rulemaking. Variability in employee exposures is usually addressed as part of the OSHA feasibility analysis and not in the assessment of risk.


Analyses of safety risks conducted by OSHA to support safety standards are quite different from health risk analyses in terms of the kinds of data and information available to the Agency. The goal of a safety risk analysis is to describe the numbers, rates and causal nature of injuries related to the safety risks being addressed. OSHA has historically relied on injury and illness statistics from the Bureau of Labor Statistics (BLS), combined with incident or accident reports from OSHA’s enforcement activities, incident or accident reports submitted to the record from the private or public sectors, testimony of experts who have experience dealing with the safety risks being addressed, and information and data supplied by organizations that develop consensus safety standards, such as the American National Standards Institute or ASTM International (formerly known as the American Society for Testing and Materials).


Part of what can be considered the risk analysis also appears in OSHA’s Economic Analysis for proposed and final rules. The Economic Analysis includes an analysis of employee exposures to the hazard of interest, estimates of the sizes of the exposed employee populations in affected industry sectors, and an analysis of the numbers of exposure-related illnesses that occur in those populations and the numbers of illnesses potentially avoided by the new standard. Thus, the remaining two components of the NRC risk assessment paradigm, exposure assessment and risk characterization, are conducted by OSHA to fulfill Executive Order requirements to evaluate the benefits of regulation. Information and data typically relied upon by the Agency to conduct these analyses include exposure data generated by OSHA’s enforcement activity, exposure data submitted to the record by industry or labor organizations, industry studies conducted by the National Institute for Occupational Safety and Health (NIOSH), and data obtained by OSHA or its contractors during the conduct of site visits to industrial facilities. In addition, OSHA has usually relied on statistics published by the BLS or the U.S. Census to develop estimates of the size of the population at risk. OSHA does not typically conduct probabilistic uncertainty analysis as part of its exposure assessment and risk characterization, but does conduct sensitivity analysis to describe the effect of uncertainties in estimates of exposure or population-at-risk on benefits estimates.
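The kind of sensitivity analysis described above can be sketched in a few lines: vary one uncertain input (here, the size of the population at risk) while holding the others fixed, and observe the effect on the benefits estimate. This is only an illustrative sketch; the linear benefits model, the function name, and all numerical values below are hypothetical and are not drawn from any OSHA analysis.

```python
def cases_avoided(population_at_risk: float, risk_avoided_per_worker: float) -> float:
    # Hypothetical linear benefits model: cases avoided equals the exposed
    # population times the assumed lifetime risk reduction per worker.
    return population_at_risk * risk_avoided_per_worker

baseline_population = 100_000   # assumed size of the population at risk
risk_avoided = 1.0 / 1_000      # assumed lifetime risk reduction per worker

# One-at-a-time sensitivity: scale the population estimate up and down
# while holding the per-worker risk reduction fixed.
for factor in (0.5, 1.0, 1.5):
    estimate = cases_avoided(baseline_population * factor, risk_avoided)
    print(f"population x {factor}: {estimate:.0f} cases avoided")
```

With these illustrative numbers the benefits estimate scales linearly with the population estimate, from 50 to 150 cases avoided, making the influence of the population-at-risk uncertainty on the benefits estimate immediately visible.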

  • Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.

One of the biggest challenges in chemical risk assessment at OSHA is determining quantitative risk estimates for non-cancer endpoints, especially when doing so involves extrapolation from experimental animals to humans and extrapolation outside the observable range. As mentioned above, the statutory findings OSHA must make to support regulatory action are not readily compatible with the uncertainty-factor and margin-of-exposure approaches favored by EPA and other agencies to evaluate non-cancer risks.


Analyses of safety risks present unique challenges to OSHA. While OSHA can sometimes be quite confident of the number of injuries or fatalities caused by a hazard (due to the availability of fatality and injury statistics from the Bureau of Labor Statistics), there is sometimes uncertainty about the population-at-risk and extent of employee exposure to safety hazards. In large part, this is due to the difficulty of ascertaining certain exposure metrics for safety hazards, unlike the situation that exists for chemical exposures. In addition, it is difficult to quantify precisely the effect of preventive measures on safety risks since quantitative exposure-response relationships are difficult to construct. Instead, the effects of preventive measures on risk are often a matter of expert judgment and practical experience in implementing safety programs.


BLS routinely groups the mining industry with the oil and gas sectors when reporting national statistics, because the mining industry is small in size compared to other industries. Therefore, sufficient data are not readily available to MSHA to make meaningful statistical inferences.


For both safety and health regulatory projects, OSHA faces a challenge in estimating the effect on risk of certain mitigation measures such as employee training, competency certification, exposure assessment, and certain procedural requirements. Quantitative data on the beneficial effects of such practices are generally lacking, and OSHA typically describes their effects in qualitative terms.

  • What is your current definition of risk assessment, and what types of products are covered by that definition?

As described in response to the first question above, OSHA typically considers risk assessment to mean hazard identification and estimation of lifetime risk associated with exposure to a hazard over a working lifetime. OSHA has not, in the past, treated exposure assessment or risk characterization as part of its “risk assessment,” although OSHA does conduct these analyses as part of its estimation of the benefits of regulatory alternatives.

  • About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?

This obviously depends on the scope and complexity of the assessment, the availability of existing analyses, and the priority given the regulatory project within the agency. For its most recently completed risk assessment, OSHA required about 2.5 years to develop an assessment of health risks associated with exposure to hexavalent chromium. This included work to produce and review the risk analyses, evaluate key studies, develop and review the written health effects and dose-response documents, and conduct and respond to an outside peer review of the dose-response analysis before the assessment was ready to pass on to decision makers.


Analyses of safety risks generally take less time, ranging from weeks for an assessment of risks that are already characterized by BLS to a year or more for risks that have not been so classified.


Questions about OMB’s definition of risk assessment and applicability

  • Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?

As mentioned above, the exposure assessment and risk characterization analyses conducted by OSHA as part of the agency’s benefits assessment have not been considered to be part of what OSHA disseminates as a risk assessment.


It is possible that some non-regulatory informational products developed by OSHA can fall into the Bulletin’s definition of risk assessment where the information contains hazard statements or hazard information. For example, OSHA recently published guidance documents to assist employers and employees in reducing exposures to perchloroethylene in dry cleaning establishments and glutaraldehyde in health care facilities. Both of these documents contain information on potential adverse health consequences of exposure (cancer in the case of perchloroethylene and asthma for glutaraldehyde) as well as exposure control recommendations believed by OSHA to be effective in reducing these risks. Such documents might be regarded under the Bulletin as “a synthesis of scientific information to determine whether a potential hazard exists.”


Questions about type of risk assessment (tiered structure)

  • In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?

As described above, to date OSHA has conducted what the agency regards as risk assessments only as part of regulatory analyses.

  • In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?

The Department of Labor Information Quality Guidelines define “influential” information as that having a “clear and substantial impact on important public policies and private-sector decision making.” Generally, this is interpreted to mean information having an annual impact of $100 million or more. In most cases, it will be evident to OSHA at the outset of a risk assessment whether the assessment is likely to be influential under this definition. However, since the impact of a regulation depends on the scope of the regulation as well as the nature of its individual provisions, it is not always clear at the outset that the regulation can reasonably be expected to have an impact of $100 million annually. The actual impact of a regulation is usually determined well after a risk assessment has been initiated, since the results of the assessment are in part necessary to make the regulatory decisions that can affect the size of the impacts.


Questions about impact of the Bulletin on agency risk assessment practices

  • If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.

Most of the Bulletin’s provisions should have a positive effect on the transparency, objectivity, and technical completeness of agency risk assessments. OSHA believes that its risk assessments have typically complied with these quality standards and, since the agency’s risk assessments are conducted as part of notice and comment rulemaking, they have always achieved a high degree of transparency and have typically been subject to rigorous scientific scrutiny and debate.

  • If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.

The Bulletin’s provisions for deriving quantitative distributions of model uncertainty and variability, wherever feasible, could add significant time to some risk assessments where such analyses are not critical to fully inform regulatory decision makers. In particular, such analyses have not been necessary to adequately characterize safety risks. These provisions may also require conducting formal uncertainty analysis of OSHA’s exposure assessments and benefits analyses performed as part of the agency’s economic impact assessments. As explained above, these assessments have not been generally regarded by OSHA as part of its risk assessment.


One possible negative impact relates to provisions IV.3 and IV.5, which specify that, wherever possible, risk estimates based on all plausible assumptions and models be quantitatively evaluated. OSHA believes the wording of the requirement could provide credibility to some risk analyses that may not be supported by scientific evidence and, thus, could undermine the technical rigor of the assessment. OSHA would prefer that the Bulletin make clear that quantitative evaluation of risk be based on those assumptions and models that are clearly consistent with supporting scientific evidence, for example regarding a chemical agent’s mode of action.


Provision IV.6 of the proposed Bulletin would require that an executive summary of the risk assessment include information that would place the risk estimates in context with other risks that might be familiar to the target audience. OSHA does not generally engage in such comparative risk analyses for decision making purposes since OSHA’s regulatory decisions must be based on consideration of the significance of risk and the extent to which those risks would be reduced by the regulatory action (as well as other factors such as technological and economic feasibility). OSHA has had a long history of considering risks to be clearly significant if employees are exposed to a lifetime risk of 1 death or case of serious harm per 1,000 employees; thus, evaluating the significance of the risk does not involve making comparisons of that risk to other risks.


Provision IV.7 of the proposed Bulletin would require that agencies provide information on the onset of adverse effects and on the timing of corrective measures and associated reduction in risk. For chronic health effects, information that describes the relationships between reduction in exposure and reduction in risk (for example, cessation lag models) is not generally available. OSHA’s benefits analyses clearly identify assumptions made by the Agency to describe how benefits are believed to accrue following regulatory action, and such assumptions are usually based on what is known about the latency of the disease(s) of interest. For example, for chemically related lung cancer, OSHA has often assumed a latency of 20 years from first exposure for purposes of describing how benefits can be expected to accrue after exposures are reduced in response to a new regulation. However, until specific information on the actual relationships between cessation of exposure and reduction in chronic disease risk becomes available, OSHA does not anticipate that it will be possible to construct alternative assumptions for evaluating the benefits.

  • If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why?

OSHA believes that the Bulletin’s provisions to develop quantitative distributions of model uncertainty and variability, wherever feasible, could add significant time to some risk assessments without necessarily increasing the utility of the risk assessment for Agency decision makers.

  • One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.

OSHA usually considers both positive and negative studies in its risk assessments and makes hazard determinations based on an evaluation of the quality of each study included. The agency often seeks to reconcile positive and negative data, using additional scientific information if necessary. OSHA’s recently published evaluation of the scientific evidence for an increased cancer risk among employees exposed to hexavalent chromium is an example of how OSHA weighs positive and negative studies on technical merit under a weight-of-evidence scheme.

  • Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?

OSHA always considers risk assessments submitted by outside groups during its rulemakings, whether or not they meet the OMB Bulletin’s standards or DOL’s Information Quality Guidelines. Clearly, higher quality risk assessments that comply with these guidelines are more likely to be helpful to OSHA.


THE DEPARTMENT OF TRANSPORTATION’S RESPONSES TO QUESTIONS POSED BY THE NATIONAL RESEARCH COUNCIL’S COMMITTEE TO REVIEW THE OMB RISK ASSESSMENT BULLETIN


QUESTIONS FOR ALL AGENCIES POTENTIALLY
AFFECTED BY THE OMB BULLETIN

The Department of Transportation (DOT) is pleased to submit to the Office of Management and Budget (OMB) responses to the questions posed by the National Research Council’s Committee to Review the OMB Risk Assessment Bulletin that relate to the substance of OMB’s proposed draft Risk Assessment Bulletin (the Bulletin).


By way of background, the DOT is a diverse department that consists of ten operating administrations and the Office of the Secretary, each of which has statutory responsibility for a wide range of regulations. For example, the DOT regulates safety in the aviation, motor carrier, railroad, mass transit, motor vehicle, commercial space, and pipeline transportation areas. The DOT also regulates aviation consumer and economic issues and provides financial assistance and writes the necessary implementing rules for programs involving highways, airports, mass transit, the maritime industry, railroads, motor vehicle safety, and natural gas and hazardous liquid pipeline transportation. It writes regulations carrying out such disparate statutes as the Americans with Disabilities Act and the Uniform Time Act. Finally, the DOT has responsibility for developing policies that implement a wide range of regulations that govern internal programs such as acquisition and grants, safety statistics, access for the disabled, environmental protection, energy conservation, information technology, occupational safety and health, property asset management, seismic safety, and the use of aircraft and vehicles.


General Questions about Current Risk Assessment Practices


Question 1:


Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency's risk assessments?


Response:


The DOT does not provide written guidance as to how the DOT operating administrations should conduct risk assessments,1 so there is no common approach to risk assessments and uncertainty analyses within the DOT operating administrations. As a result, the operating administrations employ varied risk assessment practices that range from informed judgment to probabilistic risk assessments.


For example, one operating administration, the National Highway Traffic Safety Administration (NHTSA), researches the incidence, severity, and causes of injury relating to motor vehicle crashes when it assesses risk. NHTSA does not, however, typically conduct formal probabilistic risk assessments. Rather, it addresses risk by defining the target populations for specific safety-related countermeasures, maintaining several databases that measure the annual incidence of crashes as well as the characteristics of those crashes. These databases provide a sample-based annual estimate of all crash types and a complete annual census of fatal crashes. The databases are occasionally supplemented by special studies that address specific injury or safety problems and by research that employs biomechanical test devices and crash tests of motor vehicles. NHTSA synthesizes all of these data sources to estimate the target population that is at risk due to specific vehicular or behavioral characteristics.

1

OMB’s Bulletin defines the term “risk assessment” as “a document that assembles and synthesizes scientific information to determine whether a potential hazard exists and/or the extent of possible risk to human health, safety or the environment.” Bulletin at 2.


Another operating administration, the Pipeline and Hazardous Materials Safety Administration (PHMSA), conducts both probabilistic risk assessments and qualitative risk assessments to support its regulatory functions. PHMSA also employs risk assessments to allocate resources, measure performance, prioritize workload, develop strategies, and refine PHMSA’s overall mission. The scope and comprehensiveness of the risk assessments that PHMSA conducts vary with the nature and impact of the issues being addressed and the potential value of the risk assessments in its risk management decisions.


A third DOT operating administration, the Federal Aviation Administration (FAA), typically conducts risk assessments, including probabilistic risk analyses, for regulatory analysis, investment analysis, and procurement. The FAA conducts risk assessments to evaluate the effects of proposed industry-wide mitigations against broad categories of aviation accidents. The FAA maintains guidance documents that promote the use of risk assessments by providing information on topics such as: (i) risk and uncertainty, (ii) risk assessment of benefit-cost results, (iii) sensitivity analyses, (iv) Monte Carlo analyses,2 and (v) decision analyses.


Question 2:


Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.


Response:


To the extent that the DOT operating administrations conduct risk assessments, the challenges that they have typically encountered involve a lack of data relating to the nature of the risks at issue.

2

A Monte Carlo study acknowledges that raw data are often uncertain; instead of knowing exactly what the "cost" of something is, we may have a probability distribution of the cost. A Monte Carlo study combines the uncertainties and determines, given those uncertainties, the probability distribution for the outcome at issue (for example, in a cost-benefit analysis, the probability that the benefits will exceed the costs).
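The mechanics described in this footnote can be sketched in a few lines of Python: draw costs and benefits repeatedly from assumed probability distributions and count how often benefits exceed costs. The distributions, parameter values, and function name below are purely hypothetical, chosen only to illustrate the technique.

```python
import random

random.seed(0)  # fix the random draws so the illustration is reproducible

def prob_benefits_exceed_costs(n_trials: int = 100_000) -> float:
    """Estimate P(benefits > costs) by repeated sampling from
    assumed (hypothetical) input distributions."""
    favorable = 0
    for _ in range(n_trials):
        cost = random.lognormvariate(4.0, 0.5)  # skewed cost distribution
        benefit = random.gauss(70.0, 25.0)      # symmetric benefit distribution
        if benefit > cost:
            favorable += 1
    return favorable / n_trials

print(f"P(benefits exceed costs) is roughly {prob_benefits_exceed_costs():.2f}")
```

Rather than a single point estimate of net benefits, the output is a probability statement that carries the input uncertainties through to the conclusion, which is the essential point of the Monte Carlo approach.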


For example, NHTSA maintains databases that contain comprehensive information regarding the circumstances, causes, and impacts of motor vehicle crashes. However, NHTSA must frequently assess the impacts of countermeasures that target specific injury groups or crash circumstances that are not found in its databases. Additionally, at times, NHTSA must estimate impacts due to specific causal factors that are not well documented in police reports, which are the basis for most of the information in its databases, or in NHTSA’s investigative reports. If supplemental studies are unavailable to address these issues, NHTSA may have to rely on imperfect proxy measures to develop its risk assessments.


PHMSA similarly relies on databases and incident reporting systems to estimate the probability of accidents involving pipelines and hazardous materials. Although PHMSA utilizes commodity flow surveys3 jointly prepared by the Departments of Commerce and Transportation, its ability to achieve a high degree of confidence in its estimates is hampered by the limited availability of data.


In addition, it should be noted that agencies’ ability to collect data is restricted by the Paperwork Reduction Act.


Question 3:


What is your current definition of risk assessment, and what types of products are covered by that definition?


Response:


The DOT does not have a standard definition of risk assessment, and within the DOT, the definition varies based upon the operating administration that is defining the term. Similarly, the types of products that are included in the definition of risk assessment vary depending on which operating administration is conducting the risk assessment.


For example, the FAA defines risk assessment as an assessment, qualitative or quantitative, of the probability of some hazard occurring and the potential impact(s) or consequence(s) of that occurrence. According to the FAA, a risk assessment addresses the questions of what can happen, how likely it is that the event will occur, and what the consequences of the event will be. The FAA applies the risk assessment process to products including regulatory analyses and investment analyses of air traffic control services and airport infrastructure to support procurement decision making.


PHMSA defines risk assessment as a determination of risk context and acceptability, often relative to similar risks. PHMSA’s definition encompasses risk analysis, which PHMSA defines as the study of risk in order to understand and quantify risk so that it can be managed. PHMSA employs risk assessments as a tool to better understand the risks associated with the transport of hazardous materials by all of the DOT operating administrations and energy transportation by pipelines.

3

The Commodity Flow Survey captures data on shipments originating from selected types of business establishments located in the 50 states and the District of Columbia. Respondents provide the following information about their establishment’s shipments: domestic destination or port of exit, commodity, value, weight, mode(s) of transportation, the date on which the shipment was made, and an indication of whether the shipment was an export, hazardous material, or containerized.


In contrast to the FAA and PHMSA, NHTSA does not have a current definition of risk assessment. It does, however, interpret the Bulletin’s definition of risk assessment to apply primarily to an agency’s estimates of fatalities and injuries that result from specific vehicular or behavioral characteristics. NHTSA believes that, under the Bulletin’s definition, risk assessments are used in regulatory analyses, evaluations, and special studies, and that they define the target population addressed by regulatory or behavioral programs.


Question 4:


About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?


Response:


The DOT does not have a standard time frame for producing risk assessments; the time required varies widely from days to years, depending on the complexity of the issue. Factors that influence the time required include the importance and complexity of the project, the level of the estimated risk, the number of alternatives being considered, the availability of information, the sensitivity of results to changing assumptions, and the consequences of an incorrect decision.


For example, when the FAA evaluates the urgency of a risk, it may conduct a preliminary risk analysis based on the limited data immediately available to it. If that analysis indicates a higher than acceptable short-term risk, interim actions may be put in place or quick attempts to refine the analysis are pursued. In rare instances of very high risk, future flights may be restricted until interim mitigations (e.g., inspections or operational restrictions) can be put in place. For the typical unsafe condition on a particular product, a more in-depth risk analysis takes several weeks to months; this time covers collecting available data, coordinating meetings with manufacturers and/or airlines, and evaluating possible mitigations.


Questions about OMB’s Definition of Risk Assessment and Applicability


Question 5:


Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?


Response:


As discussed above, the regulatory analyses that are conducted by the DOT operating administrations incorporate elements of risk assessment, even though the operating administrations have not typically considered them to be risk assessments. Consequently, the Bulletin’s expansive definition of risk assessment would cover some activities that the DOT’s operating administrations presently undertake.


For example, NHTSA does not typically conduct formal risk assessments, and in the past it has not considered its target population estimates to be risk assessments. However, NHTSA interprets the Bulletin’s risk assessment definition to include estimates of populations at risk due to specific vehicular or behavioral characteristics. Similarly, the Research and Innovative Technology Administration (RITA) believes that some of its activities would come within the Bulletin’s definition of a risk assessment. As an example, RITA points to its Travel Statistics Program, which requires independent data collection and analysis involving motor vehicle occupant fatalities.


Questions about Type of Risk Assessment (Tiered Structure)


Question 6:


In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?


Response:


There is generally not a clear line of demarcation between DOT risk assessments that are generated for regulatory analysis and DOT risk assessments that are generated for other purposes. However, as a practical matter, analysts are typically aware at the outset of a risk assessment whether the assessment is for regulatory or non-regulatory purposes.


The FAA explains that because the scope of a risk assessment does not necessarily vary based on whether the assessment is for regulatory or non-regulatory analysis, a clear demarcation would serve no practical purpose. Another operating administration, NHTSA, gave a similar explanation, noting that its analysts are well aware when risk assessments are used for regulatory purposes because the assessments are developed in the context of, and included within, the regulatory document.


Question 7:


In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?


Response:


The DOT does not currently have a clear line of demarcation between “influential risk assessments” used for regulatory purposes and other risk assessments used for regulatory purposes. One operating administration, NHTSA, notes that the only practical demarcation for “influential risk assessments” occurs when an estimate of a population at risk is associated with a regulation of sufficient scope that it requires probabilistic uncertainty analysis. In such cases, NHTSA will include in its analysis a variation in estimates of the population at risk.


Questions about Impact of the Bulletin on Agency Risk Assessment Practices


Question 8:


If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.


Response:


The DOT believes that it already complies with many of the substantive guidelines in the Bulletin. Nonetheless, the DOT believes that the overall effect of the Bulletin on the quality, conduct, and use of the DOT’s risk assessments will be positive. Indeed, the DOT supports the stated purpose of the Bulletin to enhance the quality and objectivity of risk assessments that are performed by Federal agencies.


Additionally, the DOT agrees with Section III of the Bulletin, which provides that agency efforts should be commensurate with the importance of the risk assessment. See Bulletin at 11. The DOT believes that this guideline will help prevent unnecessary expenditures of agency resources when preparing risk assessments. At a time of significantly reduced budgetary resources, the DOT maintains that it is critical that the benefits from the Bulletin’s requirements justify the costs of complying with those requirements.


Question 9:


If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.


Response:


The DOT is concerned that some of the guidelines in the Bulletin, if rigidly interpreted, may have a substantial negative effect on its preparation of risk assessments.


First, several provisions of the Bulletin may discourage the use of risk assessments in the future. For example, the DOT is concerned that Standard 3 in the Bulletin (standards related to characterization of risk) may introduce unnecessary complexities into the risk assessment process by requiring a risk assessment to contain a range of risk estimates so that the public can judge whether the estimates are conservative. Such a requirement, however, is time consuming, is not always necessary, and could deter the DOT’s operating administrations from employing such assessments. Standard 3 requires, in relevant part, that “[w]hen a quantitative characterization of risk is provided, a range of plausible risk estimates should be provided.” Bulletin at 13. An alternative approach might be to state: “Uncertainty in the data and assumptions should be evaluated to the degree necessary to demonstrate that the analysis conclusions are relatively insensitive to that uncertainty or to provide risk bounds that encompass the range of uncertainty.”


Second, the DOT believes that the Bulletin may considerably prolong the preparation of risk assessments. For example, if Standard 3 in the Bulletin were applied to risk assessments conducted by agencies through advisory committees, it might be much more difficult for the agencies to perform risk analyses and respond in a timely manner to unsafe conditions. When working outside the agency structure, for example, agencies face the burden of trying to (i) evaluate and document bounds and uncertainties and (ii) perform a complete review of prior studies. Additionally, Section III.5 of the Bulletin requires agencies to “follow appropriate procedures for peer review and public participation in the process of preparing the risk assessment.” Bulletin at 11. The DOT understands that this would require its operating administrations to respond to public comments. An exception should be added to the guidelines for agencies that plan to proceed under notice-and-comment rulemaking, in order to avoid unnecessary delay.


Third, some of the guidelines in the Bulletin could be exploited against the DOT by regulated entities in order to frustrate the DOT’s regulatory powers or evade regulation. For example, a requirement that analysis from influential risk assessments be “capable of being substantially reproduced” could be employed by regulated parties to purposely delay rulemakings. Even studies detecting low probability risks sometimes involve a great deal of technical judgment, and the same expert judgment process can produce different results over time. Especially in health and safety areas, there has to be an appropriate balance between expert judgment and scientific certainty before action is taken.


Finally, many of the risk assessment guidelines set forth in the Bulletin are more appropriate for health risk assessments than for the types of transportation safety risk assessments common at the DOT, and could therefore have an unnecessary negative effect. A remedy would be to make many of the requirements discretionary for safety decisions, depending on the particular circumstances of each assessment.


Despite the above-referenced concerns, the DOT understands that many of the negative effects of the Bulletin could be minimized through the application of the Bulletin’s “rule of reason,” which could provide the DOT operating administrations with sufficient discretion and flexibility to apply the guidelines when they are appropriate. However, further clarification could lessen the possibility of future disputes over what is reasonable.


Question 10:


If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why.


Response:


The DOT’s adherence to the guidelines in the Bulletin would entail a more formalized risk assessment process that would affect the time and resources that are necessary to complete risk analyses.


For example, one standard under Section IV is that agencies shall place the risk in perspective/context with other risks familiar to the target audience. This standard may make sense if the risk being evaluated is expressed in an obscure metric or involves odds that are difficult for readers to fathom. However, NHTSA’s risk measures are typically expressed in very straightforward terms—deaths and injuries from traffic crashes. These are easily understood concepts, and token comparisons to other types of injury statistics would be superfluous to the analysis. Further, there is no reason to include information on the timing of exposure and the onset of adverse effects, or to develop estimates of individual risk.


By way of another example, if the peer review component in the Bulletin is interpreted as requiring “formal peer review,” then significant delay—as much as six months—would likely occur. Other requirements, such as those for characterization of risk, could also cause delay.


Question 11:


One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.


Response:


Currently, the DOT operating administrations are generally scientifically objective when evaluating studies, even though the method by which the studies are evaluated may vary.


Some operating administrations already review and analyze pertinent literature as a routine part of their analytical tasks, and will continue to do so regardless of the outcome of the Bulletin. Indeed, there are DOT operating administrations that presently evaluate both positive and negative studies in light of each study’s technical quality. For example, NHTSA already considers both positive and negative studies when preparing risk assessments. If NHTSA is unable to establish a clear quality-based preference for conflicting studies, the results of both studies may be presented either as a range or through a sensitivity analysis.


Other modal administrations, however, make wide use of advisory committees that have broad representation, including individuals with significant technical expertise. In those situations, it is not clear whether time-consuming literature searches would provide any additional benefit.


Question 12:


Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?


Response:


If risk assessments are conducted by external groups, it would be helpful if those risk assessments generally met the requirements proposed in OMB’s Bulletin. However, the DOT does not believe that such risk assessments should be required to follow the Bulletin’s guidelines. Rather, the guidelines should serve as “best practices” for external assessments. One DOT operating administration, NHTSA, notes that when it employs contractors for research and development, the requirements to which its contractors are subject are substantially similar to the Bulletin’s requirements. A requirement mandating complete adherence to the Bulletin’s guidelines by contractors that conduct risk assessments could discourage the submission of useful and relevant information by external groups. External groups could, however, be advised that any variations from the “best practices” in the OMB Bulletin would need appropriate justification.


EPA Answers to Questions posed by NRC in its review of the Proposed OMB Risk Assessment Bulletin – August 3, 2006

I. Introduction


Below please find EPA’s answers to the questions posed by NRC to EPA (and other federal agencies) about the Proposed OMB Risk Assessment Bulletin. We have numbered the questions for readability. (In some cases, the order of the questions has been rearranged, e.g. question 1).


Answers were prepared in the Office of the Science Advisor with input from other Offices. Many answers were based on the general comments presented by EPA to NRC at its public meeting in June. Others were written specifically in response to this request or were drawn from existing EPA publications, primarily:


US EPA 2004; An Examination of EPA Risk Assessment Principles and Practices. EPA/100/b-04/001; www.epa.gov/osa/ratf.htm


US EPA 2002; Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency. EPA/260R-02-008; www.epa.gov/oei/quality/informationguidelines


US EPA 2006; EPA's Peer Review Handbook, 3rd Edition. EPA/100/B06/002; http://www.epa.gov/peerreview


II. NRC Questions and EPA Responses


QUESTIONS FOR ALL AGENCIES POTENTIALLY AFFECTED BY THE OMB BULLETIN


General questions about current risk assessment practices


NRC Question 1. A. Please provide a brief overview of your current risk assessment practices.


In 2004, EPA published a staff paper entitled “An Examination of EPA Risk Assessment Principles and Practices” (Staff Paper) that described its risk assessment practices at that time. This paper was developed in large part in response to public comments1 requested by OMB on EPA’s risk assessment practices. While it does not represent official EPA policy, it was reviewed and approved for publication and presents an analysis of EPA’s general risk assessment practices at that time. Chapter 1, pages 1-6, and Chapter 2, pages 11-16 provide a good overview of our current practices.

1 On February 3, 2003 (68 FR 22, pp. 5492-5527), OMB requested public comment on “ways in which ‘precaution’ is embedded in current risk assessment procedures through ‘conservative’ assumptions in the estimation of risk” and “Examples of approaches in human and ecological risk assessment…which appear unbalanced.”


NRC Question 1 B. Specifically, do you conduct probabilistic risk assessment?


EPA typically uses deterministic approaches to characterize risk, although it increasingly applies probabilistic techniques for characterizing exposure or risk in the Office of Pesticide Programs (OPP), in the Office of Solid Waste and Emergency Response (OSWER), and, for criteria pollutants, in the Office of Air Quality Planning and Standards (OAQPS).


EPA has published a number of documents related to probabilistic assessments; these include the March 1997 Guiding Principles for Monte Carlo Analysis (USEPA, 1997b), the May 1997 Policy Statement (USEPA, 1997c), and the December 2001 Superfund document Risk Assessment Guidance for Superfund: Volume III — Part A, Process for Conducting Probabilistic Risk Assessment (USEPA, 2001a). Section 3.4.3 of Chapter 3 of the Staff Paper describes generally how EPA uses probabilistic analyses with respect to hazard assessment.


“EPA cancer and other risk assessments have not included full probabilistic uncertainty analyses to date, primarily due to the need to develop relevant probability distributions in the toxicity part of risk assessment. However, quantitative statistical uncertainty methods are routinely applied in evaluation of fitting of dose-response models to tumor data, and quantitative uncertainty methods have been used to characterize uncertainty in pharmacokinetic and pharmacodynamic modeling.”


OPP increasingly is using probabilistic techniques for characterization of exposure.


OSWER routinely uses probabilistic techniques for evaluating risks from wastes, specifically in the fate, transport and exposure components of assessments used for a variety of management decisions and rules. OSWER has also used PRA to characterize variability and uncertainty in exposure assessments on a site-specific basis. Superfund has a guidance document (US EPA 2001).


For criteria air pollutants, OAQPS has conducted probabilistic exposure analyses and for some air pollutants (e.g., particulate matter, ozone) and health endpoints it has conducted probabilistic risk assessments incorporating statistical uncertainty in exposure-response and concentration-response relationships.


In addition, in July of 2005, EPA was a co-sponsor of a Contemporary Concepts in Toxicology Workshop on Probabilistic Risk Assessment (www.toxicology.org/AI/MEET/PRA_meeting.asp) and has a workgroup within the Risk Assessment Forum that is considering ways to promote probabilistic analyses, including a risk assessor—risk manager dialogue, and a clearinghouse for EPA probabilistic assessments.
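The kind of probabilistic exposure characterization described above can be illustrated with a minimal Monte Carlo sketch. Everything here is a hypothetical assumption for illustration only—the drinking-water scenario, the distributions, and every parameter value are invented and are not EPA defaults or guidance values:

```python
import random
import statistics

def simulate_exposure(n=10_000, seed=1):
    """Monte Carlo characterization of a hypothetical daily dose.

    Dose (mg/kg-day) = concentration * intake / body_weight.
    All distributions and parameters below are illustrative assumptions.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    doses = []
    for _ in range(n):
        conc = rng.lognormvariate(0.0, 0.5)   # contaminant concentration, mg/L
        intake = rng.normalvariate(2.0, 0.4)  # water intake, L/day
        bw = rng.normalvariate(70.0, 12.0)    # body weight, kg
        if intake <= 0 or bw <= 0:            # discard non-physical draws
            continue
        doses.append(conc * intake / bw)
    doses.sort()
    return {
        "median": statistics.median(doses),
        "p95": doses[int(0.95 * len(doses))],  # high-end (95th percentile) estimate
    }

result = simulate_exposure()
print(f"median dose {result['median']:.4f}, 95th percentile {result['p95']:.4f}")
```

Reporting both a central (median) and a high-end (95th percentile) estimate of the dose distribution, rather than a single point value, is the essential difference between this approach and a deterministic calculation.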


U.S. Environmental Protection Agency (USEPA). (1997b). Guiding principles for Monte Carlo analysis. EPA/630/R-97/001. Risk Assessment Forum, Office of Research and Development, Washington, DC.


U.S. Environmental Protection Agency (USEPA). (1997c). Policy for use of probabilistic analysis in risk assessment at the U.S. Environmental Protection Agency. Fred Hansen, Deputy Administrator. Science Policy Council, Washington, DC. http://www.epa.gov/osa/spc/2polprog.htm


U.S. Environmental Protection Agency (USEPA). (2001a). Risk assessment guidance for Superfund: Volume III - Part A, Process for conducting probabilistic risk assessment. EPA 540-R-02-002. Office of Emergency and Remedial Response, Washington, DC. http://www.epa.gov/oswer/riskassessment/rags3a/.


NRC Question 1 C. How do you currently address uncertainty and variability in your agency's risk assessments?


EPA has increasingly sought to characterize uncertainty in its risk estimates more completely. EPA’s 1986 set of Risk Assessment Guidelines explicitly stated the importance of characterizing uncertainty. EPA’s Exposure Assessment Guidelines developed this theme further for the exposure assessment part of risk assessment, and EPA’s Risk Characterization Policy provided still more direction for describing uncertainty in risk estimates.


Chapter 3 of the Staff Paper discusses EPA’s practices in the areas of uncertainty and variability. Below is an excerpt from the overview of the chapter.


“Uncertainty and variability exist in all risk assessments. Even at its best, risk assessment does not estimate risk with absolute certainty. Thus, it is important that the risk assessment process handle uncertainties in a predictable way that is scientifically defensible, consistent with the Agency’s statutory mission, and responsive to the needs of decision makers (NRC, 1994). Instead of explicitly quantifying how much confidence there is in a risk estimate, EPA attempts to increase the confidence that risk is not underestimated by using several options to deal with uncertainty and variability when data are missing. For example, in exposure assessment, the practice at EPA is to collect new data, narrow the scope of the assessment, use default assumptions, use models to estimate missing values, use surrogate data (e.g., data on a parameter that come from a different region of the country than the region being assessed), and/or use professional judgment. The use of individual assumptions can range from qualitative (e.g., assuming one is tied to the residence location and does not move through time or space) to more quantitative (e.g., using the 95th percentile of a sample distribution for an ingestion rate). This approach can also fit the practice of hazard assessment when data are missing. Confidence in ensuring that risk is not underestimated has often been qualitatively ensured through the use of default assumptions.”


Most recently, EPA has begun to place increased emphasis on use of quantitative uncertainty analyses in its risk assessments, and, in its IRIS assessments, will be moving away from promoting a single value for both non-cancer and cancer effects and will instead recognize and quantify the range of uncertainty in estimates of potential hazard and risk.


NRC Question 1 D. Is there a common approach to both risk assessments and uncertainty analysis?


EPA has a long history of the development of risk assessment guidance to foster consistent practices between and within different effect areas, e.g. carcinogenicity, neurotoxicity, or for different categories of assessments, e.g. cumulative risk assessment, benchmark dose analysis. Approaches to uncertainty analysis are less well developed at this point, but are a goal for the Agency. Section 3.3.3 of the Staff Paper on uncertainty analysis describes a general EPA tiered approach.


“Over the years, improved computer capabilities have created more opportunities to characterize uncertainty. As a result, advocates promote such characterization in all cases. We need to be judicious in which methods we apply, such as Monte Carlo analysis. Uncertainty analysis is not a panacea, and full formal assessments can still be time- and resource-intensive. Further, the time and resources needed to collect an adequate database for such analyses can be a problem. While uncertainty analysis arguably provides significant information to aid in decision making, its relative value is case-specific and depends on the characteristics of the assessment and the decision being made. In some cases, a full probabilistic assessment may add little value relative to simpler forms. This may occur where more detailed uncertainty analysis (or analysis focused on non-critical uncertainties) does not provide information which has any impact on the overall decision.”

“Accordingly, EPA’s practice is to use a “tiered approach” to conducting uncertainty analysis; that is, EPA starts as simply as possible (e.g., with qualitative description) and sequentially employs more sophisticated analyses (e.g., sensitivity analysis to full probabilistic), but only as warranted by the value added to the analysis and the decision process. Questions regarding the appropriate way to characterize uncertainty include:

  1. Will the quantitative analysis improve the risk assessment?

  2. What are the major sources of uncertainty?

  3. Are there time and resources for a complex analysis?

  4. Does this project warrant this level of effort?

  5. Will a quantitative estimate of uncertainty improve the decision? How will the uncertainty analysis affect the regulatory decision?

  6. How available are the skills and experience needed to perform the analysis?

  7. Have the weaknesses and strengths of the methods involved been evaluated?

  8. How will the uncertainty analysis be communicated to the public and decision makers?”

NRC Question 2. Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.


The principal scientific challenge relates to limited data.


Data limitations stem from reliance on whatever data are available and may include qualitative hazard characterization without identification of the full range of potential hazards; quantitative analyses with limited data points; reliance on animal data for estimating risks to humans; an absence of hazard or exposure data on susceptible lifestages at potential risk; and reliance on data on individual chemicals when estimating risks likely to involve exposure to multiple agents.


Specific data limitations arise, for example, in evaluating countervailing risks (e.g., the implications of reduced income as an indirect impact); in defining the timing of exposure and of the onset, reduction, or cessation of adverse effects; and in estimating population risk from safety assessments (e.g., reference doses).


There are many places within the exposure to outcome continuum where additional data can be quite instrumental either in establishing the adversity of exposure or in reducing the uncertainty in an assessment. EPA encourages, wherever possible, the development of more biological data, or other data for refining risk assessments.


EPA has recently placed increased emphasis on mode of action information in its cancer risk assessments as a way of evaluating alternative (non-linear) dose response models. These data can play an important role in defining the biological plausibility of alternative models.


EPA has also recently emphasized a preference for data-derived uncertainty factors rather than the default assumptions used in safety assessment (Reference Dose) calculations, such as the data-derived factors used in intra-species extrapolation.


Another important consideration is the increasing role of biochemical data or newer types of data, e.g., genomics, in defining events that may be linked with adverse outcomes and become valid endpoints for risk assessment.


Technical challenges may include application of multiple models with limited datasets, estimation of indirect countervailing risks of alternatives, and others.


There are areas of risk assessment for which the application of some probabilistic and statistical methods is not straightforward and additional guidance may need to be developed. For example, quantitative uncertainty analysis (of which 2-dimensional Monte Carlo assessment is one example) and probabilistic hazard assessment are areas in which techniques are available, but EPA believes it could benefit from developing and articulating guidance on their application to some risk assessments. As another important example, much of the historical effort in risk assessment has been devoted to “safety assessment”—the development of adequate margins of exposure or safety for key variables to prevent toxicity of products, failure of structures, and the like. Such safety analyses may not be quickly replaced with more extensive calculations of statistical bounds and probabilities.
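The 2-dimensional Monte Carlo technique mentioned above nests two sampling loops: an outer loop over uncertain parameters (how well the distribution itself is known) and an inner loop over variability (e.g., person-to-person differences). The following minimal sketch uses purely illustrative, hypothetical distributions; no value in it reflects any actual EPA assessment:

```python
import random

def two_d_monte_carlo(n_outer=200, n_inner=500, seed=2):
    """Two-dimensional Monte Carlo sketch.

    Outer loop: samples *uncertainty* (the true mean/sd of an intake
    distribution are themselves uncertain). Inner loop: samples
    *variability* (individual-to-individual intake). Returns a credible
    interval for the 95th-percentile intake. All values are illustrative.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    p95s = []  # one high-end estimate per plausible "state of knowledge"
    for _ in range(n_outer):
        mu = rng.normalvariate(2.0, 0.2)                  # uncertain mean, L/day
        sigma = abs(rng.normalvariate(0.4, 0.1)) + 1e-6   # uncertain sd, L/day
        inner = sorted(max(rng.normalvariate(mu, sigma), 0.0)
                       for _ in range(n_inner))
        p95s.append(inner[int(0.95 * n_inner)])           # variability percentile
    p95s.sort()
    # 90% credible interval on the 95th-percentile intake
    return p95s[int(0.05 * n_outer)], p95s[int(0.95 * n_outer)]

lo, hi = two_d_monte_carlo()
print(f"95th-percentile intake lies between {lo:.2f} and {hi:.2f} L/day")
```

The output is not a single high-end number but a band around it, which is exactly the distinction the guidance discussion above is concerned with: separating what varies across the population from what is simply not known.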


Application of central estimates and confidence bounds in dose-response assessments may also require further development prior to routine application: in some contexts, derivation of central estimates and statistical bounds will require further methods development, and the proposed methods and applications should be subject to peer review before use. What is meant by a central estimate may also need more discussion or guidance. The definition may be context specific, i.e., it may vary, or even not be appropriate, depending on the regulatory and statutory context, and agencies need the flexibility to make these determinations.


There are a number of additional areas in risk assessment where there may be technical challenges.


These include:

  • the state of development of methodologies, and the need to understand statutory needs and specific context, for example in reporting results as population risks;

  • the need for clear definitions, an understanding of the needs of the decision, the statutory environment, and the specific context when distinguishing between central estimates and expected risks;

  • limited or no data to support a quantitative measure of the relative plausibility of alternative risk estimates; and

  • the need for caution (See NRC, 1994) in treating fundamentally different predictions as quantities that should be averaged.

Level of uncertainty in risk estimates is a central issue addressed in EPA risk assessments. This uncertainty is inherent in both exposure estimates and estimates of potential effects (e.g., weight of evidence and dose-response). For our most influential assessments (e.g., National Ambient Air Quality Standards (NAAQS)), EPA conducts quantitative uncertainty analysis for both exposure and effects. However, because of unquantifiable model uncertainty, the large number of input parameters, and limited data on their distributions, even the most comprehensive uncertainty analyses do not present the true distribution of uncertainty. EPA has efforts underway to further develop methods to address uncertainty, including expert elicitation.


In many assessments (especially exposure assessments) where distributional information is not available, uncertainty is partially characterized by providing several discrete sets of assumptions that span the range of potential values. In many cases where data are inadequate, default values or high-end values (intended not to underestimate risks) are used in the analysis. In such cases, their potential impact on the assessment is characterized.


For cancer potency assessments, EPA follows the approach in its 2005 Cancer Guidelines (i.e., provide confidence limits based on the point of departure (POD) and indicate that risks may be as low as zero). In some cases, a range of potency estimates is presented; in others, alternative approaches (which EPA believes are adequately supported) are discussed.


For most non-cancer effects (e.g., RfDs in IRIS), EPA typically presents confidence limits where PODs are derived from benchmark dose analyses. However, RfDs are typically presented as point estimates, and the uncertainty around those estimates is unknown. As for cancer assessments, the risk often may be as low as zero. Uncertainty factors and a qualitative confidence characterization are also presented.
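To illustrate how a confidence limit on a benchmark-dose point of departure can be obtained, the sketch below fits a quantal-linear dose-response model to hypothetical bioassay data and derives a BMDL by parametric bootstrap. The data, model choice, and bootstrap approach are assumptions for illustration only, not EPA's benchmark dose software or procedures:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Hypothetical quantal bioassay data: dose, animals tested, animals responding.
doses     = np.array([0.0, 10.0, 30.0, 100.0])
n_tested  = np.array([50, 50, 50, 50])
n_respond = np.array([0, 4, 10, 26])

def fit_quantal_linear(k):
    """Maximum-likelihood estimate of b in P(d) = 1 - exp(-b*d)."""
    def neg_log_lik(b):
        p = np.clip(1.0 - np.exp(-b * doses), 1e-9, 1.0 - 1e-9)
        return -np.sum(k * np.log(p) + (n_tested - k) * np.log(1.0 - p))
    return minimize_scalar(neg_log_lik, bounds=(1e-6, 1.0), method="bounded").x

def benchmark_dose(b, bmr=0.10):
    """Central-estimate BMD: the dose producing extra risk equal to the BMR."""
    return -np.log(1.0 - bmr) / b

b_hat = fit_quantal_linear(n_respond)
bmd = benchmark_dose(b_hat)

# Parametric bootstrap: simulate responses from the fitted model, refit,
# and take the 5th percentile of the BMD as a one-sided lower 95% bound.
p_hat = 1.0 - np.exp(-b_hat * doses)
boot_bmds = [benchmark_dose(fit_quantal_linear(rng.binomial(n_tested, p_hat)))
             for _ in range(200)]
bmdl = np.percentile(boot_bmds, 5)

print(f"BMD10 = {bmd:.1f}, BMDL10 = {bmdl:.1f} (same dose units as input)")
```

Here the BMD is a central estimate and the BMDL is the statistical lower bound that would typically serve as the POD; the contrast between the two is the distinction between central estimates and confidence bounds discussed above.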


Alternative models/Model Uncertainty: EPA uses expert judgment based on the available data to focus the choice of models to be evaluated. For our most significant assessments (e.g., NAAQS), the quantitative implications of these alternatives are more fully explored. For most Agency exposure assessments, programs typically use a single preferred exposure model to develop exposure estimates. Such models have been peer reviewed, and their performance and limitations are well documented. Where new models are used, model uncertainties are presented.


As noted in EPA’s Cancer Guidelines, many aspects of model uncertainty in risk assessment related to human health hazards (e.g., the use of animals as a surrogate for humans) are difficult to quantify. Further, the bases for analyses of many of these aspects of risk assessment often rest on science policy choices or inference guidelines that have been justified based on the available general evidence and peer reviewed as generic science policy default choices.


Defining adversity is both a challenging and complex issue.


Endpoints chosen as points of departure or as critical effects are not always adverse per se. However, they may well be associated with adverse outcomes and, if the evidence is sufficient, they appropriately and often serve as the critical endpoints in risk assessments. Examples include the use of blood acetylcholinesterase inhibition as an endpoint and the use of precursor effects to prevent frank toxicity (as in the recent NRC recommendations regarding perchlorate).


Adversity is not a yes/no phenomenon in many situations, so endpoint selection is governed by the considerations in EPA’s risk assessment guidelines and by professional judgment.


Evidence comes in many levels of quality and detail, and it is the weight of the evidence, or its integrated whole, that will often support a judgment, not simply the “best evidence”.


Finally, another technical challenge facing EPA is our evolving understanding of the science and engineering processes involved in improving the conceptual models used to describe and model chemical fate and transport in the environment. A recent example is the consideration of organic chemicals in the generation of gases from waste placed in a landfill.


NRC Question 3. What is your current definition of risk assessment, and what types of products are covered by that definition?


From EPA Staff Paper, section 1.1.1


“The most common basic definition of risk assessment used within the U.S. Environmental Protection Agency (EPA) is paraphrased from the 1983 report Risk Assessment in the Federal Government: Managing the Process (NRC, 1983), by the National Academy of Sciences’ (NAS’s) National Research Council (NRC):


Risk assessment is a process in which information is analyzed to determine if an environmental hazard might cause harm to exposed persons and ecosystems.”


EPA has long embraced the idea that a risk assessment consists of analyses encompassing the four steps described in NRC (1983): hazard identification, dose-response assessment, exposure assessment, and risk characterization. Implicit in the completion of these steps is the characterization of the magnitude or extent of the potential hazards.


In carrying out its mission, EPA conducts a wide range of analyses that fall within this definition. A series of presentations to the NRC committee examining Toxicity Testing and Assessment of Environmental Agents (1-19-06), made by EPA speakers from programs that regulate air, water, solid waste, toxic substances and pesticides, describes the regulatory environment and the range of EPA products. Many of EPA’s programs rely on hazard identification and dose response assessments developed by the Office of Research and Development under its IRIS program.


NRC Question 4. About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?


Assessments vary widely in their complexity and in the time needed for their production and completion.

For examples:

  • review of pre-manufacture notices under the Toxic Substances Control Act to support a concern for significant hazard or exposure must take place within ninety days of submission;

  • provisional peer review toxicity values for Superfund sites may be completed in weeks or a few months;

  • more complex assessments including Integrated Risk Information System (IRIS) assessments, site-specific assessments, or pesticide registration risk assessments may take one to five years; and

  • for some of the most complex assessments (e.g., dioxin, the Libby, Montana, site-specific risk assessment), in which there is significant controversy and significant new data, the time needed may extend well beyond five years.

It should be noted that much of this time is due to requirements not only for rigorous scientific evaluation but also for coordination across the Agency, internal peer review, interagency review, external peer review, and final approvals.


Questions about OMB’s definition of risk assessment and applicability


NRC Question 5. Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?


OMB’s definition applies the term “risk assessment” to work products that are less than complete risk assessments, e.g., hazard characterizations and dose-response assessments such as IRIS entries. EPA does not see a big change in its practices as a result of this new, more inclusive definition. EPA recognizes that many of these products do end up as a major basis of subsequent, fully developed risk assessments.


Questions about type of risk assessment (tiered structure)


NRC Question 6. In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?


In general, most EPA risk assessment activities are tied to some aspect of a regulatory analysis, even if they do not result in a full (four step) risk assessment.


While the regulatory purpose should generally be apparent at the outset of the assessment, in the planning and scoping phase, the ultimate regulatory needs and uses may evolve over time and may differ across settings and customers.


With respect to actions that may need regulatory impact analyses (RIAs), and that could be subject to OMB Circular A-4, some actions clearly do, some do not, and for some the need may only become apparent as an assessment is developed.


OMB Circular A-4 advocates a flexible approach to these analyses, stating (p. 3):

“You will find that you cannot conduct good regulatory analysis according to a formula. Conducting high-quality analysis requires competent professional judgment. Different regulations may call for different emphases in the analysis, depending on the nature and complexity of the regulatory issues and the sensitivity of the benefit and cost estimates to the key assumptions.”


EPA agrees with this emphasis on professional judgment and consideration of the differences in the nature and purpose of an Agency’s assessments related to A-4 and to all risk assessments.


There is a need for flexibility given the variety of statutory mandates and types of assessments to which the section would apply. Differences between RIAs and risk analyses conducted for other purposes mean that not all standards should be applicable to all regulatory risk assessments. They should, of course, where appropriate, maintain consistency with the requirements of Circular A-4.


NRC Question 7. In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?


EPA has set out a number of criteria for determining whether an assessment is an influential risk assessment and considers this a case-by-case process with no clear demarcation point. These judgments are made in part to determine which upcoming assessments are subject to peer review, and so are made early in the process.


EPA interprets influential risk assessment to mean any risk assessment (or component), as defined above, that meets the OMB Peer Review Bulletin’s definition of “influential scientific information,” which is, "scientific information the agency reasonably can


determine will have or does have a clear and substantial impact on important public policies or private sector decisions," as described in EPA’s Peer Review Handbook, 3rd edition. The Handbook states:


“Generally, determinations whether a scientific and/or technical work product is “influential” will occur on a case-by-case basis. The continuum of work products covers the range from the obviously influential, which clearly need peer review, to those products which clearly are not influential and don’t need peer review. There is no easy, single “yes/no” test that applies to the whole continuum of work products for determining whether a work product is influential scientific information.


The novelty or controversy associated with the work product may determine whether it is influential scientific information. Influential scientific information may be novel or innovative, precedential, controversial, or emerging (“cutting edge”). An application of an existing, adequately peer-reviewed methodology or model to a situation that departs significantly from the situation it was originally designed to address may make peer review appropriate. Similarly, a modification of an existing, adequately peer-reviewed methodology or model that departs significantly from its original approach may also make peer review appropriate. Determining what constitutes a “significant departure” is the responsibility of the decision maker (SPC Peer Review Handbook, 3rd edition, section 2.2.3).”


The Handbook also provides criteria to evaluate whether products should be considered influential: “Generally, scientific and/or technical work products that are used to support a regulatory program or policy position and that meet one or more of the following factors would be considered to be influential scientific information:

  1. Establishes a significant precedent, model, or methodology;

  2. Likely to have an annual effect on the economy of $100 million or more, or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, Tribal, or Local governments or communities;

  3. Addresses significant controversial issues;

  4. Focuses on significant emerging issues;

  5. Has significant cross-Agency/interagency implications;

  6. Involves a significant investment of Agency resources;

  7. Considers an innovative approach for a previously defined problem/process/methodology;

  8. Satisfies a statutory or other legal mandate for peer review.”

Questions about impact of the Bulletin on agency risk assessment practices


NRC Question 8. If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.


EPA supports the broad goal of this OMB Bulletin to improve the quality, objectivity, utility, and integrity of risk assessments. Many of the Bulletin’s standards are drawn from National Research Council (NRC) reports that EPA supported and whose recommendations have been endorsed by EPA. Many of the approaches presented in the supplementary information section of the proposed Bulletin (“Preamble”) have already been adopted by EPA:

  • in our quality system which includes our implementation of the OMB Information Quality Guidelines and OMB Peer Review Bulletin;

  • in the EPA Risk Characterization Handbook (www.epa.gov/osa/spc/2polprog.htm);

  • in the EPA Staff Paper on Risk Assessment Principles and Practices; and

  • in other EPA guidance, guidelines, and policies.

Further, EPA is engaged in a wide variety of activities to advance risk assessment practices:

  • agency wide workgroups including a Probabilistic Analysis Workgroup, and a task force on Expert Elicitation;

  • in specific activities in different program offices and regions, particularly the National Center for Environmental Assessment (NCEA); and

  • in support of intramural research in EPA labs and support of extramural research on risk assessment practices (e.g. Resources for the Future report on uncertainty analysis).

EPA supports the general goals described in section III of the proposed Bulletin. These goals call for dialogue between risk assessors and decision makers in order to define the objectives of the assessment. This dialogue, in turn, defines the scope and content of the risk assessment, considering professional judgment and the costs and benefits of acquiring additional data before initiating the assessment. The goals provide flexibility in the type of risk assessment based on the hazard, the data, and the decision needs; furthermore, the goals indicate that the level of effort should be matched to the importance of the assessment. In contrast, sections IV and V describe the twenty standards referred to above in categorical and mandatory terms; for example, “All influential risk assessments shall…” (Sec. V), or “…the agency shall include a certification explaining that the agency has complied with the requirements of this bulletin.”


The contrast between the flexibility described in the general goals and the prescriptive nature of the twenty proposed standards makes it unclear what will be required in any one of the enormous variety of circumstances under which EPA and other agencies work. This, in turn, may lead to unrealistic expectations within and outside the federal government regarding compliance with the proposed Bulletin.


The Bulletin should integrate the flexibility described in the goals in Section III with the standards in Sections IV and V.


Because of the breadth of the areas covered in the standards, and the complexity of their application and implementation, EPA suggests that OMB consider the model used for the Information Quality Guidelines, that is, to issue general guidance, and to ask each Agency to develop guidance appropriate for the scope of its activities, which OMB would review.


EPA believes that it could substantially comply with the standards in the proposed Bulletin through compliance with its Information Quality Guidelines, Peer Review Policy, Risk Characterization Policy, Monte Carlo Policy, Risk Assessment Guidelines, and other existing, related guidelines, policies, and guidance. EPA believes that developing EPA guidance grounded in those specific Agency policies, guidelines, and guidance would provide considerably greater detail and thereby promote greater transparency and clarity in its practices for those within and outside the government.


EPA believes that this process would ensure greater consistency and integration with current practices, while advancing the practice of risk assessment in the specific areas described in the standards.


NRC Question 9. If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.


We see the issues here as essentially related to clarity, transparency, and conduct of risk assessments.


Aspects related to clarity and transparency include:


Sections VIII and IX


While Section IX gives OIRA and OSTP responsibility for overseeing implementation of the Bulletin, it does not outline any roles and responsibilities for decision-making, resolution of disagreements between agencies and OMB, certifications, waivers, exemptions, and other areas. The document should describe how interactions between OMB and the Agencies will work in implementing the Bulletin.


Implementation of the deferral and waiver provisions of Section VIII is unclear and ambiguous as to what is required; that is, when is a standard being “waived” as opposed to merely being applied “flexibly”?


While the proposed Bulletin does provide an opportunity to waive or defer some or all of the indicated standards, this opportunity is defined in a very limited way. Under Section VIII, only the agency head may waive or defer the standards, which would likely result in an undue expenditure of effort and time within the Agency. In addition, deferral


only delays the implementation of full compliance with the Bulletin and does not provide any real relief. The proposed Bulletin does not describe any criteria for granting a waiver or for providing for exemptions, but it indicates that even deferral is expected to be a rare event.


Scientific “defaults” or “inference guidelines” play an important role for EPA in providing a consistent and peer reviewed means of addressing recurring, fundamental issues of science policy in its risk assessments. The proposed Bulletin does not address this aspect of risk assessment practice that is discussed in the 1983 NRC “redbook” and specifically described for different areas in the EPA Risk Assessment Guidelines. However, as emphasized in the 2005 Cancer Guidelines, EPA sees that a critical analysis of all the available information relevant to assessing risk is the starting point from which default options may be invoked to address uncertainty or the absence of critical information.


Aspects related to Conduct of Risk Assessments


Those aspects of the Bulletin that could have the greatest negative impact on conduct, in addition to those that may pose technical challenges, are those that have a potentially broad scope, e.g., those that call for multiple analyses. The primary negative effect might be increased need for time and/or resources.

The standards of the proposed Bulletin would come into play for a large class of agency products and, if categorically adopted, would mandate a high level of analysis and development of characterization that goes beyond most current EPA practice in risk assessment.


While EPA appreciates the fact that the Bulletin does not create legal rights (Section XI), challenges claiming that a risk assessment or its supporting analyses have not fully carried out the practices established by the Bulletin may arise in many other fora. Such claims could pose an additional burden.


Several standards discuss multiple analyses, including IV(5), a quantitative evaluation of reasonable alternative assumptions, and V(5), portrayal of results based on different effects observed and/or different studies. We have some general concerns about these analyses as drafted:

  • their scope may be impractical;

  • one should consider the value added (benefits) of these analyses versus their costs as a function of the importance of the assessment, and their relative value in comparison to collecting data;

  • multiple analyses may pose risk communication challenges; and

  • in some cases, the complexity of the analyses may limit their feasibility.


Section V 9. states: Consider all significant comments received on a draft risk assessment report and:

  1. issue a “response-to-comment” document that summarizes the significant comments received and the agency's responses to those comments; and

  2. provide a rationale for why the agency has not adopted the position suggested by commenters and why the agency position is preferable.

EPA conducts its peer reviews and public involvement in line with its defined policies in these areas and consistent with the OMB Peer Review Bulletin, which provides for different processes for influential scientific information and highly influential scientific assessments. This section goes beyond those guidelines by calling for a response-to-comment package for all influential risk assessments, and also in its call not only to explain the basis for the agency position but also to explain why other approaches were not adopted. This goes beyond the peer review procedures even for highly influential scientific assessments and beyond most practice we know of in this area.


NRC Question 10. If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why.


If EPA followed all of the procedures described in the twenty standards, assessments could take considerably longer. If, alternatively, scoping and planning lead to an appropriately defined assessment in terms of its scope, as noted in the goals section, then those assessments should be efficient, and there would be limited impact on current timelines.


NRC Question 11. One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.


Weight-of-evidence analyses, to which the Agency subscribes, embrace the notion of considering all the evidence, consistent with its quality. Thus, any published EPA risk assessment should satisfy this standard in that sense. The phrase “giving weight to both positive and negative studies” has quantitative connotations, and the term “consideration” may be preferable.


See also section 4.4.2, page 72, of the Staff Paper, which illustrates how positive evidence has not been uncritically accepted in the analysis of carcinogenicity data.


NRC Question 12. Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by


external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?


Yes, in some cases the Agency has relied upon assessments conducted by external groups, including NRC panels, the World Health Organization, the Canadian government, ATSDR, and CAL-EPA. In general, their conformity with the requirements of the Bulletin, as feasible and appropriate, would be a laudable goal, both for those whose assessments may be used and, more broadly, for those who might wish to propose alternative analyses for consideration.


ADDITIONAL QUESTIONS FOR SPECIFIC AGENCIES: EPA


NRC Question 13. Regarding pesticides specifically, what risk-assessment activities will be covered by the Bulletin and what risk-assessment activities will be exempted?


The Agency agrees with the OMB Bulletin that risk assessments for permitting or licensing programs should be exempt. Thus, pesticide risk assessments or actions under FIFRA would be excluded, given that the pesticide registration/re-registration program is a licensing program. However, the proposed Bulletin did indicate that actions involving assessment or reassessment of tolerances for pesticide residues on food would be subject to the Bulletin (page 10, par. 2). EPA’s Office of Pesticide Programs conducts risk assessments in support of the establishment of tolerances under the Federal Food, Drug, and Cosmetic Act (FFDCA). Because pesticide risk assessments supporting tolerances are tied to the pesticide registration/re-registration program (i.e., licensing), such risk assessments should also be exempted from the OMB Bulletin. Furthermore, all new food tolerances are affected by the short time frames (two years or less) of the Pesticide Registration Improvement Act (PRIA). Although pesticide risk assessments tied to the registration/re-registration program (licensing) should be exempted, we agree with OMB that certain pesticide risk assessments that raise significant science issues debated by the scientific community and that have intra- and interagency impacts on regulatory decisions of broad consequence (e.g., arsenicals) should be subject to the Bulletin.


NRC Question 14. Does EPA have any examples of the application of the 1996 requirements of the Safe Drinking Water Act, as described on page 13 of the Bulletin? Can any examples be provided to the committee? If none are available, can EPA provide an explanation?


EPA has adapted these requirements in its implementation of the Information Quality Guidelines (US EPA, 2002). Thus, any assessments published subsequent to our completion of that document should be consistent with the elements described therein.


In issuing its IQGs, EPA adapted the SDWA principles. As EPA explained in its IQGs, “EPA conducts and disseminates a variety of risk assessments. When evaluating environmental problems or establishing standards, EPA must comply with statutory requirements and mandates set by Congress based on media (air, water, solid, and hazardous waste) or other environmental interests (pesticides and chemicals). Consistent with EPA's current practices, application of these principles involves a “weight-of-evidence” approach that considers all relevant information and its quality, consistent with the level of effort and complexity of detail appropriate to a particular risk assessment.” EPA committed to ensure, to the extent practicable and consistent with Agency statutes and existing legislative regulations, the objectivity of our dissemination of influential scientific information regarding human health, safety or environmental risk assessments by applying an adaptation of the SDWA principles.


EPA adapted the SDWA principles in the Agency’s IQGs, “in light of our numerous statutes, regulations, guidance and policies that address how to conduct a risk assessment and characterize risk” in order to:

  • Implement SDWA principles in conjunction with and in a manner consistent with Agency statutes, existing legislative regulations, and our existing guidelines and policies for conducting risk assessments.

  • Accommodate the range of real world situations that EPA confronts in the implementation of our diverse programs. For example, EPA’s adaptation covers situations where EPA may be called upon to conduct “influential” scientific risk assessments based on limited information or in novel situations, and recognizes that all “presentation” information called for in the SDWA principles may not be available in every instance. Our adaptation recognizes that the level of effort and complexity of a risk assessment should also balance the information needs for decision making with the effort needed to develop such information.

  • Enable EPA to use all relevant information, including peer reviewed studies, studies that have not been peer reviewed, and incident information; evaluate that information based on sound scientific practices as described in our risk assessment guidelines and policies; and reach a position based on careful consideration of all such information (i.e., a process typically referred to as the “weight-of-evidence” approach). As noted in our IQGs, EPA uses a weight-of-evidence approach, in which a well-developed, peer-reviewed study would generally be accorded greater weight than information from a less well-developed study that had not been peer reviewed, but both studies would be considered.

2

US EPA (December 2002). Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency (EPA/260R-02-008). Washington, DC: Office of Environmental Information.

  • Allow EPA to use terms that are most suited for environmental (ecological) risk assessments. EPA assessments of ecological risks address a variety of entities, some of which can be described as populations and others of which (such as ecosystems) cannot.

The Bulletin should clarify that it does not modify or supersede OMB-approved agency adaptations of the SDWA risk assessment principles in their Information Quality Guidelines.


NRC Question 15. Does EPA have a working definition of “expected risk” or “central estimate”? The agency indicated in its 1986 cancer guidelines (51 FR 33992-34003) that central estimates of low-dose risk, based on “best fit” of the observed dose-response relationship, were meaningless—that “fit” in the high-dose region provided no information about “best fit” in the region of extrapolation. The newer cancer guidelines appear to adopt the same thinking. Has the Agency changed its view on this point? If so, why?


EPA finds the terms “central estimate” and “expected risk” to be quite different and does not use them interchangeably. EPA documents discuss central estimates from a specific model, for example, with respect to both cancer dose-response assessment and the derivation of maximum likelihood estimates for points of departure (PODs). In contrast, discussion of the notion of expected risk (not a specifically defined term, to our knowledge) in a risk assessment usually involves a particular exposure distribution and relies on a series of judgments about who is expected to be exposed (e.g., the average consumer, or the top 5% of those exposed). For safety assessment, an additional complication is a limited ability to describe what effect is expected above a reference dose.


EPA’s 2005 cancer guidelines differ significantly from its 1986 guidelines in their treatment of the “central estimate” of cancer risk. In particular, the 2005 guidelines distinguish the dose-response function within the range of observed data from the function used to extrapolate to lower doses. In contrast, the 1986 guidelines use one model (the linearized multistage model) both to fit the data and to extrapolate to lower doses. The 2005 guidelines discuss the following issues not mentioned in the 1986 guidelines:

  1. A preference for biologically-based dose-response models when there is adequate scientific support for them.

  2. The potential for biologically based modes of action that are non-linear at low doses (even in the absence of a biologically based dose-response model).

  3. The utility of central estimates, and estimates of confidence limits, when practicable, consistent with OMB and EPA guidelines on data quality.
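The two-step treatment described in the 2005 guidelines (model the observed dose range, then extrapolate from a point of departure) can be sketched numerically. Everything below is purely illustrative: the bioassay data are hypothetical, a simple quadratic stands in for a formal dose-response model, and actual practice uses EPA's benchmark-dose software and the statistical lower bound on the POD (the BMDL).

```python
import numpy as np

# Hypothetical bioassay data: administered dose and fraction responding.
doses = np.array([0.0, 10.0, 50.0, 100.0])
frac = np.array([0.02, 0.10, 0.35, 0.60])

# Step 1: fit an empirical model within the range of observed data
# (a quadratic here, for illustration only).
coef = np.polyfit(doses, frac, 2)
grid = np.linspace(0.0, 100.0, 10001)
fitted = np.polyval(coef, grid)

# Step 2: locate a point of departure (POD) -- the dose giving 10% extra
# risk over background -- then extrapolate linearly below it when the
# mode of action supports low-dose linearity.
background = np.polyval(coef, 0.0)
extra_risk = (fitted - background) / (1.0 - background)
pod = grid[np.argmax(extra_risk >= 0.10)]

slope = 0.10 / pod                 # slope factor derived from the POD
low_dose_risk = slope * 0.5        # extra risk at a low dose of 0.5
```

The point of the sketch is the separation of concerns: the fitted curve is used only inside the observed range, and the low-dose estimate comes from the linear term anchored at the POD, not from extrapolating the fitted model.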


NASA Responses to National Academy of Sciences Questions Posed on Office of Management and Budget’s Proposed Risk Assessment Bulletin
July 25, 2006

QUESTIONS FOR ALL AGENCIES POTENTIALLY AFFECTED BY THE OMB BULLETIN

General questions about current risk assessment practices


Question 1: Please provide a brief overview of your current risk assessment practices. Specifically, do you conduct probabilistic risk assessment? Is there a common approach to both risk assessments and uncertainty analysis? How do you currently address uncertainty and variability in your agency’s risk assessments?


NASA Response:


NASA defines risk in a very broad sense.1 Risk is the expression of the likelihood (probability) and severity of scenarios leading to potential undesired consequences with respect to achieving established and stated program objectives, which generally fall into two categories: technical and programmatic. Technical objectives are associated with attributes such as safety and performance. Programmatic objectives are associated with attributes such as schedule and cost. When specifically considering technical risk, the undesired consequences of interest to NASA include:

  • Death, injury, or illness to a member of the public.

  • Loss of crew.

  • Mission failure.

  • Death, injury, or illness to ground crew and other workforce (occupational).

  • Earth contamination.

  • Planetary contamination.

  • Loss of, or damage to, flight systems.

  • Loss of, or damage to, ground assets (program facilities and public properties).

Regardless of the type of risk that may be of interest for specific circumstances, assessments performed at NASA for technical risks typically involve the definition and characterization of three components of risk:

  • The sequence of possible events that constitutes a risk scenario (events leading to an undesired consequence),

  • The probability of the risk scenario occurring (expressed qualitatively or quantitatively), and

  • The severity of the consequences that constitute the outcome of the risk scenario (expressed qualitatively or quantitatively).2

1

NASA Procedural Requirements (NPR) 8000.4 defines risk as “the combination of the probability that a program or project will experience an undesired event (such as a cost overrun, schedule slippage, safety mishap, environmental exposure, or failure to achieve a needed scientific or technological breakthrough or mission success criteria) and the consequences, impact, or severity of the undesired event, were it to occur.”

2

This is done in the form of the numeric magnitude of the parameter, or a set of parameters that best represent the impact of consequences.
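The three components listed above amount to the classic risk-triplet characterization (scenario, likelihood, consequence). A minimal sketch of that structure follows; the class name, field names, scenarios, and probabilities are illustrative inventions, not NASA data or NASA software.

```python
from dataclasses import dataclass

# Minimal sketch of a risk triplet; all names and numbers are illustrative.
@dataclass
class RiskScenario:
    description: str     # sequence of events leading to the end state
    probability: float   # likelihood, here expressed quantitatively
    consequence: str     # undesired end state (e.g., "loss of mission")

scenarios = [
    RiskScenario("engine shutdown during ascent, abort fails", 1e-4, "loss of crew"),
    RiskScenario("micro-meteoroid strike penetrates shielding", 5e-5, "loss of mission"),
]

# Aggregate likelihood of one end state across all contributing scenarios.
p_loss_of_mission = sum(s.probability for s in scenarios
                        if s.consequence == "loss of mission")
```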


In addition to NASA’s mission-related risk assessments, external regulatory agencies at the Federal and State levels may require NASA to conduct project-specific (e.g., nuclear missions) or site-specific risk assessments to evaluate the extent of environmental contamination, potential threat to human health (including occupational exposures), and the environment or evaluate remediation response alternatives. In these cases, NASA utilizes the risk assessment technical procedures approved by the regulating agency, in conformance with the requirements of the regulating agency.


Application of Probabilistic Techniques for the Assessment of Technical Risks


NASA uses the scenario-based modeling framework for probabilistic risk assessment (PRA) of space systems. This framework is employed primarily because space-related accidents with adverse safety consequences are too infrequent for the risk to be assessed directly using actuarial methods. The scenario-based modeling framework involves the following steps:

  1. Define a set of undesired consequences (e.g., loss of crew, loss of mission)

  2. Develop, for each undesired consequence, a set of off-normal trigger conditions or events which, if uncontained or unmitigated, can lead to the undesired consequence. These disturbances are referred to as initiating events (IEs).

  3. Employ various systematic techniques to identify risk scenarios (sequences of events) that start with an IE and end at an undesired consequence (called an “end state” in PRA). These scenarios include hardware failures, human errors, and physical phenomena.

  4. Evaluate, using Bayesian approaches, the probabilities of these scenarios based on available evidence, expert judgment, and data from similar systems. Evaluating uncertainties is an important part of this activity. The probabilities are updated as new information is gained.
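Step 4's Bayesian updating is commonly sketched with a conjugate gamma-Poisson model, in which the update as new operating experience accrues is closed-form. The prior parameters and the event counts below are hypothetical illustrations, not NASA figures.

```python
# Hypothetical gamma prior on an initiating-event rate (per mission):
# prior mean = alpha / beta = 1/200.
alpha, beta = 1.0, 200.0

# New evidence from operating experience: 2 events in 300 missions.
events, missions = 2, 300

# Conjugate update: gamma prior + Poisson likelihood -> gamma posterior.
alpha_post = alpha + events
beta_post = beta + missions

posterior_mean = alpha_post / beta_post  # updated rate estimate: 3/500 = 0.006
```

As the text notes, the probabilities are simply re-updated in the same way each time further evidence becomes available.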

To date, NASA has performed a number of scenario-based PRA studies, two of them for major programs, namely the Space Shuttle and the International Space Station [1, 2]. PRA techniques are now being used in trade studies for the new exploration systems [3, 4]. In addition, numerous risk assessments have been conducted to support the decision process for the launch of radioactive materials.3 [5]

To improve the quality of PRAs and to formalize their integration with engineering activities, NASA has developed procedures and requirements for PRA methods and applications. NASA is currently developing procedures for using PRA to support risk-informed decision making. The following is a list of PRA-related documents that NASA has developed:

3

These assessments/evaluations are required under the requirements of Presidential Directive/National Security Council Memorandum Number 25 (PD/NSC-25), “Scientific or Technological Experiments with Possible Large-Scale Adverse Environmental Effects and Launch of Nuclear Systems into Space.” NASA missions involving the launch of radioactive materials must also comply with the provisions of the National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.).


According to NASA requirements, a PRA has to be conducted during the project formulation and design concept phases and has to be maintained and updated periodically throughout the system life cycle to support design and operational decisions. Because the PRA models must be synchronized with the system design and the operational state of knowledge, they are interactive in nature. Furthermore, the focus of PRA models often changes during the life cycle of the system, depending on where the dominant risk contributors appear to originate. Because of these properties, and in the context of managing risk, a typical NASA PRA can be viewed neither as a static engineering calculation whose results are fixed nor as a single deliverable document.4


Evaluation of Uncertainties in PRAs


NASA considers the evaluation of uncertainties to be an essential part of evaluating technical risks, in particular the uncertainties associated with the risk scenario probabilities and the risk scenario consequences. To deal with uncertainty as part of the risk function, one must be mindful of the nature of uncertainty and its characterization. According to NASA’s PRA Procedures Guide, uncertainty is classified into two broad categories or types: epistemic (state-of-knowledge) uncertainty and aleatory (variability) uncertainty.

  • Epistemic uncertainty: that uncertainty associated with incompleteness in the risk analyst’s (or analysts’) state of knowledge. In the context of modeling of system behavior, there are two categories of epistemic uncertainty:

    • Parameter Uncertainty: uncertainty in the value of a parameter of a model, conditional on the mathematical form of that model.

    • Model Uncertainty: uncertainty in whether the model adequately represents the behavior of the system being modeled.

  • Aleatory uncertainty: that uncertainty associated with variation or stochastic behavior in the physical properties or characteristics of the system being addressed. Aleatory uncertainty is manifested, for example, in the variability of the time at which a failure or a random event will occur, in variations in material properties resulting from variability in manufacturing processes, and in the variation of weather conditions. Human performance also exhibits aleatory uncertainty in its variation from day to day and from individual to individual.

4

Risk assessments conducted to support the launch of nuclear or radioactive materials would, however, be prepared as a single document in accordance with requirements established by PD/NSC-25.

The NASA PRA Procedures Guide provides methods for mathematically quantifying aleatory and epistemic uncertainty using techniques of probability theory and simulation. For example, the standard method for characterizing the uncertainty associated with a parameter value of a risk model (e.g., a failure rate) is to represent it with a probability density function. Similarly, the standard method for propagating uncertainties through a risk model is to use simulation techniques. In the context of managing the risk of space systems, it is important to separate epistemic from aleatory uncertainty. The essential difference between these two types of uncertainty is that the former is, in principle, reducible through the collection of more knowledge (e.g., by conducting research), whereas the latter is not, as it represents a property of the system being analyzed. Consequently, the means of controlling aleatory uncertainty are very different.
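The parameter-uncertainty treatment described here can be sketched as follows. The two-component system, the lognormal densities, and every number below are simplifying assumptions chosen for illustration; they are not NASA models or NASA data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Epistemic uncertainty: the per-demand failure probability of each of
# two redundant components is not known exactly, so each is represented
# by a probability density (here lognormal, as is common for rates).
p_a = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
p_b = rng.lognormal(mean=np.log(2e-3), sigma=0.5, size=n)

# Propagate each sampled parameter set through the risk model: the
# system fails only if both components fail (assumed independent).
p_system = p_a * p_b

# The result is a distribution over the system risk metric, from which
# a mean and an epistemic uncertainty interval can be read off.
mean_risk = p_system.mean()
p5, p95 = np.quantile(p_system, [0.05, 0.95])
```

The spread between `p5` and `p95` reflects state-of-knowledge uncertainty and would narrow as better failure data are collected, which is exactly the reducibility property the text attributes to epistemic uncertainty.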


Application of Probabilistic Techniques for Assessment of Programmatic Risks


NASA uses probabilistic techniques to assess programmatic risks. Cost-risk analysis results in a cost estimate associated with a probability of achieving that value. In cost-risk analysis, four sources of uncertainty are modeled: cost estimating relationship (CER) uncertainty; CER parameter input uncertainty; programmatic uncertainty; and project element correlation uncertainty.

  • CER uncertainty is due to the imperfect regression line fit to a set of cost data and is captured by including the standard error or prediction interval as a distribution around the regression line in the statistical convolution. Cost estimators provide the data for modeling of CER uncertainty.

  • CER parameter input uncertainty is provided by modeling optimistic, most likely and pessimistic values as triangular (for example) distributions when using the CER in the calculations and statistical convolutions. Engineers provide the data for the modeling of parameter input uncertainty.

  • Programmatic uncertainty is captured by modeling programmatic influences on the cost. Both engineers and cost estimators provide the data for the modeling of programmatic uncertainty.

  • Finally, correlation uncertainty between the proposed system’s elements is modeled through rules of thumb and engineering input. These correlations have to be accounted for since costs in subsystems/components tend to move consistently either in the same direction or opposite directions. Failure to do so will lead to underestimation of cost uncertainty.

The resulting cost distribution can be interpreted as a range of cost estimates, each associated with a probability p such that, if that value is chosen as the budget, the probability that the proposed project will cost that amount or less is p.5

5

For funding decisions, NASA uses 70% level of probability.
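The convolution described above can be sketched in a few lines of Monte Carlo. The element names, cost figures, and the use of a single shared multiplier to stand in for programmatic and correlation uncertainty are simplifying assumptions for illustration, not NASA's actual cost models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative element costs, $M: (optimistic, most likely, pessimistic).
elements = {
    "structure":  (40.0, 55.0,  90.0),
    "avionics":   (25.0, 35.0,  70.0),
    "operations": (15.0, 20.0,  45.0),
}

# CER parameter input uncertainty: triangular draws per element.
draws = sum(rng.triangular(lo, ml, hi, size=n)
            for lo, ml, hi in elements.values())

# Programmatic uncertainty as a common multiplicative factor; because it
# is shared, it also induces the positive correlation between elements
# that the text warns must not be ignored.
programmatic = rng.triangular(0.95, 1.0, 1.25, size=n)
total = draws * programmatic

# Budgeting at a chosen confidence level (the footnote notes 70%).
budget_70 = np.quantile(total, 0.70)
```

Dropping the shared factor and sampling every element independently would understate the spread of `total`, which is the underestimation of cost uncertainty described in the correlation bullet above.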


Question 2: Please identify any substantial scientific or technical challenges that you may encounter when conducting risk assessments for your agency.


NASA Response:


There are several significant scientific and technical challenges when conducting risk assessments of space missions:

  • Space systems operate in harsh environments. In addition to being subjected to the significant forces during launch and landing, space systems function outside the confines of Earth’s atmosphere and are subjected to orbital debris and micro-meteoroids. The modeling and evaluation of the effects of these environments on space systems and the uncertainties involved must be factored into the analysis when conducting a PRA.

  • Most space systems are designed and operated for a specific mission, such as delivering a satellite to Earth orbit, conducting advanced micro-gravity research, or exploring neighboring planets. Because of their unique designs, each system’s response to adverse environments and identified initiating events is also unique. Analysis of the response of one space system is not readily transferable to other systems. Advances in space system design and technology (hardware) also require dedicated physical modeling for specific space systems.

  • A space mission comprises a number of phases: launch, orbital operations, possible transit to other planets, and, for some missions, entry and landing back on Earth. During a mission, the configuration and operation of the systems change: propulsion systems needed for liftoff are not needed outside the confines of Earth’s gravity, and components that must operate in a certain way during one phase may need to operate in a different way during another. The modeling and assessment of this phased-mission nature of space flight pose challenges.

  • Obtaining representative reliability and failure data presents a challenge to conducting risk assessments of space systems. There are limited experience data on the operation of systems and components in space.6 In addition, as technology advances, improved space systems and hardware are fielded that increase energy efficiency and reduce weight. While these systems and components are tested in simulated space environments, there are also limited data on many of these advanced components and systems. Because of this scarcity of data, Bayesian approaches must be employed to combine various sources of data (i.e., the available evidence). This presents challenges in systematically modeling the relevance and confidence associated with each piece of evidence.

6

The approach used for system acquisition may also impact the availability of data to support risk assessments. If commercial services are used for a portion of the mission, such as use of a commercial expendable launch vehicle, some or all of the data concerning the performance of the vehicle may be proprietary and access to, or use of the data may be restricted.

  • “Margin” in design and operational parameters is an important issue for space exploration. “Margin” in a key parameter is the difference between the value of that parameter in some operational state and the value at which failure will occur. Designers incorporate margin to reduce the chance of failure. Unfortunately, the provision of physical margin in space vehicles is very costly (e.g., extra material strength or shielding adds weight, which in turn reduces payload delivery capacity to orbit). The determination of the adequacy of margin in a given situation is key to developing realistic PRA models. Transforming design margins into a probabilistic framework to support PRAs poses significant challenges.
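One common way to cast a design margin into probabilistic terms is a stress-strength (load-capacity) interference model. The normal distributions and the numbers below are illustrative assumptions for the sketch, not NASA practice.

```python
import math

# Illustrative stress-strength model: failure when load exceeds capacity.
mu_load, sd_load = 100.0, 10.0   # operational load
mu_cap,  sd_cap  = 140.0, 12.0   # capacity at which failure occurs

margin = mu_cap - mu_load        # deterministic "margin" = 40

# With normal load and capacity, load - capacity is also normal, so the
# failure probability follows from the standard normal tail.
z = margin / math.sqrt(sd_load**2 + sd_cap**2)
p_fail = 0.5 * math.erfc(z / math.sqrt(2.0))   # P(load > capacity)
```

The same nominal margin can correspond to very different failure probabilities depending on the spreads `sd_load` and `sd_cap`, which is one reason translating margins into a probabilistic framework is nontrivial.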

Question 3: What is your current definition of risk assessment, and what types of products are covered by that definition?


NASA Response:


NASA NPR 7120.5C [6] defines risk assessment as an evaluation of a risk item that determines (1) what can go wrong, (2) how likely it is to occur, and (3) what the consequences are. As stated in the response to Question 1, because NASA defines the concept of risk broadly, the subjects of its risk assessments are also broad, encompassing one or more of the three basic program execution domains:

  • System technical performance

  • Program cost

  • Program schedule

NASA considers risk assessment a necessary element of the risk management process [7], which is required of all programs and projects that provide aerospace products or capabilities—i.e., flight and ground systems, technologies, and operations for space and aeronautics.


Question 4: About how long (that is, from initiation of the risk assessment to delivery to the regulatory decision maker) does it take to produce the various types of risk assessments?


NASA Response


NASA uses risk assessment throughout a program’s or project’s life cycle, from the initial stages of formulation, where concepts and preliminary design ideas are developed, through fielding and operation to decommissioning. Since a risk assessment evolves and is updated over the life of the project or program, it can be considered a “living” risk model with no fixed date for final delivery. The level of detail in a risk assessment model depends on the availability of design and operational information and the nature of the application for which the risk model is intended.


For large programs where the assessments were conducted after the spacecraft were fielded (e.g., the Space Shuttle), the risk assessment required several years to complete. For nuclear missions, where probabilistic and risk techniques are used as part of the safety analyses, the risk assessment is conducted as the mission is planned, the spacecraft and launch vehicles are constructed, and reviews and approvals are attained. The completion of nuclear mission safety analyses requires about 3 to 5 years.


Risk assessments of conceptual designs, used to perform trade studies and sensitivity analyses to optimize safety, mission profile, and operations, have been conducted in several months. These types of risk assessments are typically conducted at a high level.


Questions about OMB’s definition of risk assessment and applicability


Question 5: Using the definition of risk assessment described in the OMB Bulletin, are there work products that would now be considered risk assessments that were not previously considered risk assessments? If so, what are they?


NASA Response


One significant area of change relates to our internal policies and directives. OMB Circular A-123, Management’s Responsibility for Internal Control, requires management to perform risk assessments to identify internal and external risks that may prevent the organization from meeting its objectives. The results of those risk assessments would be used to identify control activities that could be implemented to ensure agency objectives are met. The OMB Bulletin indicates that influential risk assessments are those that the agency reasonably can determine will have a clear impact on private sector decisions. For NASA, where the majority of our budget is applied to contracted activity, most of our internal controls do not impact the decisions of the private sector. If the special standards for Influential Risk Assessments were applied to every risk assessment performed to determine if an internal policy or directive was required, this could dramatically impact the time to develop, implement, and modify the internal controls. It is not clear, given the emphasis that the Bulletin places on regulatory matters and public use of the risk assessments, that risk assessments performed for internal NASA decision-making purposes need to have this level of regulation.


Questions about type of risk assessment (tiered structure)


Question 6: In your agency, is there currently a clear demarcation between risk assessments used for regulatory analysis and those not used for regulatory analysis? Is this clear at the outset of the risk assessment?


NASA Response


NASA is not a regulator, but rather a user with direct stakes in the technical and programmatic risk metrics that support decision making. NASA never uses its risk models for any regulatory application. Unlike regulatory agencies, NASA owns and


operates the subjects of its risk assessments. This has several implications for NASA’s approach to risk assessment and risk management. Because NASA is interested in technical performance (e.g., safety, mission success) as well as programmatic performance (e.g., cost and schedule), its risk assessments need to address both, preferably in an integrated fashion. At regulatory agencies, the need for regulatory stability and transparency creates an incentive to standardize and hold static the technical approaches used in quantitative risk assessment. At NASA, quite the opposite holds: because of the application of novel technologies in new environments, there is a need to advance the state of the art in quantitative risk assessment to support decision making aimed at optimizing safety and the likelihood of mission success (see the response to Question 2). In this connection, NASA is continuously developing new risk assessment techniques and, as such, needs the flexibility to push the envelope on probabilistic methods and applications. For example, methodological enhancements are needed, and are being planned for implementation, to handle the dynamic nature of space flight in risk assessments of space missions.


In general, the technical and programmatic risk assessments conducted within NASA would meet the five aspirational goals described in Section III of the proposed Bulletin (Problem Formulation; Completeness; Effort Expended; Resources Expended; and Peer Review and Public Participation). The significant exception in most cases would involve public participation, which would normally not be required in the same sense as for a regulatory process affecting the livelihood of the private sector.


In the case of environmental compliance and remediation, NASA responds to the regulating agency’s requirements in the development of risk assessments for site-specific environmental remediation. These risk assessments reflect the direction, requirements, and processes required by the regulating agency.


Question 7: In your agency, is there currently a clear demarcation between “influential risk assessment” used for regulatory purposes and other risk assessments used for regulatory purposes? Is this clear at the outset of the risk assessment?


NASA Response


This question is not applicable to NASA because technical and programmatic risk assessments performed within NASA are not used for any regulatory purposes. If a regulating agency asks NASA to perform a risk assessment, NASA will perform the assessment in compliance with the technical procedures of the regulating agency.


Questions about impact of the Bulletin on agency risk assessment practices


Question 8: If applicable, please specify provisions in the Bulletin that can be expected to have a substantial positive effect on the quality, conduct, and use of risk assessments undertaken by your agency.


NASA Response


By and large, risk assessments that examine technical risk within NASA meet the provisions cited within the Bulletin. The largest benefit to NASA with respect to the performance of risk assessment in the technical arena would be the added emphasis that a higher-level external (OMB) requirement provides. Implementation of risk assessment, particularly probabilistic risk assessment, to analyze technical risks is a relatively new activity. If the OMB requirements were to apply to NASA, the ability to cite an external requirement would reinforce the existing risk assessment requirements established within NASA.


Question 9: If applicable, please specify provisions in the Bulletin that can be expected to have a substantial negative effect on the quality, conduct, and use of risk assessments undertaken by your agency.


NASA Response


The largest potentially detrimental aspect of this Bulletin for NASA relates to its scope. The Bulletin indicates that it covers risk assessments disseminated by Federal agencies (see “The Requirements of This Bulletin,” page 8). This wording implies that the requirements of the Bulletin apply to risk assessments that are prepared specifically for, or are likely to be provided to, audiences external to the agency. This wording is consistent with the significant emphasis that the Bulletin places on the use of risk assessments to support the definition and implementation of regulations. Later, in the definitions section of the Bulletin, however, the scope is significantly broadened when the Bulletin indicates that these rules would apply to any risk assessment document that is made available to the public by the agency or that is subject to release under the Freedom of Information Act. If the Bulletin applies to any internal risk assessment performed within NASA that is releasable under the Freedom of Information Act, there could be a substantial burden in meeting all of the requirements contained within the Bulletin. Two examples bear noting.


Within NASA technical areas, many of the benefits of performing risk assessments lie not in the completion and release of a formal risk assessment report but in carrying out the risk assessment process among the participants in the design process. The Bulletin indicates that one of its goals is to have the risk assessors engage in an iterative dialogue with the decision makers who will use the assessment. The inference in this text is that the decision maker receives a report at the end of the risk assessment process and then makes a decision.7 In NASA’s use of risk assessment, it is often the “give and take” among the participants in the risk assessment process that causes design and operational changes to be made to control risks, often at engineering levels below that of the ultimate decision maker. Emphasis on delivery of a report, rather than on the beneficial effects of pursuing the discipline of the process, can have a negative effect on the ultimate impact of the risk assessment.

7 Risk assessments related to the launch of nuclear or radioactive materials are conducted in this manner; however, other technical risk assessments within NASA are not of this nature.


A second example has to do with internal controls as defined in OMB Circular A-123, Management’s Responsibility for Internal Control. That Circular requires management to perform risk assessments to identify internal and external risks that may prevent the organization from meeting its objectives. The results of those risk assessments are then to be used to identify control activities that can be implemented to ensure that agency objectives are met. The OMB Bulletin indicates that influential risk assessments are those that the agency reasonably can determine will have a clear impact on private sector decisions. In NASA, where the majority of the budget is applied to contracted activity, most of our internal controls have no direct application to the private sector, although they do influence private sector activities. If the special standards for influential risk assessments were applied to every risk assessment performed to determine whether an internal policy or directive was required, the time needed to develop, implement, and modify internal controls could increase. Given the emphasis that the Bulletin places on regulatory matters and public use of risk assessments, it is not clear that risk assessments performed for internal purposes need to have this level of requirement placed upon them.


Question 10: If your agency followed the procedures described in the Bulletin, would it affect the time course for production of the risk assessment (that is, the time required from initiation of the risk assessment to delivery to the regulatory decision maker)? If so, please explain why.


NASA Response


NASA does not provide its programmatic or technical risk assessments to regulatory decision makers, so there would be no impact in that area. It should be noted, however, that even for the internal risk assessments performed to assess technical risk, the final delivered report may not be the most important aspect of the assessment. NASA accrues much of the benefit from performing its technical risk assessments through the iterative work performed by the risk assessors in conjunction with the engineers, logisticians, and analysts during the design process. Risks identified during the process are often resolved well before a final report is completed.


As stated earlier, in the case of environmental, safety, and health risk assessments, NASA’s conduct of site-specific risk assessments reflects the direction and requirements of the regulating agency. NASA must comply with the requirements imposed by that agency and would be subject to any schedule changes or other impacts generated by the agency’s conformance with the Bulletin.


Question 11: One of the Bulletin’s reporting standards states the need to be scientifically objective by “giving weight to both positive and negative studies in light of each study’s technical quality.” Please give an example of how this would be implemented by your agency or department.


NASA Response


NASA is adopting a risk-informed decision-making process that is supported by two major activities: (1) risk assessment and (2) deliberation. NASA considers deliberation a crucial part of the decision-making process because it evaluates and scrutinizes risk assessment results to ensure that they are meaningful and that the risk models underlying them are technically sound and traceable [8]. The deliberation activity involves all affected stakeholders, which may include, as appropriate, the program/project manager, astronauts, the NASA workforce, engineering organizations, and safety and mission assurance organizations.8 These deliberations typically take several forms depending on the context and the nature of the decision situation. For example, they can take place as part of risk assessment peer review activities [9, 10] or as part of design review or flight readiness review activities [11].


Question 12: Does your agency use risk assessments conducted by external groups? Would it be helpful to you if risk assessments submitted to your agency by external groups, such as consultants and private industry, met the requirements proposed in the OMB Bulletin?


NASA Response


Large-scale risk assessment projects within NASA are often conducted jointly by several groups that include both NASA civil service and contractor analysts. The involvement of contractors in the conduct of risk assessments is necessary because NASA contracts out the majority of its mission execution activities to aerospace sector companies. The domain and discipline knowledge of the external groups who are involved in the development and operation of various aspects of NASA’s mission is needed in order to develop realistic risk models. Because of the multi-group and multi-discipline nature of risk assessments and to ensure technical quality and consistency, NASA has developed risk assessment requirements and procedures (e.g., the PRA Procedures Guide) that must be met by all parties involved (internal and external) in the conduct of NASA’s risk assessments. NASA’s internal use of risk assessments is primarily for non-regulatory purposes and often requires significant innovation in application; therefore, the OMB Bulletin would be of limited help in conducting our risk assessments.

REFERENCES

1 Space Shuttle Probabilistic Risk Assessment, Volume II, Rev. 1: Model Integration Report, Johnson Space Center, January 2005.

2 Probabilistic Risk Assessment of the International Space Station: Phase II – Stage 7A Configuration, Volume II – Data Package, Futron Corporation, 2000.

3 NASA’s Exploration Systems Architecture Study, NASA-TM-2005-214062, November 2005.

4 NASA Smart Buyer Study, 2004.

5 Pluto/New Horizons Interagency Nuclear Safety Review Panel Safety Evaluation Report of August 2005.

8 The safety and mission assurance organizations are independent from the program/project management organizations and provide an independent perspective on risk-related issues that affect safety and mission success.


6 NASA NPR 7120.5C, “NASA Program and Project Management Processes and Requirements,” March 2005.

7 NASA NPR 8000.4, “Risk Management Procedural Requirements,” April 2002.

8 NASA NPR 8715.3, Chapter 2 (System Safety), NASA General Safety Program Requirements (updated version pending release).

9 Final Report of the Independent Peer Review Panel on the Probabilistic Risk Assessment of the Space Shuttle, Prepared for NASA Headquarters Office of Safety and Mission Assurance, April 4, 2005.

10 Report of the Independent Peer Review Panel on the Probabilistic Risk Assessment of the International Space Station Phase II – Stage 7A Configuration, Prepared for NASA Headquarters Office of Safety and Mission Assurance, June 2002.

11 Space Shuttle Flight Readiness Review.


EXECUTIVE OFFICE OF THE PRESIDENT

OFFICE OF MANAGEMENT AND BUDGET

WASHINGTON, D.C. 20503

Dr. Ellen Mantus

Project Director

National Research Council

Division on Earth and Life Sciences

Board on Environmental Studies and Toxicology

500 Fifth Street, NW, Washington, DC 20001

Dear Dr. Mantus:

Enclosed with this letter are the Office of Management and Budget’s (OMB’s) responses to questions that the National Academy of Sciences (NAS) submitted to OMB on June 28, 2006. We hope these responses will be helpful to the National Research Council Committee as it reviews the OMB Proposed Risk Assessment Bulletin (Proposed Bulletin).


Additionally, your request to OMB asked for “copies of all comments that are submitted by federal agencies on the OMB Bulletin, if possible.” At this point in time, OMB has not received any official comment letters on the Proposed Bulletin from Federal agencies that conduct risk assessments. However, staff of one agency did send us comments marked “internal deliberative.” Additionally, we have received a comment letter from the Small Business Administration’s (SBA’s) Office of Advocacy, which is available on that office’s website at http://www.sba.gov/advo/laws/comments/omb06_0608.html.


If you should need further information from OMB, please contact Dr. Nancy Beck at 202-395-3258.

Sincerely,

Steven D. Aitken

Acting Administrator

Office of Information and Regulatory Affairs


1. Dr. Graham discussed the recent perchlorate evaluation as an example that would have benefited from this Bulletin. Does the Bulletin support using a “precursor” of an adverse effect or other mechanistic data as the basis of a risk assessment, as was recommended in the National Academies’ perchlorate review?


OMB response:


While the Proposed Risk Assessment Bulletin (Proposed Bulletin) does not speak to specific use of a precursor effect, there is no language in the Proposed Bulletin that precludes the use of a “precursor” of an adverse effect or other mechanistic data as the basis of a risk assessment.


Further, Section V, subsection 7 (page 20) of the preamble of the Proposed Bulletin discusses the standard for characterizing human health effects: “[I]t may be necessary for risk assessment reports to distinguish effects which are adverse from those which are non-adverse.” Additionally, Section V, subsection 7 (page 25) of the text of the Proposed Bulletin notes the importance of describing the ramifications of the choice of effect: “Where human health effects are a concern, determinations of which effects are adverse shall be specifically identified and justified based on the best available scientific information generally accepted in the relevant clinical and toxicological communities.”


2. Is it correct that those submitting data and risk assessments to the government to obtain product registrations, approvals, and licenses are excluded from the requirements of the Bulletin?


OMB response:


The Proposed Bulletin does not apply to risk assessments performed with respect to individual agency adjudication or permit proceedings (including a registration, approval or licensing) unless the agency determines that: (i) compliance is practical and appropriate and (ii) the risk assessment is scientifically or technically novel or likely to have precedent-setting influence on future adjudications and/or permit proceedings. (Proposed Bulletin, Section II, subsection 2(b), page 23). This exemption applies regardless of who generated the data and the risk assessment.


The OMB Information Quality Guidelines (67 FR 8460 Feb 22, 2002) do not cover adjudicative processes. The OMB Final Information Quality Bulletin for Peer Review (70 FR 2677 Jan 14, 2005) (Peer Review Bulletin) also includes an exemption for “individual agency adjudication or permit proceedings (including a registration, approval, licensing, site-specific determination), unless the agency determines that peer review is practical and appropriate and that the influential dissemination is scientifically or technically novel or likely to have precedent-setting influence on future adjudications and/or permit proceedings.” The exemption used in the Proposed Bulletin is consistent with the exemption in the Peer Review Bulletin.


3. Will the Bulletin require further review by OMB staff of risk assessments that have been peer reviewed in accordance with established peer review procedures and standards, including publication in a reputable peer reviewed journal?


OMB response:


The Proposed Bulletin does not require OMB review of any risk assessment. However, under existing authorities and procedures, OMB might review a risk assessment. For example, risk assessments that are part of regulatory impact analyses might be reviewed under Executive Order 12866. Additionally, Section III, subsection 5 (page 23) states: “The agency shall follow appropriate procedures for peer review and public participation in the process of preparing the risk assessment.” Agencies should rely on the Peer Review Bulletin to determine appropriate peer review procedures.


4. Public participants in the risk assessment and rulemaking processes (industry groups, environmental groups, other governmental entities, individual scientists) often provide risk assessments for agency consideration. Will these outside assessments be held to the same standards as agency-generated assessments, that is, to the requirements in the Bulletin?


OMB response:


The Proposed Bulletin applies to risk assessments that are made publicly available by an agency, regardless of whether the agency conducted the risk assessment. If third-party submissions are to be used and made publicly available by Federal agencies, it is the responsibility of the Federal Government to make sure that such information meets relevant standards.


5. The 1983 NRC report Risk Assessment in the Federal Government: Managing the Process treats “risk assessment” as a term of art that covers four distinct analyses (hazard identification, dose-response assessment, exposure analysis, and risk characterization), each typically based on a number of separate studies and analyses. The OMB Bulletin defines “risk assessment” to apply to “any document” that “could be used for risk assessment purposes, such as an exposure or hazard assessment that might not constitute a complete risk assessment as defined by the National Research Council.” What is the advantage of defining risk assessment in this way?


OMB response:


The Proposed Bulletin used a risk assessment definition that “applies to documents that could be used for risk assessment purposes, such as an exposure or hazard assessment that might not constitute a complete risk assessment…” (Proposed Bulletin, Section I, page 8). Many of these individual documents are relied upon by Federal agencies and used in important, and often economically significant, regulatory decisions made by Federal agencies as well as other decision makers. The accuracy, quality, clarity, transparency, and utility of these documents could be improved by meeting, as appropriate, the quality standards outlined in the Proposed Bulletin. As we stated in the OMB press release accompanying the Proposed Bulletin, “Transparent and accurate risk assessments are necessary for agencies and other decision makers to make wise risk management decisions during the formation of agency rules and policy decisions.”

Additionally, if these individual documents are prepared in a manner consistent with the Proposed Bulletin, this may avoid additional work when these activities are combined to create a comprehensive risk assessment document at a later point in time.


6. The Bulletin discusses the importance of risk assessors interacting with decision-makers. What safeguards will be built into the process to protect the scientific process from being framed by the decision-maker instead of the science?


OMB response:


In Section III, subsection 1 (page 10) of the preamble, the Proposed Bulletin sets forth an aspirational goal of an iterative dialogue between risk assessors and agency decision maker(s). This type of dialogue “will help ensure that the risk assessment serves its intended purposes and is developed in a cost-effective manner.” (Proposed Bulletin, Section III, subsection 1, page 10). The standards proposed in the Proposed Bulletin are designed to ensure the quality and objectivity of the scientific process and the science.

×
Page 201
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 202
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 203
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 204
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 205
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 206
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 207
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 208
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 209
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 210
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 211
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 212
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 213
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 214
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 215
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 216
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 217
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 218
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 219
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 220
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 221
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 222
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 223
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 224
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 225
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 226
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 227
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 228
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 229
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 230
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 231
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 232
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 233
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 234
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 235
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 236
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 237
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 238
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 239
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 240
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 241
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 242
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 243
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 244
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 245
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 246
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 247
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 248
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 249
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 250
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 251
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 252
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 253
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 254
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 255
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 256
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 257
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 258
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 259
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 260
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 261
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 262
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 263
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 264
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 265
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 266
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 267
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 268
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 269
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 270
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 271
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 272
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 273
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 274
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 275
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 276
Suggested Citation:"Appendix E: Questions for Federal Agencies from the Committee and Agency Responses to Questions." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.
×
Page 277
Suggested Citation:"Appendix E