Suggested Citation:"4 Standards for Risk Assessment." National Research Council. 2007. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press. doi: 10.17226/11811.

4
Standards for Risk Assessment

This chapter evaluates the standards that are proposed in the Office of Management and Budget (OMB) bulletin. The standards are defined for general and influential risk assessment, and the committee first comments on that structure. It then discusses major themes, such as uncertainty. The chapter concludes with a summary of comments on each of the individual standards that are proposed in the bulletin.

DIFFERENT LEVELS OF STANDARDS

The bulletin articulates standards for general risk assessments and special standards for influential risk assessments (see Appendix B). One standard listed for general risk assessments is specifically directed at risk assessments used for regulatory analysis. The bulletin defines an influential risk assessment as one that the responsible “agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions” (OMB 2006, p. 23). Thus, the categories of risk assessments, and hence the standards, are based not on inherent properties of risk assessments but on aspects of risk management.

In defining special standards for influential risk assessments, OMB appropriately recognizes that risk assessments that have potentially greater impact should be more detailed, be better supported by data and analyses, and receive a greater degree of scrutiny and critical review than risk assessments likely to have smaller impacts. However, proposing different standards for general and influential risk assessments is problematic for at least three reasons. First, the determination of what constitutes an influential risk assessment may be unclear at the outset. Although some agencies may be able to identify an influential risk assessment at the onset of the analysis,1 others may not be able to.2 The impact of an agency activity that led to the development of the risk assessment may not be known a priori. Some degree of iteration is necessary and appropriate, but the application of additional standards when some arbitrary impact threshold is crossed may lead to needless and inappropriate delays in implementation of the action.

Second, the effort to separate risk assessments arbitrarily into two broad categories does not appropriately recognize the continuum of risk assessment efforts in terms of potential impact on economic, environmental, cultural, and social values. Any attempt to divide that continuum into two categories is unlikely to succeed and will not substantially improve the quality of risk assessments. The use of two categories will tend to lead to costly and slow iterative processes in which a risk assessment may not be judged influential initially but on completion may be found to cross an arbitrary threshold that triggers the additional standards. Additional evaluation and analysis may be appropriate as the impacts of the risk assessment are better identified, but an arbitrary triggering of a new set of standards is not.

Third, the specific standards to be required of all influential risk assessments appear to be targeted at types of risk assessments and supporting information that may not be appropriate for the broad array of risk assessments that are conducted by federal agencies. Several standards proposed for influential risk assessments appear to be related specifically to human health risk assessments; these standards might not be appropriate for engineering risk assessments that evaluate the safety of structures or systems. Other issues associated with the standards are discussed in the remainder of this chapter.

RANGE OF RISK ESTIMATES AND CENTRAL ESTIMATES

One focus in the bulletin is the presentation of a range of risk estimates and a central estimate; statements on this topic in the bulletin and supplementary information are summarized in Table 4-1. Previous National Research Council (NRC) reports have made relevant comments on this and related topics; selected comments from those reports are provided in Table 4-2. The committee agrees with OMB that in some cases “presentation of single estimates of risk is misleading” and that ranges of “plausible risk” should be presented; however, the challenge is in the operational definitions of such words as central, expected, and plausible. The committee’s concerns regarding the use of those words in the bulletin are presented in this section.

1. See Appendix E, pp. DOE-4 and DOL-5.

2. See Appendix E, pp. HHS-13, DOD-9, and CPSC-4.

Central Estimates of What?

In the supplementary information, a central estimate is defined as a mean of a distribution, the most representative value of a distribution, or a weighted estimate of risk. However, each of those definitions refers to some underlying distribution, and that raises the question of what distribution is being considered.

Distributions arise in considerations of uncertainty and variability. Variability is an inherent property of a system. Ventilation rates, water-consumption amounts, and body-mass indexes all differ in a population of individuals. Obtaining more data will not reduce variability but will provide a better description of the distribution of a variable trait. In contrast, uncertainty reflects ignorance. The mean body-mass index of a population of individuals is typically an unknown value. Similarly, the true statistical model for a dose-response relationship is unknown. Unlike variability, uncertainty might be reduced by obtaining more data. Potential confusion arises because distributions might be used to represent both variability and uncertainty. For variability, the distribution corresponds to the different values of a trait or characteristic in a population. For the uncertainty of a parameter value, a distribution might correspond to the sampling distribution of the parameter in repeated samples (a frequentist perspective) or to the actual distribution of the parameter (a Bayesian perspective).
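The distinction can be illustrated with a small simulation. The sketch below (in Python, with hypothetical body-mass-index figures chosen purely for illustration) shows that collecting more data narrows the uncertainty about a population mean but leaves the variability of the trait itself unchanged:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: body-mass index differs across individuals
# (variability -- an inherent property of the population).
population = [random.gauss(26.0, 4.0) for _ in range(100_000)]

def sample_mean(n):
    """Estimate the population mean BMI from a random sample of size n.
    The spread of this estimate across repeated samples reflects
    uncertainty, which shrinks as n grows."""
    return statistics.mean(random.sample(population, n))

# Variability: the spread of the trait itself does not shrink with more data.
print("trait spread:", round(statistics.stdev(population), 2))  # stays near 4

# Uncertainty: repeated estimates of the mean tighten as n increases.
for n in (10, 1000):
    estimates = [sample_mean(n) for _ in range(500)]
    print(f"spread of mean estimates at n={n}:",
          round(statistics.stdev(estimates), 2))
```

The same numerical object, a distribution, thus plays two different roles, which is exactly why the bulletin's undifferentiated talk of "central estimates" of a distribution is ambiguous.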

Variability and uncertainty can lead to a distribution of risk. Consider the use of a single dose-response model (without the additional complexity of model uncertainty) to predict the risk of adverse response as a function of dose, and assume that the coefficients of the model must be estimated (that is, uncertainty) and that there is a distribution of exposures in the population (that is, variability). The point estimates of the


TABLE 4-1 Summary of Bulletin and Supplementary Information on Range of Risk Estimates and Central Estimates

General risk assessment

Bulletin: A risk assessment should “provide a characterization of risk, qualitatively and, whenever possible, quantitatively. When a quantitative characterization of risk is provided, a range of plausible risk estimates shall be provided” (OMB 2006, p. 24).

Supplementary information: The reporting of “multiple estimates of risk (and the limitations associated with these estimates)” is recommended to “convey the precision associated with these estimates” (OMB 2006, p. 13). A discussion of the Safe Drinking Water Act (SDWA) noted the need to describe the population addressed by each risk estimate, the expected risk or central estimate for affected populations, and the upper-bound or lower-bound risk estimates, with uncertainty issues, supporting studies, and methods used in the calculations. The SDWA reporting standards “should be met, where feasible, in all risk assessments which address adverse health effects” (OMB 2006, p. 14).

Risk assessment for regulatory analysis

Bulletin: A risk assessment should include “whenever possible, a range of plausible risk estimates, including central or expected estimates, when a quantitative characterization of risk is made available” (OMB 2006, p. 24).

Supplementary information: The supplementary information expands on issues related to risk assessments used for regulatory analysis. A “range of plausible risk estimates, including central estimates,” is required with any quantitative characterization of risk. A “central estimate” is defined to be “(i) mean or average of the distribution; (ii) number which contains multiple estimates of risk based on different assumptions, weighted by their relative plausibility; or (iii) any estimate judged to be most representative of the distribution.” This quantity should “neither understate nor overstate the risk, but rather, should provide the risk manager and the public with the expected risk” (OMB 2006, p. 16).

Influential risk assessment

Bulletin: All influential risk assessments shall “highlight central estimates as well as high-end and low-end estimates of risk when such estimates are uncertain” (OMB 2006, p. 25).

Supplementary information: The supplementary information expands on issues related to standards for influential risk assessment. An influential risk assessment should meet the standards for regulatory-analysis risk assessment and additional standards. In particular, the standard for the presentation of numerical estimates directly addresses issues related to central estimates of risk and ranges of risk estimates. Reporting a range avoids a “false sense of precision” when there is uncertainty. Reporting the range and central estimate “conveys a more objective characterization of the magnitude of the risks.” The highlighting of only high-end or low-end risk estimates is discouraged (OMB 2006, p. 17). The supplementary information further notes that “central” and “expected” are used synonymously. If model uncertainty is present, this central model could reflect a weighted average of risk estimates from alternative models or some synthesis of “probability assessments supplied by qualified experts” (OMB 2006, p. 17).


TABLE 4-2 Previous NRC Reports Addressing Issues of Uncertainty and Central Risk Estimates

Uncertainty and risk assessment (NRC 1989)

NRC (1989), a report on risk communication, discusses uncertainty in the risk assessment process: “Any scientific risk estimate is likely to be based on incomplete knowledge combined with assumptions, each of which is a source of uncertainty that limits the accuracy that should be ascribed to the estimate. Does the existence of multiple sources of uncertainty mean that the final estimate is that much more uncertain, or can the different uncertainties be expected to cancel each other out? The problem of how best to interpret multiple uncertainties is one more source of uncertainty and disagreement about risk estimates” (p. 44).

Central estimates (point estimates) and range of risk estimates (NRC 1989)

“When a risk estimate is uncertain, it can be described by a point or ‘maximum likelihood’ estimate or by a range of possibilities around the point estimate. But estimates that include a wide range of uncertainties can imply that a disastrous consequence is ‘possible,’ even when expert opinion is unanimous that the likelihood of disaster is extremely small. The amount of uncertainty to present is a judgment that can potentially influence a recipient's judgment” (p. 83).

Averaging predictions (NRC 1994)

“Simply put, although classical decision theory does encourage the use of expected values that take account of all sources of uncertainty, it is not in the decision-maker's or society's best interest to treat fundamentally different predictions as quantities that can be ‘averaged’ without considering the effects of each prediction on the decision that it leads to” (p. 173).

“To create a single risk estimate or PDF [probability density function] out of various different models not only could undermine the entire notion of having default models that can be set aside for sufficient reason, but could lead to misleading and perhaps meaningless hybrid risk estimates” (p. 174).

Range of plausible values (NRC 1994)

“One important issue related to uncertainty is the extent to which a risk assessment that generates a point estimate, rather than a range of plausible values, is likely to be too ‘conservative’ (that is, to excessively exaggerate the plausible magnitude of harm that might result from specified environmental exposures)” (p. 180).


model coefficients are obtained (for example, central estimates of a sampling distribution or of some posterior distribution). The mean of the exposure distribution (a central estimate) could be substituted into the dose-response model by using the point estimates of the coefficients to yield a central risk estimate. If the exposure distribution is unimodal and symmetric, the estimate seems like a reasonable central risk estimate. If the exposure distribution is skewed or multimodal, the central estimate may be unreasonable. At this point, uncertainty might enter into the calculation, and confidence or credible intervals for the model coefficients might be used to reflect parameter uncertainty. The bulletin requires the reporting of a plausible range of risk when a quantitative risk assessment is conducted. How should plausible be defined in this context? For example, would substituting the 2.5th percentile and 97.5th percentile from the exposure distribution in the dose-response model yield a plausible range? Without more guidance and operational definitions of terms, the bulletin’s guidance on central estimates and plausible risk ranges is unclear.
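As a sketch of how these terms might be operationalized (the logistic dose-response model, coefficient values, and exposure distribution below are hypothetical assumptions, not taken from the bulletin), a Monte Carlo calculation can propagate parameter uncertainty and exposure variability into a distribution of risks, from which one candidate central estimate and a percentile-based "plausible range" can be read off:

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical logistic dose-response model:
#   P(response) = 1 / (1 + exp(-(a + b * dose))).
# The coefficients a and b must be estimated (uncertainty); exposure varies
# across the population (variability). All numbers are illustrative.

def risk(a, b, dose):
    return 1.0 / (1.0 + math.exp(-(a + b * dose)))

# Parameter uncertainty: draws around point estimates (e.g., from a
# sampling or posterior distribution).
a_draws = [random.gauss(-5.0, 0.3) for _ in range(2000)]
b_draws = [random.gauss(0.8, 0.1) for _ in range(2000)]

# Exposure variability: a right-skewed (lognormal) exposure distribution.
exposures = [random.lognormvariate(0.5, 0.6) for _ in range(2000)]

# Propagate both sources: each simulated individual gets one parameter
# draw and one exposure draw.
risks = sorted(risk(a, b, x) for a, b, x in zip(a_draws, b_draws, exposures))

central = statistics.mean(risks)  # one of many possible "central" estimates
low = risks[int(0.025 * len(risks))]   # 2.5th percentile
high = risks[int(0.975 * len(risks))]  # 97.5th percentile
print(f"central {central:.4f}, plausible range [{low:.4f}, {high:.4f}]")
```

Note that pairing one parameter draw with one exposure draw, as above, folds the uncertainty and variability dimensions into a single distribution; a two-dimensional analysis that kept them separate would nest the sampling loops. Which choice to make is precisely the kind of operational question the bulletin leaves unanswered.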

More on “Central” Estimates

Expected value has a technical statistical meaning that corresponds to the mean of some random variable. Many central estimates can be constructed, including arithmetic means, geometric means, harmonic means, medians, and trimmed means; however, it is misleading to say that “central” and “expected” estimates are synonymous, as is suggested in the bulletin (see Table 4-1). As noted in NRC (1994, p. 173), “it is not in the decision-maker’s or society’s best interest to treat fundamentally different predictions as quantities that can be ‘averaged’ without considering the effects of each prediction on the decision that it leads to.” In fact, a simple “expected” value may not convey an appropriate message.
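The point is easy to demonstrate: on even a modestly skewed set of estimates, the common choices of "central" estimate disagree substantially. The values below are purely illustrative:

```python
import statistics

# Hypothetical, right-skewed set of risk estimates (illustrative only).
estimates = [1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 6.0, 12.0, 40.0]

arithmetic = statistics.mean(estimates)
geometric = statistics.geometric_mean(estimates)
harmonic = statistics.harmonic_mean(estimates)
median = statistics.median(estimates)
# Trimmed mean: drop the lowest and highest value before averaging.
trimmed = statistics.mean(sorted(estimates)[1:-1])

for name, value in [("arithmetic mean", arithmetic),
                    ("geometric mean", geometric),
                    ("harmonic mean", harmonic),
                    ("median", median),
                    ("trimmed mean", trimmed)]:
    print(f"{name:>15}: {value:.2f}")
```

For positive, non-constant data the harmonic, geometric, and arithmetic means are ordered smallest to largest, so which of them is reported as "the" central or expected estimate materially changes the message a risk manager receives.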

It may be reasonable to provide a calculation of the central risk estimate along with the lower and upper bounds of the 95% confidence or credible intervals and thus provide a sense of the degree of uncertainty in a particular risk assessment. Usually, some number will need to be selected for a risk management decision, such as to set an “action level” to control a toxicant under consideration or to offer guidance about how much exposure is acceptable or “safe,” as in the determination of a reference dose or an allowable limit in the workplace. Variability is an important consideration because people might respond differently to a given exposure because of personal vulnerability to exposure or behavior that alters the actual exposure. Often in public-health practice and prevention, the goal is to protect the most vulnerable in the population—children, the elderly, people with illnesses (such as respiratory or cardiac disease), the developing fetus, and workers. Using the mean or central estimate would not accomplish that goal unless it reflected the mean response of the distribution of vulnerable or susceptible individuals.

Risk communication is hampered by the use of vague or meaningless terms. For example, there is no such thing as the “average person.” Is the average person male or female? Does this person weigh 70 kg? Instead, we have average values of measurable attributes of people. Similarly, terms like central, expected, and plausible should be replaced with precise language.

Reporting Plausible Ranges and Central Estimates

The purpose and context of risk assessments frequently influence the need for, and indeed the advisability of, reporting a range and a central estimate. For example, consider a risk assessment that evaluates risks associated with operations conducted in an extreme environment. The setting of National Aeronautics and Space Administration (NASA) spacecraft exposure guidelines (SEGs) illustrates how decisions made in advance can determine the kinds of data that need to be presented and how risks should be reported.

NASA guidelines for chemical exposure on spacecraft, which include SEGs and their predecessors, spacecraft maximum acceptable concentrations (SMACs) and spacecraft water exposure guidelines (NRC 1994, 1996a, 1996b, 2000b, 2004), are set to protect astronauts whose health and job fitness are closely monitored. However, they are engaged in an inherently dangerous activity and are in an environment that presents unique stressors, such as exposure to high levels of solar radiation (associated risks include cancer and hematopoietic toxicity) and microgravity (associated risks include loss of muscle mass and lowered hematocrit). In addition to protecting the health of astronauts, great emphasis is placed on avoiding exposure that would prevent astronauts from performing mission-critical tasks. The guidelines for chemical exposures are derived with those risks in mind.

Because of NASA’s emphasis on safety and the devastating consequences of accidents, relatively conservative assumptions are used in setting exposure guidelines. In addition, the chemical exposure guidelines are used as design points for the environmental control systems for the spacecraft. In early discussions concerning SMACs, NASA engineers working on those systems indicated their preference for a single guideline to use as a design target. Thus, although SEG documentation discusses uncertainties and limitations and transparently describes the derivation of all SEG values, SEG values are set at levels thought to be protective, and central estimates or ranges are not reported.

UNCERTAINTY

This section assesses the extent to which the proposed bulletin achieves its goal of enhancing technical quality and objectivity with respect to the treatment of uncertainty. Understanding the current state of best practice is a precondition for improving that practice. Therefore, this section first provides a historical perspective on uncertainty in risk assessment. That discussion provides the best examples of approaches to uncertainty analysis in the federal agencies. This section then briefly reviews methods used to address uncertainty in risk assessment and next notes relevant statements from previous NRC reports. The section concludes with the committee’s comments on the bulletin’s standards related to uncertainty analysis. This section differs from other sections of the report in that it provides more in-depth discussion of this topic; the level of detail was considered appropriate, given the focus on uncertainty in the bulletin.

Historical Perspective

The desire to do risk assessment properly led to the development of many of the methods for uncertainty analysis, particularly probabilistic risk assessment (PRA, also referred to as probabilistic safety analysis [PSA] in Europe). Although the aerospace industry led the development of reliability engineering, the basic methods for the use of PRA in engineering were developed in the nuclear industry. The concepts of formal and structured development of accident risk scenarios using event trees, extension of the causal models using fault trees, probabilistic treatment of physical dependencies, separation of external and internal sources of risk, and quantification and propagation of parametric uncertainties first appeared in the Reactor Safety Study. This historical perspective on the development of PRA is discussed below.
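The flavor of these methods can be conveyed with a toy example; the fault-tree structure, component names, and failure-probability distributions below are invented for illustration and correspond to no actual study. A small AND/OR fault tree is quantified by Monte Carlo sampling of uncertain component failure probabilities, propagating parametric uncertainty into a distribution for the top event:

```python
import math
import random
import statistics

random.seed(7)

# Toy fault tree (illustrative only): the top event occurs if the pump
# fails AND (valve A fails OR valve B fails). Each component failure
# probability is uncertain and is represented by a lognormal distribution.

def draw_prob(median, error_factor):
    """Draw a failure probability from a lognormal with the given median;
    the 95th percentile is roughly median * error_factor."""
    sigma = math.log(error_factor) / 1.645
    return min(1.0, random.lognormvariate(math.log(median), sigma))

top_event = []
for _ in range(10_000):
    pump = draw_prob(1e-3, 3.0)
    valve_a = draw_prob(1e-2, 3.0)
    valve_b = draw_prob(1e-2, 3.0)
    # AND gate multiplies probabilities; OR gate is one minus the product
    # of the survival probabilities.
    top_event.append(pump * (1.0 - (1.0 - valve_a) * (1.0 - valve_b)))

top_event.sort()
print(f"median top-event probability: {statistics.median(top_event):.2e}")
print(f"90% uncertainty interval: "
      f"[{top_event[500]:.2e}, {top_event[9500]:.2e}]")
```

Reporting the interval alongside the median is the kind of uncertainty presentation that later PRA guidance, such as the NASA procedural requirements quoted below, came to mandate.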

The Aerospace Sector

A systematic concern with PRA began in the aerospace sector after the fire in Apollo flight test AS-204 on January 27, 1967, in which three astronauts were killed. Before the Apollo accident, NASA relied on its contractors to apply “good engineering practices” to provide quality assurance and quality control. NASA’s Office of Manned Space Flight initiated the development of quantitative safety goals in 1969. Quantitative safety goals were not adopted; the reason given at the time was that managers would not appreciate the uncertainty in risk calculations: “the problem with quantifying risk assessment is that when managers are given numbers, the numbers are treated as absolute judgments, regardless of warnings against doing so” (Wiggins 1985). After the inquiry into the Challenger accident of January 28, 1986, it came to light that distrust of reassuring risk numbers was not the only reason for abandoning quantitative risk assessment. Rather, initial estimates of catastrophic failure probabilities were so high that their publication would have threatened the political future of the entire space program (Bedford and Cooke 2001).

Since the shuttle accident, NASA has instituted programs of quantitative risk analysis to support safety during the design and operation phases of space travel. On the basis of an earlier U.S. Nuclear Regulatory Commission document, a PRA procedures guide was published in 2002; it included a chapter on uncertainty analysis (NASA 2002). The current NASA Procedural Requirements NPR-8705.5, effective as of July 2004, mandates PRA procedures for NASA programs and projects and stipulates that “any PRA insights reported to decision makers shall include an appreciation of the overall degree of uncertainty about the results and an understanding of which sources of uncertainty are critical. Presentation of PRA results without uncertainties significantly detracts from the quality and credibility of the PRA study” (NASA 2004, p. 12).

The Nuclear Sector

Throughout the 1950s, in accordance with President Eisenhower’s Atoms for Peace program, the Atomic Energy Commission pursued an approach to risk management that emphasized using high-quality components and construction; conservatism in the engineering codes and standards for design, construction, and operation of plants; and conservative analysis of accident scenarios using the “maximum credible accident.” Because “credible accidents” were covered by plant design, residual risk was estimated by studying the hypothetical consequences of “incredible accidents.” A study released in 1957 focused on three scenarios of radioactive releases from a 200-megawatt (200-MW) nuclear-power plant operating 30 miles from a large population center. Regarding the probability of such releases, the study concluded that “no one knows now or will ever know the exact magnitude of this low probability” (AEC 1957).

Successive design improvements were intended to reduce the probability of a catastrophic release of the radioactive material from the reactor. However, because of the limitations of the analytic methods, such improvements were not able to have a significant effect on the accident estimates. Moreover, as larger reactors were planned, such as 1,000-MW reactors, the increase in radioactive material in the cores led to larger consequences in the “incredible accident” scenarios.

The desire to quantify and evaluate the effects of those improvements led to the introduction of probabilistic risk assessment. Whereas the earlier studies had dealt with uncertainty by making conservative assumptions, the goal now was to provide a realistic, as opposed to conservative, assessment of risk. A realistic risk assessment necessarily involved an assessment of the uncertainty in the risk calculation. The basic methods of PRA developed in the aerospace program in the 1960s found their first full-scale application, including accident-consequence analysis and uncertainty analysis, in the Reactor Safety Study of 1975 (U.S. NRC 1975), which is rightly considered to be the first modern PRA.

The Reactor Safety Study caused considerable concern within the scientific community. In response to letters from Representative Udall, chairman of the House Committee on Interior and Insular Affairs, the U.S. Nuclear Regulatory Commission created an independent group of experts to review its “achievements and limitations.” The report of that review group (Lewis et al. 1978) led to a policy statement by the commission (U.S. NRC 1979). The policy statement (1) endorsed the review group’s strong criticism of the executive summary, stating that it was misleading and was not a summary of the report, (2) acknowledged that the peer-review process followed in publishing the Reactor Safety Study was inadequate, and (3) stated that the commission “does not regard as reliable the Reactor Safety Study's numerical estimates of the overall risk of reactor accident.” However, the commission also noted that the review group cited major achievements of the Reactor Safety Study and stated that “WASH-1400 [the Reactor Safety Study] was a substantial advance over previous attempts to estimate the risks of the nuclear option. WASH-1400 was largely successful in at least three ways: in making the study of reactor safety more rational, in establishing the topology of many accident sequences, and in delineating procedures through which quantitative estimates of the risk can be derived for those sequences for which a data base exists.”

After the Three Mile Island accident, two influential independent studies (Kemeny 1979; Rogovin and Frampton 1980) recommended greater use of probabilistic analyses in assessing nuclear-plant risks. A new generation of PRAs appeared in which some of the methodologic defects of the Reactor Safety Study were avoided. The Nuclear Regulatory Commission released the Fault Tree Handbook in 1981 (Vesely et al. 1981) and the PRA Procedures Guide in 1983 (U.S. NRC 1983), which shored up and standardized much of the risk assessment methodology. An extensive chapter was devoted to uncertainty and sensitivity analysis. An authoritative review of PRAs conducted after the Three Mile Island accident noted the necessity of modeling uncertainties properly in using PRA as a management tool (Garrick 1984).

In 1990, a suite of studies known as NUREG 1150 (U.S. NRC 1990) appeared; these used structured expert judgment to quantify uncertainty and set new standards for uncertainty analysis. They were followed by a joint U.S.-European program in 1994-1998 (Harper et al. 1995; Goossens et al. 1996; Brown et al. 1997; Goossens et al. 1997; Haskin et al. 1997; Little et al. 1997) for quantifying uncertainty in accident-consequences models. Expert-judgment methods were further elaborated, as were screening and sensitivity analysis. European studies, spun off that work, have applied uncertainty analysis to European consequences models and provided extensive methodologic guidance (Brown et al. 2001; Goossens et al. 2001; Jones et al. 2001a,b,c,d,e). In particular, they address methods for identifying important variables; selecting, interviewing, and combining experts; propagating uncertainty; inferring distributions of model parameter values; and communicating results. The guidance is summarized in a special 2000 issue of Radiation Protection Dosimetry.3 All the documents in those reports have undergone peer review.

In August 2006, the Nuclear Regulatory Commission Office of Nuclear Regulatory Research issued Draft Regulatory Guide: An Approach for Determining the Technical Adequacy of Probabilistic Risk Assessment Results for Risk-Informed Activities (U.S. NRC 2006). The document cites related activities of the American Nuclear Society (ANS 2003), the American Society of Mechanical Engineers (ASME 2005), and the Nuclear Energy Institute (NEI 2000); the committee notes guidelines by the Department of Energy (DOE 1996). The Draft Regulatory Guide lists principles and objectives for a standard for PRA and clarifies what a standard should be and do (see Box 4-1). In particular, a standard should identify current good practice, thoroughly define what is technically required, and require a peer-review process that identifies where the technical requirements of the standard are not met.

The committee notes that PRA is often performed without a full-scale uncertainty analysis. Examples include PRAs of individual power plants and of individual chemical plants. In such cases, it is customary to take uncertainty ranges from generic PRAs or other generic sources. Such sources include Federal Guidance Report 13 for cancer risk coefficients for radionuclides (Eckerman et al. 1999) and the Integrated Risk Information System (IRIS) database (IRIS 2006).

Current Good Practice in the Evaluation of Uncertainty in Risk Analysis

Risk analysis typically involves substantial uncertainties, and the quantification of uncertainty has become an integral part of PRA. Standards for PRA therefore require standards for uncertainty analysis that are based on current good practice. Three levels of good practice are distinguished here.


3 Volume 90 (Issue 3).

BOX 4-1

Principles and Objectives of a PRA Standard (U.S. NRC 2006, p. 22)

  1. The PRA standard provides well-defined criteria against which the strengths and weaknesses of the PRA may be judged so that decision makers can determine the degree of reliance that can be placed on the PRA results of interest.

  2. The standard is based on current good practices (see Note below) as reflected in publicly available documents. The need for the documentation to be publicly available follows from the fact that the standard may be used to support safety decisions.

  3. To facilitate the use of the standard for a wide range of applications, categories can be defined to aid in determining the applicability of the PRA for various types of applications.

  4. The standard thoroughly and completely defines what is technically required and should, where appropriate, identify one or more acceptable methods.

  5. The standard requires a peer review process that identifies and assesses where the technical requirements of the standard are not met. The standard needs to ensure that the peer review process:

  • determines whether methods identified in the standard have been used appropriately;

  • determines that, when acceptable methods are not specified in the standard, or when alternative methods are used in lieu of those identified in the standard, the methods used are adequate to meet the requirements of the standard;

  • assesses the significance of the results and insights gained from the PRA of not meeting the technical requirements in the standard;

  • highlights key [emphasis added] assumptions that may significantly [emphasis removed] impact the results and provides an assessment of the reasonableness of the assumptions;

  • is flexible and accommodates alternative peer review approaches; and

  • includes a peer review team that is composed of members who are knowledgeable in the technical elements of a PRA, are familiar with the plant design and operation, and are independent with no conflicts of interest that may influence the outcome of the peer review [this clause was not in the ASME definition].

  6. The standard addresses the maintenance and update of the PRA to incorporate changes that can substantially impact the risk profile so that the PRA adequately represents the current as-built and as-operated plant.

  7. The standard is a living document. Consequently, it should not impede research. It is structured so that, when improvements in the state of knowledge occur, the standard can be easily updated.

Note: Current good practices are those practices that are generally accepted throughout the industry and have been shown to be technically acceptable in documented analyses or engineering assessments.

Level 1—uncertainty methods that are accepted and standardized. These are methods and techniques about which there is near unanimity in the scientific community. The subjective, or Bayesian, interpretation of probability for representing uncertainty would be in this category, as would Monte Carlo methods of propagation, including Latin hypercube sampling, stratified sampling, and pseudo-random-number sampling. Standard statistical techniques for quantifying the uncertainty associated with estimates of model parameters also belong here.
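To make the Level 1 techniques concrete, the sketch below propagates uncertainty through a deliberately simple model by pseudo-random-number (Monte Carlo) sampling and draws a one-dimensional Latin hypercube sample; the model and the input distributions are hypothetical, chosen only for illustration.

```python
import random
import statistics

# A stand-in for a risk model; the inputs and their distributions
# below are hypothetical, chosen only to illustrate propagation of
# parameter uncertainty through a model.
def model(potency, exposure):
    return potency * exposure

def monte_carlo(n):
    # Pseudo-random-number sampling: each uncertain input is drawn
    # independently and propagated through the model.
    return [model(random.lognormvariate(0.0, 0.5),
                  random.uniform(0.5, 1.5)) for _ in range(n)]

def latin_hypercube_uniform(n, lo, hi):
    # Latin hypercube sampling for one uniform input: exactly one draw
    # per equal-probability stratum, shuffled so that strata are not
    # paired across input dimensions when several inputs are sampled.
    width = (hi - lo) / n
    points = [lo + (i + random.random()) * width for i in range(n)]
    random.shuffle(points)
    return points
```

Because Latin hypercube sampling places one draw in each equal-probability stratum, it covers the input range more evenly than simple random sampling at the same sample size.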

Level 2—uncertainty methods that are used but not standardized. Many techniques are being applied and have passed muster in peer review but cannot claim universal assent. Expert-judgment methods are one example. NUREG 1150 (U.S. NRC 1990) developed techniques to capture experts’ modeling assumptions and used equal weighting to combine their distributions. Joint U.S. Nuclear Regulatory Commission and European Union studies applied performance-based weighting in addition to equal weighting, leaving the choice to the discretion of the problem-owner (Harper et al. 1995). They also developed a different method for dealing with model uncertainty by using expert judgment to derive a distribution over models with probabilistic inversion. Guidelines for probabilistic inversion (Jones et al. 2001a) and structured expert judgment (Cooke and Goossens 1999) were published in that project. The Senior Seismic Hazard Analysis Committee has developed a third approach (Budnitz et al. 1997). Model uncertainty is another subject for which several techniques populate the field of applications. Examples include Bayesian model averaging (Raftery 1995; Hoeting et al. 1999; Kang et al. 2000; Morales et al. 2006; NRC 2000a; Bailer et al. 2005), expert model elicitation (U.S. NRC 1990), and probabilistic inversion (Jones et al. 2001a). Where the scientific community has not yet resolved which techniques are most suitable in which situations, the only prudent course for technical guidance is to delineate the relevant techniques and their current status.
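As an illustration of the equal-weighting approach described above, the sketch below combines three hypothetical experts' distributions as a weighted mixture; the lognormal parameters are illustrative, not from any actual elicitation, and a performance-based scheme would change only how the weights are chosen.

```python
import random

# Hypothetical elicited judgments: each expert's uncertainty about the
# same quantity, encoded as a lognormal (mu, sigma). The numbers are
# purely illustrative.
experts = [(0.0, 0.4), (0.3, 0.6), (-0.2, 0.5)]

def sample_combined(weights, n):
    # Draw from the weighted mixture of the expert distributions.
    # Equal weighting uses identical weights; performance-based
    # weighting (as in Harper et al. 1995) would instead derive the
    # weights from each expert's calibration on seed questions.
    draws = []
    for _ in range(n):
        mu, sigma = random.choices(experts, weights=weights)[0]
        draws.append(random.lognormvariate(mu, sigma))
    return draws
```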

Level 3—uncertainty methods in the research mode. These are methods and techniques that address recognized problems but that the research community is still developing. Techniques do not have full procedures, applicable computer codes are of “research grade” (to be used only by the developer), and other, possibly better, techniques are on the drawing board. Perhaps the most important category still in the research mode is dependence modeling. Mathematical methods for representing dependence in high-dimensional distributions are still in development. One can consider graphical models (Markov trees, Bayesian belief nets, and Vines) as probes. Other aspects are the efficient sampling of high-dimensional dependent distributions and the elicitation or learning of dependence structures. The common distinction of uncertainty vs variability is clear enough in many applications, but it can glide easily into thorny issues of dependence modeling. The final point in the principles
and objectives of a PRA standard cited earlier is worth recalling: “The standard is a living document. Consequently, it should not impede research. It is structured so that, when improvements in the state of knowledge occur, the standard can easily be updated” (U.S. NRC 2006, p. 22).

The committee finds that where practice is not standardized it is not judicious to mandate methods. Research methods and techniques need research support to evolve into practical tools that can be evaluated in the field. Any “ongoing effort to improve the quality, objectivity, utility, and integrity of information disseminated by the federal government to the public” must be cognizant of the state of the art (OMB 2006, p. 1).

Uncertainty Addressed in Previous National Research Council Reports

The standards and related discussion in the proposed OMB bulletin and supplementary information on uncertainty echo previous NRC reports. For example, NRC (1989, p. 12) noted that “risk messages and supporting materials should not minimize the existence of uncertainty.” That report also noted the importance of considering the distribution of exposure and sensitivity in a population as components of an evaluation of total population risk. Uncertainties about risks and benefits are described, including those in assumptions and models that serve as the basis of risk estimates.

The issue of uncertainty was clearly of concern in the NRC report on human exposure assessment for airborne pollutants (NRC 1991):


Limited information is available regarding the accuracy of most contaminant concentration models and less is known about exposure models because most models have not been adequately validated. Model users should understand that model outputs have uncertainties, not just those arising from the uncertainties in the input data, and that actual exposure lies somewhere in the range of that uncertainty. The results of models should be presented with their estimated uncertainties. To the extent possible, the description of the model results should distinguish between input and model uncertainty. A major objective for improving models should be to reduce uncertainty due to the model itself so that the estimated exposure
is closer to the real exposure and the uncertainties are primarily associated with the uncertainties in the input data (p. 173).


Uncertainty was also considered in detail in the NRC (1993) report in the context of ecologic risk assessment. In that report, uncertainty was classified as being related to measurements (including inadequacy of data, measurement difficulties, and variability in organismal response), to conditions of observation (including laboratory-field condition differences), and to inadequacies of models (including parameter-value uncertainty, mechanistic uncertainty, and extrapolation) (see p. 261). NRC (1993) contained some of the most expansive discussions of uncertainty and variability among all the documents prepared in the last 2 decades.

The importance of reflecting uncertainty in risk assessments was underscored by NRC (1994, p. 161), which stated that the “committee believes that uncertainty analysis is the only way to combat the ‘false sense of certainty,’ which is caused by a refusal to acknowledge and (attempt to) quantify the uncertainty in risk predictions.” NRC (1994, Table 9-1) provided a taxonomy of uncertainty in risk analysis that emphasized parameter-value uncertainty and model uncertainty. That report also suggested a strategy for improving a quantitative estimate of uncertainty (Table 9-2) and suggested “that analysts present separate assessments of the parameter uncertainty that remains for each independent choice of the underlying model(s) involved” (NRC 1994, p. 173).

The NRC (2002) report Estimating the Public Health Benefits of Proposed Air Pollution Regulations identified three barriers to the acceptance of recent Environmental Protection Agency (EPA) health benefit analyses: large amounts of uncertainty inherent in such analyses, EPA’s manner of dealing with them, and the fact that “projected health benefits are often reported as absolute numbers of avoided death or adverse health outcomes” (p. 126). A primary analysis provides a quantification of uncertainty, but “the probability models in EPA’s primary analyses incorporate only one of the many sources of uncertainty in these analyses: the random sampling error in the estimated concentration-response function” (p. 128).

NRC (2002) stated that modelers often assume that their models are correct and base estimates of the models’ parameter values on single studies. If a different sample of the same size had been drawn from the population, that procedure would have resulted in different estimates. Uncertainty in the estimated concentration-response function arising in that way is termed random sampling error. Obviously, the model may not be
correct or may not be complete, and uncertainties from these sources may be significant.

Ancillary uncertainty analyses list other sources of uncertainty and provide supplementary calculations based on alternative hypotheses. Those uncertainties “may be characterized only subjectively by reference to expert judgment” (NRC 2002, p. 135). EPA is encouraged to “explore alternative options for incorporating expert judgment into its probabilistic uncertainty analyses” (NRC 2002, p. 137).

The Bulletin and the Committee’s Response to Proposed Uncertainty Standards

The implications of variability and uncertainty with regard to central estimates and plausible ranges of risks have been discussed. The focus here is on the bulletin sections that address uncertainty directly. The bulletin's special standards for influential risk assessments include the following requirements:


4. Characterize uncertainty with respect to the major findings of the assessment including: a. document and disclose the nature and quantitative implications of model uncertainty, and the relative plausibility of different models based on scientific judgment; and, where feasible: b. include a sensitivity analysis; and c. provide a quantitative distribution of the uncertainty (OMB 2006, p. 25).


Requiring strict adherence to point 4.c would make the analyst’s job very difficult in many circumstances. Such a quantitative distribution of uncertainty can often be produced, but the numerical values of the uncertainty distribution may not be highly accurate. The qualifying “where feasible” is too vague to serve as technical guidance. How is feasibility determined? Could studies with unwelcome results be held to higher feasibility standards? Clear guidance regarding uncertainty defaults would involve recognizing the necessity of such defaults while recognizing their provisional character. At the same time, research is needed to improve and validate default uncertainty factors.
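For concreteness, the sensitivity analysis called for in point 4.b can be as simple as a one-at-a-time perturbation of the inputs; the risk model and parameter names in the sketch below are hypothetical.

```python
# A hypothetical risk model; the parameter names are illustrative only.
def risk(p):
    return p["exposure"] * p["potency"] / p["body_weight"]

baseline = {"exposure": 2.0, "potency": 0.05, "body_weight": 70.0}

def one_at_a_time(model, base, factor=1.1):
    # Perturb each input by +/-10% while holding the others at their
    # baseline values, and record the resulting range of outputs.
    spread = {}
    for name in base:
        lo = dict(base)
        hi = dict(base)
        lo[name] = base[name] / factor
        hi[name] = base[name] * factor
        spread[name] = (model(lo), model(hi))
    return spread
```

The per-input output ranges show which parameters the result is most sensitive to; more elaborate schemes (variance-based or Monte Carlo sensitivity analysis) refine the same idea.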

Model uncertainty is mentioned only in the supplementary information:


A model is a mathematical representation—usually a simplified one—of reality. Where a risk can be plausibly characterized by alternative models, the difference between the results of the alternative models is model uncertainty…When risk assessors face model uncertainty, they need to document and disclose the nature and degree of model uncertainty. This can be done by performing multiple assessments with different models and reporting the extent of the differences in results. A weighted average of results from alternative models based on expert weightings may also be informative (OMB 2006, p. 18).


Model uncertainty has been addressed in various ways. For example, parameter uncertainty can be described by placing (joint) distributions over the parameters of a particular model (Jones et al. 2001a), and then model uncertainty can be addressed by defining a distribution over possible models (Bailer et al. 2005). Although there are applicable methods for evaluating model uncertainty, there is not yet a standard method (level 2 of good practice). Methods for determining “a weighted average of results from alternative models based on expert weightings” (OMB 2006, p. 18) would constitute a research program rather than a body of applicable techniques. Thus, the committee emphasizes that although methods exist for addressing model uncertainty, there are no standard methods, and some methods are still in the initial stages of development. Furthermore, model uncertainty may dominate parameter uncertainty in many situations and, as indicated by the lack of standard methods, may be more difficult, if not impossible, to quantify. That problem is a key limitation to the bulletin’s call for model uncertainty analysis. The committee notes that the selection of the models considered for any averaging process should reflect candidate models that are plausible. The averaging of output from plausible and implausible models is a useless exercise.
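A minimal sketch of the weighted-averaging idea follows; the two dose-response models and the plausibility weights are hypothetical stand-ins for what expert elicitation or Bayesian model averaging would supply.

```python
# Two hypothetical dose-response models for the same end point; the
# coefficients are illustrative only.
def linear(dose):
    return 0.01 * dose

def quadratic(dose):
    return 0.002 * dose ** 2

# Illustrative plausibility weights. In practice these would come from
# expert elicitation or posterior model probabilities, and implausible
# models should be excluded before any averaging is done.
weighted_models = [(linear, 0.7), (quadratic, 0.3)]

def averaged_risk(dose):
    # Model-averaged risk estimate over the plausible candidate models.
    return sum(w * m(dose) for m, w in weighted_models)
```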

The bulletin is silent on dependence modeling. OMB Circular A-4 does broach this issue briefly:


You should pay attention to correlated inputs. Often times, the standard defaults in Monte Carlo and other similar simulation packages assume independence across distributions. Failing to correctly account for correlated distributions of inputs can cause the resultant output uncertainty intervals to be too large,
although in many cases the overall effect is ambiguous. You should make a special effort to portray the probabilistic results—in graphs and/or tables—clearly and meaningfully (OMB 2003, pp. 41-42).


Neglecting dependence can lead to large errors, both conservative and nonconservative. The bulletin’s silence in this regard is a serious omission. As indicated previously, techniques for inferring, eliciting, and modeling dependence are still the subjects of active research in risk assessment. Technical guidance is premature, but the issue of dependence modeling should be acknowledged and targeted for research.
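The committee's point can be illustrated with a two-input sketch in which correlation is induced between standard-normal inputs; the construction is the two-variable special case of sampling from a Gaussian copula (equivalently, a Cholesky factorization).

```python
import random
import statistics

def sample_sum(rho, n):
    # X and Y are standard-normal inputs with correlation rho, built
    # from independent draws via z2 = rho*z1 + sqrt(1 - rho^2)*z.
    out = []
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        z2 = rho * z1 + (1.0 - rho ** 2) ** 0.5 * random.gauss(0.0, 1.0)
        out.append(z1 + z2)
    return out

# Var(X + Y) = 2(1 + rho): assuming independence (rho = 0) when the
# inputs are in fact positively correlated understates the output
# spread; negative correlation has the opposite effect.
```

With rho = 0.8 the standard deviation of the sum is sqrt(3.6) ≈ 1.90, versus sqrt(2) ≈ 1.41 under the independence default, so assuming independence here understates the output spread by roughly 25%.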

The ability to quantify and propagate uncertainty is still in development. However, uncertainty analysis has developed further and faster than our ability to use the tools in decision-making. Questions, such as how uncertainty analysis should be used to set action levels and make regulatory decisions, deserve more attention.

ADVERSE EFFECTS

The core task of risk assessment is the analysis of risks associated with a particular activity, outcome, or event. The choice of the end point of interest is a critical step in risk assessment. The end point could be a human health effect, such as death, or other events, such as collapse of a bridge or failure of a nuclear reactor. In the discussion of adverse effects, the bulletin limits its discussion almost exclusively to adverse human health effects, with no discussion of engineering failures or other types of adverse effects. This section reviews the bulletin’s statements regarding adverse effects and thus focuses on human health effects.

The bulletin focuses on the choice and determination of an “adverse effect” as the end point of risk assessment and states that “where human health effects are a concern, determinations of which effects are adverse shall be specifically identified and justified based on the best available scientific information generally accepted in the relevant clinical and toxicological communities” (OMB 2006, p. 25).

The supplementary information further emphasizes the choice of the adverse effect rather than a nonadverse effect as the end point and states:


It may be necessary for risk assessment reports to distinguish effects which are adverse from those which are non-adverse…In chemical risk assessment, for example, measuring the concentration of a chemical metabolite in a target tissue of the body is not a demonstration of adverse effect, though it may be a valid indicator of chemical exposure. Even the measurement of a biological event in the human body resulting from exposure to a specific chemical may not be a demonstration of an adverse effect. Adversity typically implies some functional impairment or pathologic lesion that affects the performance of the whole organism or reduces an organism's ability to withstand or respond to additional environmental challenges. In cases where qualified specialists disagree as to whether a measured effect is adverse or likely to be adverse, the extent of the differences in scientific opinion about adversity should be disclosed in the risk assessment report. In order to convey how the choice of the adverse effect influences a safety assessment, it is useful for the analyst to provide a graphical portrayal of different “safe levels” based on different effects observed in various experiments. If an unusual or mild effect is used in making the adverse-effect determination, the assessment should describe the ramifications of the effect and its degree of adversity compared to adverse effects that are better understood and commonly used in safety assessment (OMB 2006, p. 20).


The above definition of a human adverse effect as typically one in which “some functional impairment or pathologic lesion…affects the performance of the whole organism or reduces an organism’s ability to withstand or respond to additional environmental challenges” (OMB 2006, p. 20) implies a clinically apparent effect.4 However, a goal of public health is to control exposures before the occurrence of functional impairment of the whole organism. Recent efforts have been made to identify measurable adverse effects or biologic changes that occur at a point at which they are minor, reversible, or subclinical and that do not result in functional impairment of the whole organism, even in the most vulnerable or susceptible individuals in the population.

4 The proposed definition of adverse effect generally follows the approach of EPA. However, the distinction is that previous EPA guidance on this matter has been relatively flexible and could be adjusted or changed as science advanced.

Dividing effects into dichotomous categories of adverse and nonadverse is problematic. Adverse effects usually develop along a continuum, starting with uptake of a toxicant, distribution and metabolism, contact with a target organ, biologic change, physiologic response and repair, and clinical disease. Thus, with some doses and hosts, biologic changes occur, but the body has sufficient defense mechanisms for detoxification or adaptation, and there is little or no adverse cumulative effect, particularly at a low dose. In other situations, biologic changes are measurable and are precursors of an adverse clinical change, so an adverse effect, or precursor of an adverse effect, could be defined in terms of a chemical metabolite or biologic change that is an indicator of both exposure and effect. The same biologic change could have little impact at a small dose (and so be termed nonadverse using the bulletin's approach) but produce a much larger impact at a greater dose or in a more vulnerable person (and thus be termed adverse).

Two common examples are exposure to carbon monoxide (CO) and exposure to an organophosphorus pesticide. CO binds to hemoglobin with an affinity about 200 times that of oxygen and thus reduces the amount of oxygen carried and released to the body’s tissues. For CO, a biologic (biochemical) monitoring test is used to measure carboxyhemoglobin concentration. At very low concentrations, such as current background concentrations (1-2%), enough oxygen is usually brought to the tissues for there to be no discernible clinical or subclinical effects. However, even mild increases (to 4-6%) can cause symptoms in vulnerable populations. For example, those with underlying heart conditions can experience an increase in cardiac arrhythmias (Sheps et al. 1990) and a decrease in exercise performance (Allred et al. 1989). The developing fetus is also more susceptible to decreases in oxygen content and increases in CO. Therefore, ambient-air standards for CO are set well below the concentrations that would be expected to cause clinical effects even in more susceptible populations.

Organophosphorus and carbamate insecticides can inhibit a variety of enzymes called esterases. An important one is acetylcholinesterase, which breaks down the neurotransmitter acetylcholine. If too much acetylcholine accumulates, there is excessive stimulation of cholinergic nerves, which can lead to a variety of symptoms, including blurry vision, increased salivation, diarrhea, muscle twitching, and, at higher doses, lowered heart rate, cardiac collapse, and
death. Biologic monitoring relies on the measurement of blood cholinesterase activity. The percentage reduction of cholinesterase activity is a measurement of the extent of exposure and often correlates with adverse effects. Medical surveillance and regulations have been based on changes in cholinesterase activity (see, for example, Cal. Code of Regs. tit. 3 § 6728 [2003]). Actions are ordered, even if workers have no clinical symptoms, to avoid continued exposure, reduce the risk of further inhibition of acetylcholinesterase, and prevent the development of acute clinical poisoning or subclinical effects. Toxicologic risk assessment of these insecticides could be based on an end point related to the mode of action (for example, a drop in acetylcholinesterase to 70% of baseline) even if exposed people have no symptoms at that concentration. Rather than dwell on whether an effect can be technically classified as adverse or nonadverse, it seems preferable to explain the rationale for the choice of whatever end point is chosen for the risk assessment, using the best available scientific information generally accepted in the relevant clinical and toxicology communities.

The characterization of adversity as “some functional impairment or pathologic lesion that affects the performance of the whole organism” (OMB 2006, p. 20) is not appropriate for microbial risk assessment. Microbial risk assessment often focuses on the risk of infection rather than directly on the manifestation of adverse effects. Infection (replication of an organism in a host) does not always result in illness, death, or symptoms that affect the performance of the whole organism. The outcome depends on the virulence of the organism, immune responses, and other host factors, such as other underlying diseases. In many enteric infections, many people may have a relatively asymptomatic infection or mild illness but be able to spread the infection through the community. For example, young children infected with hepatitis A have a relatively mild or subclinical illness but shed the virus and are important vehicles of transmission in households and day-care settings (Wallace 1998, pp. 174-178). Similarly, the vast majority of poliovirus infections are asymptomatic, but infected people can excrete the virus in their stools for several weeks (Wallace 1998, pp. 123-127). Others infected with the same enteric organisms may experience frank illness that in some cases can have serious sequelae, including paralysis and other residual neurologic impairment (in the case of poliovirus). Thus, EPA established a treatment goal for surface waters of an annual risk of infection (not illness or disease outcome) of no more than 1 in 10,000 (Regli et al. 1991).

Another problem with the focus on effects on the whole organism is
that it does not address toxicants that preferentially affect one organ (the target organ), such as cadmium and the kidney. The Occupational Safety and Health Administration standard for cadmium mandates both measurement of an exposure (blood and urine cadmium) and evidence of potential injury to the kidney tubule (in the form of beta-2-microglobulin). Actions are taken at various levels of exposure and increases in beta-2-microglobulin to prevent irreversible kidney damage; those actions are taken at levels well below those expected to result in clinically detectable impairment of kidney function (such as a rise in blood urea nitrogen or creatinine) (29 CFR 1910.1027 [2006]).

Sometimes, a biologic effect is chosen because there are more reliable data available for it, and it is a precursor of a more serious adverse outcome. That is the rationale offered in the recent NRC report Health Implications of Perchlorate Ingestion (NRC 2005). The NRC committee acknowledged that perchlorate's inhibition of iodide uptake “is the key biochemical event and not an adverse effect [and] should be used as the basis of the risk assessment. Inhibition of iodide uptake is a more reliable and valid measure, it has been unequivocally demonstrated in humans exposed to perchlorate, and it is the key event that precedes all thyroid-mediated effects of perchlorate exposure” (NRC 2005, p. 14). In this situation, the NRC perchlorate committee recommended using a “nonadverse” effect rather than an “adverse” effect as the point of departure for the perchlorate risk assessment as a health-protective approach. One reason for that approach was the lack of data on the association of perchlorate exposure with thyroid dysfunction in the groups of greatest concern: low-birthweight or preterm newborns, offspring of mothers who had iodide deficiency during gestation, and offspring of hypothyroid mothers.

Among the questions to OMB, the committee asked whether the bulletin supports using a precursor of an adverse effect or other mechanistic data as the basis of a risk assessment, as was recommended in the perchlorate review. OMB responded that although the bulletin does not speak to specific use of precursor effects, it does not preclude the use of a precursor of an adverse effect or other mechanistic data as the basis of a risk assessment. The committee nevertheless concludes that the bulletin’s focus on the choice of an adverse effect and the description of what is and is not an adverse effect give a strong message for what would be considered acceptable and nonacceptable end points for toxicologic risk assessment.

In summary, on the topic of risk assessment end points and adverse effects, the committee concludes that the bulletin has ventured into a technical realm of the risk assessment process that is scientifically complex and uncertain and has offered simplistic and restrictive guidance concerning adverse effects. The committee notes that this issue is one of many scientifically difficult matters that must be confronted in the conduct of risk assessment. Why OMB has chosen to emphasize this one matter as opposed to any of a number of other complex issues is unclear.

RISK COMPARISONS

The bulletin’s Standard 6 under general risk assessment standards states that a risk assessment should “provide an executive summary including…. d. information that places the risk in context/perspective with other risks familiar to the target audience” (OMB 2006, p. 24). The supplementary information adds, “Due care must be taken in making risk comparisons. Agencies might want to consult the risk communication literature when considering appropriate comparisons. Although the risk assessor has considerable latitude in making risk comparisons, the fundamental point is that risk should be placed in context that is useful and relevant to the intended audience” (OMB 2006, p. 15).

There are two conceivable legitimate purposes for risk comparisons. Readers who consult the risk communication literature will find that serving either purpose requires both formal analysis to ensure that defensible comparisons are being made and dedicated empirical research to ensure that the result is understood as intended. Readers of that literature will also find that poorly done risk comparisons can confuse, mislead, and antagonize recipients. Unless done in a scientifically sound way, risk comparisons are unlikely to be useful and relevant and hence should be avoided.

One conceivable legitimate purpose is giving recipients an intuitive feeling for just how large a risk is by comparing it with another, otherwise similar, risk that recipients understand. For example, roughly one American in a million dies from lightning in an average year (NOAA 1995). “As likely as being hit by lightning” would be a relevant and useful comparison for someone who has an accurate intuitive feeling for the probability of being hit by lightning, faces roughly that “average” risk, and considers the comparison risk to be like death by lightning in all important respects. It is not hard to imagine each of these conditions failing, rendering the comparisons irrelevant or harmful:

  1. Lightning deaths are so vivid and newsworthy that they might be overestimated relative to other, equally probable events. But “being struck by lightning” is an iconic very-low-probability risk, meaning that it might be underestimated. Where either occurs, the comparison will mislead (Lichtenstein et al. 1978; NRC 1989).

  2. Individual Americans face different risks from lightning. For example, risks are, on average, much higher for golfers than for nursing-home residents. A blanket statement would mislead readers who did not think about this variability and about how their own risk compares with that of the average American (Slovic 2000; Tversky and Kahneman 1974).

  3. Death by lightning has distinctive properties. It is sometimes immediate, sometimes preceded by painful suffering. It can leave victims and their survivors unprepared. It offers some possibility of risk reduction, which people may understand to some degree. It poses an acute threat at a few limited times and typically no threat at all otherwise. Each of those properties may lead people to judge such risks differently and so undermine the relevance of comparisons with risks having different properties (Fischhoff et al. 1978; Lowrance 1976).

  4. It is often assumed that the risks being used for comparison are widely considered acceptable at their present levels. The risks may be accepted in the trivial sense that people are, in fact, living with them. But that does not make them acceptable in the sense that people believe that they are as low as they should or could be. It would be wrong to make comparisons with risks that responsible organizations are working diligently to reduce. For example, the National Lightning Safety Institute (NLSI) and the United States Golf Association do not consider contemporary risks of injury and death from lightning strikes to be acceptable: “A strong case can be made for reducing lightning’s human and economic costs through the adoption of proactive defensive guidelines” (Kithil 1995).
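The variability point in item 2 can be made concrete with a short arithmetic sketch. The subgroup sizes and risk levels below are hypothetical, chosen only to illustrate how a population-wide average can differ sharply from the risk faced by every subgroup it summarizes:

```python
# Hypothetical illustration: a population-average annual risk can differ
# sharply from the risk faced by any particular subgroup. All numbers here
# are invented for illustration; they are not NOAA statistics.

def average_risk(groups):
    """Population-weighted average of per-person annual risks.

    groups: list of (population size, annual per-person risk) pairs.
    """
    total_people = sum(n for n, _ in groups)
    expected_deaths = sum(n * r for n, r in groups)
    return expected_deaths / total_people

groups = [
    (5_000_000, 5e-6),    # hypothetical high-exposure group (e.g., frequent golfers)
    (295_000_000, 2e-7),  # hypothetical everyone-else group
]

avg = average_risk(groups)
print(f"population-average annual risk: {avg:.1e}")  # 2.8e-07
# A blanket "average American" figure understates the high-exposure group's
# risk roughly 18-fold and overstates everyone else's by roughly 40%.
```

Under these assumed numbers, the blanket average describes no one: it misleads both the high-exposure group and the rest of the population.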

The second conceivable use of risk comparisons is to facilitate making consistent decisions regarding different risks. Other things being equal, one would want similar risks from different sources to be treated the same. However, many things might need to be held equal, including the various properties of risks (discussed above) that might make people want to treat them differently despite similarity in one dimension (for example, annual fatality rate among Americans) (HM Treasury 2005; Wittenberg et al. 2003).

The same risk may be acceptable in one setting but not another if the associated benefits are different (for example, being struck by lightning while golfing or working on a road crew). Even when making voluntary decisions, people do not accept risks in isolation but in the context of the associated benefits. As a result, acceptable risk is a misnomer except as shorthand for a voluntarily assumed risk accompanied by acceptable benefits (Fischhoff et al. 1981).

The bulletin does not convey how difficult it is to produce useful and relevant risk comparisons. Unless such comparisons are developed in a scientifically sound and empirically evaluated way that addresses the values and circumstances of all recipients, risk comparisons should not be made.

SUMMARY OF COMMITTEE COMMENTS ON INDIVIDUAL STANDARDS

The proposed bulletin describes 16 standards for risk assessment; they are listed in Table 4-3. Many of the standards have multiple components. Section IV of the bulletin describes seven general risk assessment and reporting standards, of which the seventh applies to risk assessments used for regulatory analysis. Section V describes nine special standards for influential risk assessments. The standards comprise a mixture of qualitative and quantitative requirements. The committee reviewed each standard and component separately for soundness and clarity. The committee also considered the general question of developing and implementing risk assessment guidance for all federal agencies.

The committee found many of the standards to be unclear or flawed. It also evaluated whether each proposed standard pertained to risk assessment and hence should be addressed by risk assessors or should guide risk managers or others. The committee’s concerns are described below and summarized in Table 4-3. Because of the major changes required to rectify the proposed standards, the lack of a clear rationale for proposing the particular standards, and the heterogeneity of risk assessment applications in the federal government, the committee concludes that OMB should not be issuing these standards as technical guidance and that, as discussed in Chapter 7, the development of technical guidance should be left to the individual agencies.


TABLE 4-3 Technical and Scientific Circumstances That Prevent the Current Applicability of the Bulletin

General Risk Assessment Standards (from Bulletin)

Technical and Scientific Evaluation (from Committee)

1. Provide a clear statement of the informational needs of decision makers, including the objectives of the risk assessment.

2. Clearly summarize the scope of the assessment, including a description of:

  a. the agent, technology, and/or activity that is the subject of the assessment;

  b. the hazard of concern;

  c. the affected entities (population(s), subpopulation(s), individuals, natural resources, ecosystems, or other) that are the subject of the assessment;

  d. the exposure/event scenarios relevant to the objectives of the assessment; and

  e. the type of event-consequence or dose-response relationship for the hazard of concern.

These qualitative standards (1-2.e) could improve the clarity of risk assessment in the federal government for any risk assessments that do not already implement such standards (although the existence of such problems is not established by the bulletin). They are consistent with recommendations of cited studies.

3. Provide a characterization of risk, qualitatively and, whenever possible, quantitatively. When a quantitative characterization of risk is provided, a range of plausible risk estimates shall be provided.

The standard does not provide clear guidance on how such a range is to be defined. As a result, it may produce confusion that could erode the quality of risk assessment.

The term plausible risk estimate is undefined in the bulletin. If a distribution is substantially skewed or bimodal, identifying a single estimate considered a “plausible” estimate of the distribution is not meaningful. In that case, a “central” estimate will not reasonably represent the distribution. When distributions reflect variability, the ambiguous term plausible appears to be at odds with the fundamental orientation of public-health practice and prevention, especially when the applicable laws seek to protect the most vulnerable in the population: infants, children, the elderly, and those with illnesses or predispositions to illness. Using a mean or central estimate to identify the most “plausible” individual would undermine public-health goals.

This and related standards represent technical guidance. Although the issues of uncertainty and variability are topics of general concern in risk assessment, the detailed implementation of uncertainty and variability analyses is best addressed by individual agencies to reflect their expertise in the relevant science as applied to their mandate.

4. Be scientifically objective:

  a. as a matter of substance, neither minimizing nor exaggerating the nature and magnitude of risks;

  b. giving weight to both positive and negative studies in light of each study’s technical quality; and

  c. as a matter of presentation:

    i. presenting the information about risk in an accurate, clear, complete, and unbiased manner; and

    ii. describing the data, methods, and assumptions used in the assessment with a high degree of transparency.

Components 4.a and 4.c could improve the clarity of a risk assessment in any context where they are not already practiced (although the existence of such problems is not established by the bulletin). Component 4.a could, however, degrade risk analysis if it were interpreted so as to deprive decision-makers of important information on sensitive subpopulations on the grounds that such information may generate risk estimates considerably higher than central-tendency or general-population estimates. Information on the variability of effects across potentially affected populations is essential to decision making.

Component 4.b is problematic because it requires assigning weights, despite the lack of a standard procedure for doing so. Assigning weights is a matter of judgment and explicitly introduces another element of uncertainty. Thus, specific weights should not be assigned to studies. Rather, the most appropriate scientific information should be considered during preparation of risk assessments.

5. For critical assumptions in the assessment, whenever possible, include a quantitative evaluation of reasonable alternative assumptions and their implications for the key findings of the assessment.

This standard’s lack of specificity prevents any meaningful application. If the standard is referring to model assumptions, it may be ignoring more important assumptions in other aspects of the risk assessment. If it is referring to default assumptions, it may be ignoring critical science-policy concerns or statutory requirements, such as protection of sensitive people. The vagueness of the standard could open the door to endless guesswork.


6. Provide an executive summary including:

  a. key elements of the assessment’s objectives and scope;

  b. key findings;

  c. key scientific limitations and uncertainties and, whenever possible, their quantitative implications; and

  d. information that places the risk in context/perspective with other risks familiar to the target audience.

These qualitative standards (6.a-c) could improve the clarity of risk assessment in the federal government if risk assessments do not implement them already (although the existence of such problems is not established by the bulletin). They are consistent with recommendations of cited studies. However, there are numerous problems in the bulletin’s treatment of uncertainty.

Standard 6.d does not pertain to risk assessment but to risk communication. This component is at variance with the scientific literature, and the reference to risk communication is inconsistent with current scientific understanding, as set forth by NRC and other bodies. The bulletin suggests one-way risk communication. Instead, risk communication is an essential element of all stages of risk assessment. Unless risk communication occurs at all stages of the risk assessment and involves all stakeholders, it may be misdirected and mistrusted. The wording of this standard is so vague as to be useless. It also recommends a practice (risk comparison) that has been widely criticized as potentially illogical and as potentially reducing rather than enhancing the usefulness of an assessment.

General Risk Assessment Standards for Risk Assessments Applicable to Regulatory Analysis (from Bulletin)

Technical and Scientific Evaluation (from Committee)

7. For risk assessments that will be used for regulatory analysis, the risk assessment also shall include:

  a. an evaluation of alternative options, clearly establishing the baseline risk as well as the risk reduction alternatives that will be evaluated;

  b. a comparison of the baseline risk against the risk associated with the alternative mitigation measures being considered, and assess, to the extent feasible, countervailing risks caused by alternative mitigation measures;

  c. information on the timing of exposure and the onset of the adverse effect(s), as well as the timing of control measures and the reduction or cessation of adverse effects;

  d. estimates of population risk when estimates of individual risk are developed; and

The standards identified as 7a-c should not be part of risk assessment guidance. The first recommendation of the Red Book is this: “We recommend that regulatory agencies take steps to establish and maintain a clear conceptual distinction between assessment of risks and consideration of risk management alternatives; that is, the scientific findings and policy judgments embodied in risk assessments should be explicitly distinguished from the political, economic, and technical considerations that influence the design and choice of regulatory strategies” (NRC 1983, p. 7).

The standards described in 7a-c ignore the essential distinction between risk assessment and risk management. Although it is important that there be communication between risk assessor and risk manager, risk management tasks and decisions should not be delegated to the risk assessment. Furthermore, although risk assessors may evaluate various risk management options (for example, mitigation options for a Superfund site), the risk assessment should remain distinct from the political, economic, and technical considerations of risk management.

The issue of resources also arises with these standards. For example, standard 7b requires that the countervailing risks caused by alternative mitigation measures be considered. That requirement would substantially broaden the content and scope of risk assessments. Resources to accomplish that task would probably not be available, and information on choices of alternative measures would probably reside with the risk manager, not the risk assessor.

The reference to population risk is unclear and undefined in the bulletin. A common practice in health risk assessment is to estimate population risk by calculating the total population impact (that is, risk times population exposure). That estimate may be referred to as “population burden” rather than “population risk.” In other cases, population risk may be evaluated by considering the distribution of risks within a population or the shift of the population with regard to a risk. Thus, although the health risk assessment field usually follows the total-population-impact convention, the situation is dynamic, and as methods evolve, the applicability of one convention to all types of risk assessments and situations is not likely to be practical or possible.

Another important consideration is that estimates of individual risk are generally developed to address concerns for the most vulnerable people in a population—who, almost by definition, lie in the tails of the probability distribution. To protect the entire population, one often evaluates the risk to the most vulnerable. Thus, the risk to the entire population includes the risk to the most vulnerable. For example, if one were to evaluate the risk of death for a particular ambient air quality standard, those most likely to die from exposure would be the most vulnerable in the population.

  e. whenever possible, a range of plausible risk estimates, including central or expected estimates, when a quantitative characterization of risk is made available.

The bulletin’s discussion of central and expected estimates and uncertainty is confused and prevents useful application of the standard. It is misleading to suggest, as the bulletin does, that “central” and “expected” estimates are synonymous. As discussed in the section “Range of Risk Estimates and Central Estimates,” the presentation of single estimates may provide an incomplete picture, and without proper definitions and context, use of the range or “central estimate” will be misleading. The bulletin does not state the distribution to be considered in the evaluation or even whether it is to reflect uncertainty or variability. Instead, the bulletin appears to suggest “averaging” the information from a combined distribution of uncertainty and variability. It is not in decision-makers’ or society’s interest to treat fundamentally different predictions as quantities that can be “averaged.”

Influential Risk Assessment Standards (from Bulletin)

Technical and Scientific Evaluation (from Committee)

General comments on the category

Whether an analysis constitutes an “influential risk assessment” may not be clear at the outset. Moreover, this standard’s focus on economic impacts imposes risk management concerns on risk assessment. Arbitrarily separating risk assessments into two broad categories (influential and noninfluential) ignores the continuum of risk assessment efforts.

All influential agency risk assessments shall:

1. Be “capable of being substantially reproduced” as defined in the OMB Information Quality Guidelines

Appears to represent a good practice activity for any risk assessments that do not already implement such standards (although the existence of such problems is not established by the bulletin).


2. Compare the results of the assessment to other results published on the same topic from qualified scientific organizations.

It may be appropriate for some assessments to compare their results with those derived by other scientific organizations regarding such issues as the affected population, geography, time scales, and the definition of adverse effects. However, the bulletin fails to define which comparisons are required. With specialized risk assessments, substantial assumptions may be needed to make comparisons. When external risk assessments have not followed an agency’s own standards, comparisons may undermine the quality of its work. Finally, devoting resources to comparisons with external risk assessments that do not meet the standards may leave less time for the agency’s own risk assessments and thus affect the quality and output of agency products.

3. Highlight central estimates as well as high-end and low-end estimates of risk when such estimates are uncertain.

The bulletin’s discussion of central estimates and uncertainty is confusing and prevents useful application of the standard. The detailed requirement of a central estimate, as well as high-end and low-end estimates, is clearly inapplicable and inappropriate for some types of risk assessments. Some NRC committees have warned that descriptions of “central estimates” of risk may have little meaning when applied to models for high- to low-dose extrapolation.

Finally, the strong emphasis on central estimates in the bulletin means that the most vulnerable people in a population—who, almost by definition, lie in the tails of the probability distribution—might be underrepresented, depending on the characterization of the central estimate.

4. Characterize uncertainty with respect to the major findings of the assessment including:

  a. document and disclose the nature and quantitative implications of model uncertainty, and the relative plausibility of different models based on scientific judgment; and where feasible:

  b. include a sensitivity analysis; and

  c. provide a quantitative distribution of the uncertainty.

The aspiration level in the bulletin is at the edge of the current state of the art and exceeds what is practically feasible. The recent NRC assessments of perchlorate, arsenic, and methyl mercury discussed uncertainties qualitatively, not quantitatively. This standard would force agencies to undertake basic research before they can perform their assigned roles. These requirements are not fully articulated, either by the bulletin or by the research community. Many terms are vague, leaving the requirements not operational. The bulletin does not constitute “technical guidance” and hence cannot “enhance the technical quality…of risk assessments” (OMB 2006, p. 3).

5. Portray results based on different effects observed and/or different studies to convey how the choice of effect and/or study influences the assessment.

The wording of this standard may undermine good scientific practice. In many cases, basing results on alternative effects can be counterproductive. For example, for methyl mercury, the assessment should be based on the most sensitive effect and not on a less sensitive or “alternate” effect. The presentation of alternative analyses may not be informative. The standard does not allow risk assessors sufficient flexibility to adapt to the nature of the available data and science.

6. Characterize, to the extent feasible, variability through a quantitative distribution, reflecting different affected population(s), time scales, geography, or other parameters relevant to the needs and objectives of the assessment.

If this standard is implemented literally, few risk assessments could be completed without significant new research and tool development.

7. Where human health effects are a concern, determinations of which effects are adverse shall be specifically identified and justified based on the best available scientific information generally accepted in the relevant clinical and toxicological communities.

The bulletin’s definition of adverse effect implies a clinically apparent effect. This ignores public health’s fundamental goal of controlling exposures well before the occurrence of functional impairment of the whole organism. Dividing effects into dichotomous categories of “adverse” and “nonadverse” ignores the scientific reality that adverse effects may be manifested along a continuum. Furthermore, many effects of central importance to public health (and risk assessment) are not adverse themselves but associated with healthy functioning, for example, carboxyhemoglobin formation, acetylcholinesterase inhibition, and microbial infection. The bulletin proposes simplistic and restrictive guidance concerning adverse effects, which is at odds with relevant science and legislation.


8. Provide discussion, to the extent possible, of the nature, difficulty, feasibility, cost and time associated with undertaking research to resolve a report's key scientific limitations and uncertainties.

This standard appears to address how much the uncertainty could be reduced with various investments in research. That is not risk assessment, but management of risk assessment. Because each risk assessment involves many “default” assumptions, it would be more cost-effective to undertake this activity as an overarching research activity and not as a component of each influential risk assessment.

9. Consider all significant comments received on a draft risk assessment report and:

Appears to represent a good practice for any risk assessments that do not already implement such standards (although the existence of such problems is not established by the bulletin). However, requiring a federal agency to provide a rationale for why its position is preferable to positions proposed by commenters is likely to expend excessive resources and might result in less time to devote to agency risk assessments, thus affecting the quality and output of agency products.

  a. issue a "response-to-comment" document that summarizes the significant comments received and the agency's responses to those comments; and

  b. provide a rationale for why the agency has not adopted the position suggested by commenters and why the agency position is preferable.
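Several entries in Table 4-3 argue that a single “central” estimate cannot represent a substantially skewed risk distribution. A brief simulation sketch, using an arbitrary lognormal distribution chosen purely for illustration (its parameters are not drawn from any actual assessment), shows how far apart candidate “central” values and the upper tail can lie:

```python
# Illustrative only: for a right-skewed (here lognormal) risk distribution,
# the mean, the median, and the upper percentiles can differ severalfold,
# so "central estimate" is ambiguous without further definition.
import math
import random

random.seed(0)
# Arbitrary lognormal parameters (median risk ~8e-7); not from any assessment.
samples = sorted(math.exp(random.gauss(-14.0, 1.5)) for _ in range(100_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]

print(f"mean:            {mean:.2e}")
print(f"median:          {median:.2e}")
print(f"95th percentile: {p95:.2e}")
# Theoretically, mean/median = exp(sigma^2 / 2) ~ 3.1 here, and the 95th
# percentile lies roughly 12x above the median: the candidate "central"
# values disagree severalfold, and none describes the vulnerable tail.
```

The ambiguity the committee identifies is visible directly: the mean, the median, and the upper percentile are all defensible summaries, yet they differ severalfold, and choosing among them is a policy decision rather than a purely technical one.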


REFERENCES

AEC (U.S. Atomic Energy Commission). 1957. Theoretical Possibilities and Consequences of Major Accidents in Large Nuclear Power Plants. WASH-740. Washington, DC: U.S. Atomic Energy Commission.

Allred, E.N., E.R. Bleecker, B.R. Chaitman, T.E. Dahms, S.O. Gottlieb, J.D. Hackney, M. Pagano, R.H. Selvester, S.M. Walden, and J. Warren. 1989. Short term effects of carbon monoxide exposure on the exercise performance of subjects with coronary artery disease. N. Engl. J. Med. 321(21):1426-1432.

ANS (American Nuclear Society). 2003. ANSI/ANS-58.21-2003. External-Events PRA Methodology: American National Standard. American Nuclear Society. December 2003.

ASME (American Society of Mechanical Engineers). 2005. ASME RA-Sb-2005. Standard for Probabilistic Risk Assessment for Nuclear Power Plant Applications: Addendum B to ASME RA-S-2002. American Society of Mechanical Engineers. December 30, 2005.

Bailer, A.J., R.B. Noble, and M.W. Wheeler. 2005. Model uncertainty and risk estimation for experimental studies of quantal responses. Risk Anal. 25(2):291-299.

Bedford, T., and R.M. Cooke. 2001. P. 5 in Probabilistic Risk Analysis: Foundations and Methods. Cambridge: Cambridge University Press.

Brown, J., L.H.J. Goossens, F.T. Harper, B.C.P. Kraan, F.E. Haskin, M.L. Abbott, R.M. Cooke, M.L. Young, J.A. Jones, S.C. Hora, A. Rood, and J. Randall. 1997. Probabilistic Accident Consequence Uncertainty Study: Food Chain Uncertainty Assessment, Vols. 1 and 2. NUREG/CR-6523, EUR 16771, SAND97-0335. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/510290-keuF8T/webviewable/510290.pdf and http://www.osti.gov/bridge/servlets/purl/510291-eUsNPE/webviewable/510291.pdf [accessed Oct. 16, 2006].

Brown, J., J. Ehrhardt, L.H.J. Goossens, R.M. Cooke, F. Fischer, I. Hasemann, J.A. Jones, B.C.P. Kraan, and J.G. Smith. 2001. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Uncertainty from the Food Chain Module. EUR 18823. FZKA 6309. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18823_en.pdf [accessed Oct. 17, 2006].

Budnitz, R.J., G. Apostolakis, D.M. Boore, L.S. Cluff, K.J. Coppersmith, C.A. Cornell, and P.A. Morris. 1997. Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts. NUREG/CR-6372. Prepared for Division of Engineering Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington DC, Office of Defense Programs, U.S. Department of Energy, Germantown, MD, and Electric Power Research Institute, Palo Alto, CA, by Senior Seismic Hazard Analysis Committee (SSHAC) [online]. Available: http://www.osti.gov/energycitations/servlets/purl/479072-krGkYU/webviewable/479072.pdf [accessed Oct. 18, 2006].

Cooke, R.M., and L.H.J. Goossens. 1999. Nuclear Science and Technology: Procedures Guide for Structured Expert Judgment. EUR 18820EN. Prepared for Commission of European Communities Directorate-general XI (Environment and Nuclear Safety), Luxembourg, by Delft University of Technology, Delft, The Netherlands. June 1999 [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18820_en.pdf [accessed Oct. 17, 2006].

DOE (U.S. Department of Energy). 1996. Characterization of Uncertainties in Risk Assessment with Special Reference to Probabilistic Uncertainty Analysis. RCRA/CERCLA Information Brief. EH 413-068/0496. U.S. Department of Energy, Office of Environmental Policy and Assistance [online]. Available: http://www.eh.doe.gov/oepa/guidance/risk/un-cert.pdf [accessed Oct. 16, 2006].

Eckerman, K.F., R.W. Leggett, C.B. Nelson, J.S. Puskin, and A.C.B. Richardson. 1999. Cancer Risk Coefficients for Environmental Exposure to Radionuclides. Federal Guidance Report No.13. EPA 402-R-99-001. Prepared for Office of Radiation and Indoor Air, U.S. Environmental Protection Agency, Washington, DC, by Oak Ridge National Laboratory, Oak Ridge, TN. September 1999 [online]. Available: http://www.epa.gov/radiation/docs/federal/402-r-99-001.pdf [accessed Oct. 16, 2006].

Fischhoff, B., P. Slovic, S. Lichtenstein, S. Read, and B. Combs. 1978. How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits. Policy Sci. 9(2):127-152.

Fischhoff, B., S. Lichtenstein, P. Slovic, S.L. Derby, and R.L. Keeney. 1981. Acceptable Risk. New York: Cambridge University Press.

Garrick, B.J. 1984. Recent case studies and advancements in probabilistic risk assessments. Risk Anal. 4(4):267-279.

Goossens, L.H.J., R.M. Cooke, and B.C.P. Kraan. 1996. Evaluation of Weighting Schemes for Expert Judgement Studies. Prepared for Commission of the European Communities, Directorate-General for Science, Research and Development, XII-F-6, by Delft University of Technology, Delft, The Netherlands. 75 pp.

Goossens, L.H.J., J. Boardman, F.T. Harper, B.C.P. Kraan, R.M. Cooke, M.L. Young, J.A. Jones, and S.C. Hora. 1997. Probabilistic Accident Consequence Uncertainty Study: Uncertainty Assessment for Deposited Material and External Doses, Vols. 1 and 2. NUREG/CR-6526. EUR 16772. SAND97-2323. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/291006-pL3L0D/webviewable/291006.pdf and http://www.osti.gov/bridge/servlets/purl/291007-7XzROO/webviewable/291007.pdf [accessed Oct. 16, 2006].

Goossens, L.H.J., J.A. Jones, J. Ehrhardt, B.C.P. Kraan, and R.M. Cooke. 2001. Probabilistic Accident Consequence Uncertainty Assessment: Countermeasures Uncertainty Assessment. EUR 18821. FZKA 6307. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18821_en.pdf [accessed Oct. 17, 2006].

Harper, F.T., L.H.J. Goossens, R.M. Cooke, S.C. Hora, M.L. Young, J. Päsler-Sauer, L.A. Miller, B. Kraan, C. Lui, M.D. McKay, J.C. Helton, and J.A. Jones. 1995. Probabilistic Accident Consequence Uncertainty Study: Dispersion and Deposition Uncertainty Assessment, Vols. 1 and 2. NUREG/CR-6244. EUR 15855 EN. SAND94-1453. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/10125585-aArQNy/webviewable/10125585.pdf and http://www.osti.gov/bridge/servlets/purl/25041-SZccBx/webviewable/25041.pdf [accessed Oct. 17, 2006].

Haskin, F.E., F.T. Harper, L.H.J. Goossens, B.C.P. Kraan, J.B. Grupa, and J. Randall. 1997. Probabilistic Accident Consequence Uncertainty Study: Early Health Effects Uncertainty Assessment, Vols. 1 and 2. NUREG/CR-6545. EUR 16775. SAND97-2689. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/291010-cH8Oey/webviewable/291010.pdf and http://www.osti.gov/bridge/servlets/purl/291011-8Nknmm/webviewable/291011.pdf [accessed Oct. 17, 2006].

HM Treasury (Her Majesty’s Treasury). 2005. Managing Risks to the Public: Appraisal Guidance. London: HM Treasury. June 2005 [online]. Available: http://www.hm-treasury.gov.uk/media/8AB/54/Managing_risks_to_the_public.pdf [accessed Oct. 18, 2006].

Hoeting, J.A., D. Madigan, A.E. Raftery, and C.T. Volinsky. 1999. Bayesian model averaging: A tutorial. Stat. Sci. 14(4):382-417.

IRIS (Integrated Risk Information System). 2006. IRIS Database for Risk Assessment, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/iris/ [accessed Nov. 10, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, J. Brown, R.M. Cooke, F. Fischer, I. Hasemann, and B.C.P. Kraan. 2001a. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Methodology and Processing Techniques. EUR 18827. FZKA-6313. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18827_en.pdf [accessed Oct. 17, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, J. Brown, R.M. Cooke, F. Fischer, I. Hasemann, and B.C.P. Kraan. 2001b. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Overall Uncertainty Analysis. EUR 18826. FZKA 6312. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18826_en.pdf [accessed Oct. 17, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, J. Brown, R.M. Cooke, F. Fischer, I. Hasemann, B.C.P. Kraan, A. Khursheed, and A. Phipps. 2001c. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Uncertainty from the Dose Module. EUR 18825. FZKA-6311. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18825_en.pdf [accessed Oct. 17, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, R.M. Cooke, F. Fischer, I. Hasemann, and B.C.P. Kraan. 2001d. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Uncertainty from the Early and Late Health Effects Module. EUR 18824. FZKA-6310. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18824_en.pdf [accessed Oct. 17, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, R.M. Cooke, F. Fischer, I. Hasemann, and B.C.P. Kraan. 2001e. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Uncertainty from the Atmospheric Dispersion and Deposition Module. EUR 18822. FZKA-6308. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18822_en.pdf [accessed Oct. 17, 2006].

Kang, S.H., R.L. Kodell, and J.J. Chen. 2000. Incorporating model uncertainties along with data uncertainties in microbial risk assessment. Regul. Toxicol. Pharmacol. 32(1):68-72.

Kemeny, J. 1979. Report of the President's Commission on the Accident at Three Mile Island. Report of the Public Health and Safety Task Force. Washington, DC: U.S. Government Printing Office. October 1979 [online]. Available: http://www.threemileisland.org/downloads//193.pdf [accessed Nov. 27, 2006].

Kithil, R. 1995. Lightning's Social and Economic Costs. Presentation at International Aerospace and Ground Conference on Lightning and Static Electricity, September 26-28, 1995, Williamsburg, VA [online]. Available: http://www.lightningsafety.com/nlsi_lls/sec.html [accessed Oct. 18, 2006].
Lewis, H.W., R.J. Budnitz, H.J.C. Kouts, W.B. Loewenstein, W.D. Rowe, F. von Hippel, and F. Zachariasen. 1978. Risk Assessment Review Group Report to the U.S. Nuclear Regulatory Commission. NUREG/CR-0400. Nuclear Regulatory Commission, Washington, DC. 76 pp.

Lichtenstein, S., P. Slovic, B. Fischhoff, M. Layman, and B. Combs. 1978. Judged frequency of lethal events. J. Exp. Psychol. Learn. 4:551-578.

Little, M.P., C.M. Muirhead, L.H.J. Goossens, F.T. Harper, B.C.P. Kraan, R.M. Cooke, and S.C. Hora. 1997. Probabilistic Accident Consequence Uncertainty Analysis: Late Health Effects Uncertainty Assessment, Vols. 1 and 2. NUREG/CR-6555. EUR 16774. SAND97-2322. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/291008-wV5DjS/webviewable/291008.pdf and http://www.osti.gov/bridge/servlets/purl/291009-fM9p1b/webviewable/291009.pdf [accessed Oct. 18, 2006].

Lowrance, W.W. 1976. Of Acceptable Risk: Science and the Determination of Safety. Los Altos, CA: W. Kaufmann.

Morales, K.H., J.G. Ibrahim, C.J. Chen, and L.M. Ryan. 2006. Bayesian model averaging with applications to benchmark dose estimation for arsenic in drinking water. J. Am. Stat. Assoc. 101(473):9-17.

NASA (National Aeronautics and Space Administration). 2002. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners, Version 1.1. Prepared for Office of Safety and Mission Assurance, NASA Headquarters, Washington, DC [online]. Available: http://www.hq.nasa.gov/office/codeq/doctree/praguide.pdf [accessed Oct. 18, 2006].

NASA (National Aeronautics and Space Administration). 2004. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners. NPR 8705.5. Office of Safety and Mission Assurance, NASA Headquarters, Washington, DC [online]. Available: http://nodis3.gsfc.nasa.gov/npg_img/N_PR_8705_0005_/N_PR_8705_0005_.pdf [accessed Oct. 23, 2006].

NEI (Nuclear Energy Institute). 2000. Probabilistic Risk Assessment Peer Review Process Guidance, Revision A3. NEI-00-02. Washington, DC: Nuclear Energy Institute. March 20, 2000.

NOAA (National Oceanic and Atmospheric Administration). 1995. Natural Hazard Fatalities for the United States, 1994. National Oceanic and Atmospheric Administration, Washington, DC.

NRC (National Research Council). 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press.

NRC (National Research Council). 1989. Improving Risk Communication. Washington, DC: National Academy Press.


NRC (National Research Council). 1991. Human Exposure Assessment for Airborne Pollutants. Washington, DC: National Academy Press.

NRC (National Research Council). 1993. Issues in Risk Assessment, Vols. I, II, and III. Washington, DC: National Academy Press.

NRC (National Research Council). 1994. Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants, Vol. 1. Washington, DC: National Academy Press.

NRC (National Research Council). 1996a. Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants, Vol. 2. Washington, DC: National Academy Press.

NRC (National Research Council). 1996b. Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants, Vol. 3. Washington, DC: National Academy Press.

NRC (National Research Council). 2000a. Toxicological Effects of Methylmercury. Washington, DC: National Academy Press.

NRC (National Research Council). 2000b. Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants, Vol. 4. Washington, DC: National Academy Press.

NRC (National Research Council). 2002. Estimating the Public Health Benefits of Proposed Air Pollution Regulations. Washington, DC: National Academies Press.

NRC (National Research Council). 2004. Spacecraft Water Exposure Guidelines for Selected Contaminants, Vol. 1. Washington, DC: National Academies Press.

NRC (National Research Council). 2005. Health Implications of Perchlorate Ingestion. Washington, DC: National Academies Press.

OMB (U.S. Office of Management and Budget). 2003. Regulatory Analysis. Circular A-4 to the Heads of Executive Agencies and Establishments, September 17, 2003 [online]. Available: http://www.whitehouse.gov/omb/circulars/a004/a-4.pdf [accessed Oct. 12, 2006].

OMB (U.S. Office of Management and Budget). 2006. Proposed Risk Assessment Bulletin. Released January 9, 2006. Washington, DC: Office of Management and Budget, Executive Office of the President [online]. Available: http://www.whitehouse.gov/omb/inforeg/proposed_risk_assessment_bulletin_010906.pdf [accessed Oct. 11, 2006].

Raftery, A.E. 1995. Bayesian model selection in social research. Sociol. Methodol. 25:111-163.

Regli, S., J.B. Rose, C.N. Haas, and C.P. Gerba. 1991. Modelling the risk from Giardia and viruses in drinking water. Am. Water Works Assoc. J. 83(11):76-84.

Rogovin, M., and G.T. Frampton. 1980. Three Mile Island: A Report to the Commissioners and the Public. Washington, DC: U.S. Government Printing Office [online]. Available: http://www.threemileisland.org/downloads//354.pdf [accessed Nov. 27, 2006].

Sheps, D.S., M.C. Herbst, A.L. Hinderliter, K.F. Adams, L.G. Ekelund, J.J. O'Neil, G.M. Goldstein, P.A. Bromberg, J.L. Dalton, M.N. Ballenger, et al. 1990. Production of arrhythmias by elevated carboxyhemoglobin in patients with coronary artery disease. Ann. Intern. Med. 113(5):343-351.

Slovic, P. 2000. The Perception of Risk. London: Earthscan.

Tversky, A., and D. Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science 185(4157):1124-1131.

U.S. NRC (U.S. Nuclear Regulatory Commission). 1975. Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants. WASH-1400, NUREG-75/014. Washington, DC: U.S. Nuclear Regulatory Commission.

U.S. NRC (U.S. Nuclear Regulatory Commission). 1979. Nuclear Regulatory Commission Issues Policy Statement on Reactor Safety Study and Review by Lewis Panel: NRC Statement on Risk Assessment and the Reactor Safety Study Report (WASH-1400) in Light of the Risk Assessment Review Group Report, January 18, 1979. No. 79-19. Office of Public Affairs, U.S. Nuclear Regulatory Commission, Washington, DC.

U.S. NRC (U.S. Nuclear Regulatory Commission). 1983. PRA Procedures Guide: A Guide to the Performance of Probabilistic Risk Assessments for Nuclear Power Plants. NUREG/CR-2300. Washington, DC: U.S. Nuclear Regulatory Commission.

U.S. NRC (U.S. Nuclear Regulatory Commission). 1990. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants. Vol. 1: Final Summary Report; Vol. 2: Appendices A, B, and C; Vol. 3: Appendices D and E. NUREG-1150. Division of Systems Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC [online]. Available: http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr1150/ [accessed Oct. 20, 2006].

U.S. NRC (U.S. Nuclear Regulatory Commission). 2006. An Approach for Determining the Technical Adequacy of Probabilistic Risk Assessment Results for Risk-Informed Activities. Draft Regulatory Guide DG-1161. Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC [online]. Available: http://ruleforum.llnl.gov/cgi-bin/downloader/rg_lib/123-0198.pdf [accessed Oct. 20, 2006].

Vesely, W.E., F.F. Goldberg, N.H. Roberts, and D.F. Haasl. 1981. Fault Tree Handbook. NUREG-0492. Systems and Reliability Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC. January 1981 [online]. Available: http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr0492/sr0492.pdf [accessed Oct. 23, 2006].

Wallace, R.B., ed. 1998. Maxcy-Rosenau-Last Public Health and Preventive Medicine, 14th Ed. Stamford, CT: Appleton and Lange.

Wiggins, J. 1985. ESA Safety Optimization Study. HEI-685/1026. Hernandez Engineering, Houston, TX.


Wittenberg, E., S.J. Goldie, B. Fischhoff, and J.D. Graham. 2003. Rationing decisions and individual responsibility in illness: Are all lives equal? Med. Decis. Making 23(3):194-221.
