problem unless a threshold dose is exceeded. All substances would express toxicity at sufficiently high doses, but under the Lehman-Fitzhugh model, all such substances would be safe (i.e., pose no significant risks) unless the threshold dose was exceeded. The problem they attempted to solve was to identify the threshold dose for a large and variable human population.
This threshold model was not applied to carcinogens. Exposure to carcinogens at any level above zero was thought to increase the probability of a carcinogenic process moving toward completion. This gave rise to the phrase “no safe level” and to the Delaney Clause, which required zero tolerance for any intentionally introduced food additive that could be demonstrated to cause cancer in laboratory animals or in humans. For this reason, regulatory agencies often avoided dealing with carcinogens: they banned them where the hazard was easy to identify, ignored them, or resorted to criteria unrelated to health for decision-making. In 1973 FDA developed a model of the relationship between exposure and carcinogenic risk that assumed the absence of a threshold and a direct proportionality between dose and risk. Using this model, FDA determined that human health could still be protected at a very small, predetermined level of risk, and that scientific uncertainties would be addressed with conservative, health-protective assumptions.
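The contrast between the two dose-response assumptions above can be sketched in a few lines of code. This is an illustration only, not any agency's actual model; the slope and threshold values are invented for the example.

```python
def threshold_risk(dose, threshold, slope):
    """Threshold model: no significant risk at or below the threshold
    dose; above it, risk rises with the excess dose."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

def linear_no_threshold_risk(dose, slope):
    """Linear no-threshold model: any dose above zero carries some
    risk, in direct proportion to the dose."""
    return slope * dose

# A very low dose is "safe" under the threshold model but still
# carries a small, nonzero risk under the no-threshold model:
low_dose = 0.01
print(threshold_risk(low_dose, threshold=1.0, slope=1e-3))  # 0.0
print(linear_no_threshold_risk(low_dose, slope=1e-3))       # 1e-05
```

The qualitative point is the regulatory one in the text: under the no-threshold assumption there is no exposure level with exactly zero risk, so "safety" must instead be defined as risk below some very small, predetermined level.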
In 1983, in response to a Congressional request to set up a separate, nonfederal institution to conduct risk assessments and keep them “untainted” by the regulatory process, the National Academy of Sciences published a report titled Risk Assessment in the Federal Government. This report, for the first time, clearly elucidated a framework for both the risk assessment and risk management processes. An updated version of the report was published by the Academy in 1994 that further promoted the rise of explicit regulatory guidelines for risk assessments, to ensure that risk assessments would not be manipulated, on a case-by-case basis, to achieve predetermined regulatory outcomes.
The risk assessment and management processes were developed for two major reasons. One of the most important is that, in almost all cases, it is beyond current technological capabilities to directly measure risks to large populations from chemical agents, pathogens, and other hazards. Without going through the risk assessment process, there is no scientific basis for regulatory decision-making. Another reason is that statutes require premarket determinations of safety, so that the level of risk a substance poses to human health can be evaluated prior to exposure.
Initial risk evaluation of an agent involves defining its characteristics, specifically its inherent hazardous properties. This includes describing the kind of toxicity or the type of illness it causes, as well as whether the information is derived from human, animal, or other studies. Further evaluation takes the form of a dose-response assessment, which defines how the severity or incidence (or both) of adverse effects changes with exposure conditions.
The final stage in the evaluation of an agent is the risk characterization process that estimates the risks involved as well as describes the potential uncertainties to the population being evaluated. This step defines the distribution of a population around a predetermined threshold or estimates the probability of an effect to the population over a period of time. It answers the question of how many people might be affected by this agent and to what degree.
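The counting question at the end of risk characterization ("how many people might be affected?") reduces, in its simplest form, to multiplying an estimated individual risk by the size of the exposed population. The sketch below uses invented figures purely for illustration.

```python
def expected_cases(individual_risk, population_size):
    """Expected number of affected people over the period covered by
    the individual risk estimate (e.g., a lifetime excess risk)."""
    return individual_risk * population_size

# Hypothetical example: a one-in-a-million individual lifetime risk
# applied to an exposed population of 250 million people.
print(expected_cases(1e-6, 250_000_000))  # 250.0
```

In practice this step also carries the uncertainty description the text mentions: the individual-risk input is itself an estimate with a distribution, so the population figure is a central estimate, not a count.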
From this information, risk management decisions can be made about exposure levels that pose insignificant risks for large populations, taking into account not just the data, but its limitations and applicability to large populations.
In analyzing and working with the data, two areas requiring special consideration are accounting for variability and identifying exceptions. Currently, adequate research is not available to provide data on distributions for either thresholds or effects in populations or to