the hazardous effects of substances and to account for other uncertainties (WHO, 1987). Uncertainty factors are used to make inferences about the threshold dose of substances for members of a large and diverse human population from data on adverse effects obtained in epidemiological or experimental studies. These factors are applied consistently when data of specific types and quality are available. They are typically used to derive acceptable daily intakes for food additives and other substances for which data on adverse effects are considered sufficient to meet minimum standards of quality and completeness (FAO/WHO, 1982). These adopted or recognized uncertainty factors have sometimes been coupled with other factors to compensate for deficiencies in the available data and other uncertainties regarding data.
When possible, the UL is based on a no-observed-adverse-effect level (NOAEL), which is the highest intake (or experimental oral dose) of a nutrient at which no adverse effects have been observed in the individuals studied. This is identified for a specific circumstance in the hazard identification and dose-response assessment steps of the risk assessment. If there are no adequate data demonstrating a NOAEL, then a lowest-observed-adverse-effect level (LOAEL) may be used. A LOAEL is the lowest intake (or experimental oral dose) at which an adverse effect has been identified. The derivation of a UL from a NOAEL (or LOAEL) involves a series of choices about what factors should be used to deal with uncertainties. Uncertainty factors (UFs) are applied in an attempt to deal both with gaps in data and with incomplete knowledge regarding the inferences required (for example, the expected variability in response within the human population). The problems of both data and inference uncertainties arise in all steps of the risk assessment. A discussion of options available for dealing with these uncertainties is presented below and in greater detail in Appendix B.
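The arithmetic underlying this derivation is simply division of the reference dose by the product of the applicable uncertainty factors. The following sketch illustrates it; the nutrient values and UFs shown are hypothetical and are not drawn from any actual risk assessment:

```python
def derive_ul(reference_dose_mg, uncertainty_factors):
    """Derive a UL by dividing a NOAEL (or LOAEL) by the combined
    uncertainty factor, i.e., the product of the individual UFs."""
    combined_uf = 1.0
    for uf in uncertainty_factors:
        combined_uf *= uf
    return reference_dose_mg / combined_uf

# Hypothetical example: a NOAEL of 60 mg/day with a single UF of 2
# for expected intraspecies variability yields a UL of 30 mg/day.
print(derive_ul(60.0, [2.0]))        # 30.0

# Starting from a LOAEL typically requires an additional UF, here 5,
# to extrapolate to a presumed NOAEL (again, values are illustrative).
print(derive_ul(100.0, [2.0, 5.0]))  # 10.0
```

Note that the choice and magnitude of each UF is a scientific judgment made in the risk assessment itself; the computation merely records that judgment.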
A UL is not, in itself, a description of human risk. It is derived by application of the hazard identification and dose-response evaluation steps (Steps 1 and 2) of the risk assessment model. To determine whether populations are at risk requires an intake or exposure assessment (Step 3, evaluation of intakes of the nutrient by the population) and a determination of the fractions of those populations, if any, whose intakes exceed the UL. In the intake assessment and risk characterization steps (Steps 3 and 4), the distribution of actual intakes for the population is used as a basis in determining whether and to what extent the population is at risk.
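In quantitative terms, Steps 3 and 4 reduce to comparing the distribution of observed intakes against the UL and reporting the fraction of the population above it. A minimal sketch, using hypothetical survey data (the intake values and UL below are illustrative only):

```python
def fraction_exceeding_ul(intakes_mg, ul_mg):
    """Return the proportion of reported daily intakes above the UL."""
    above = sum(1 for intake in intakes_mg if intake > ul_mg)
    return above / len(intakes_mg)

# Hypothetical total daily intakes (mg) for a surveyed group,
# compared against an assumed UL of 30 mg/day.
intakes = [12.0, 25.0, 31.5, 18.0, 40.0, 28.0, 33.0, 22.0]
print(fraction_exceeding_ul(intakes, 30.0))  # 0.375
```

In practice, intake assessment uses nationally representative survey distributions rather than a small sample, but the characterization of risk rests on the same comparison.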
Application of the Risk Assessment Model to Nutrients
This section provides guidance for applying the risk assessment framework (the model) to the derivation of ULs for nutrients.
Special Problems Associated with Substances Required for Human Nutrition
Although the risk assessment model outlined above can be applied to nutrients to derive ULs, it must be recognized that nutrients possess some properties that distinguish them from the types of agents for which the risk assessment model was originally developed (NRC, 1983). In the application of accepted standards for assessing risks of environmental chemicals to the risk assessment of nutrients and food components, a fundamental difference between the two categories must be recognized: within a certain range of intakes, many nutrients are essential for human well-being and usually for life itself. Nonetheless, they may share with other chemicals the production of adverse effects at excessive exposures. Because the consumption of balanced diets is consistent with the development and survival of humankind over many millennia, there is less need for the large uncertainty factors that have been used in the typical risk assessment of nonessential chemicals. In addition, if data on the adverse effects of nutrients are available primarily from studies in human populations, there will be less uncertainty than is associated with the types of data available on nonessential chemicals.
There is no evidence to suggest that nutrients consumed at the recommended intake (the RDA or AI) present a risk of adverse effects to the general population. It is clear, however, that the addition of nutrients to a diet, either through the ingestion of large amounts of highly fortified food or nonfood sources such as supplements, or both, may (at some level) pose a risk of adverse health effects. The UL is the highest level of daily nutrient intake that is likely to pose no risk of adverse health effects to almost all individuals in the general population. As intake increases above the UL, the risk of adverse effects increases.
If adverse effects have been associated with total intake, ULs are based on total intake of a nutrient from food, water, and supplements. For cases in which adverse effects have been associated with intake only from supplements and/or food fortificants, the UL is based on intake from those sources only, rather than on total intake. The effects of nutrients from fortified foods or supplements may differ from those of naturally occurring constituents of foods because of several factors: the chemical form of the nutrient, the timing of the intake and amount consumed in a single bolus dose, the matrix supplied by the food, and the relation of the nutrient to the other constituents of the diet. Nutrient requirements and food intake are related to the metabolizing body mass, which is also at least an indirect measure of the space in which the nutrients are distributed. This relation between food intake and space of distribution supports homeostasis, which maintains nutrient concentrations in that space within a range compatible with health. However, excessive intake of a single nutrient from supplements or fortificants may compromise this homeostatic mechanism. Such elevations alone may pose risks of adverse effects; imbalances among the concentrations of mineral elements (for example, calcium, iron, zinc, and copper) can result in additional risks (Mertz et al., 1994). These reasons and those discussed previously support the need to include the form and pattern of consumption in the assessment of risk from high nutrient intake.
Consideration of Variability in Sensitivity
The risk assessment model outlined in this paper is consistent with classical risk assessment approaches in that it must consider variability in the sensitivity of individuals to adverse effects of nutrients. A discussion of how variability is dealt with in the context of nutritional risk assessment follows.
Physiological changes and common conditions associated with growth and maturation that occur during an individual's lifespan may influence sensitivity to nutrient toxicity. For example, (1) sensitivity increases with declines in lean body mass and with declines in renal and liver function that occur with aging; (2) sensitivity changes with alterations in intestinal absorption or intestinal synthesis of nutrients (for example, vitamin K and biotin); (3) in the newborn infant, sensitivity is also increased because of rapid brain growth and limited ability to secrete or biotransform toxicants; and (4) sensitivity increases with decreases in the rate of metabolism of nutrients. During pregnancy, the increase in total body water and glomerular filtration results in lower blood levels of water-soluble vitamins dose for dose and, therefore, in reduced susceptibility to potential adverse effects. However, in the fetus this reduced susceptibility may be offset by active placental transfer, accumulation of certain nutrients in the amniotic fluid, and rapid development of the brain. Examples of life stage groups that may differ in terms of nutritional needs and toxicological sensitivity include infants and children, the elderly, and women during pregnancy and lactation.
Even within relatively homogeneous life stage groups, there is a range of sensitivities to toxic effects. The model described below accounts for normally expected variability in sensitivity, but it excludes subpopulations with extreme and distinct vulnerabilities. Such subpopulations consist of individuals needing medical supervision; they are better served through the use of public health screening, product labeling, or other individualized health care strategies. (Such populations may not be at "negligible risk" when their intakes reach the UL developed for the healthy population.) The decision to treat identifiable vulnerable subgroups as distinct (not protected by the UL) is a matter of judgment and is made evident in the rationale provided for characterizing the UL.
In the context of toxicity, the bioavailability of an ingested nutrient can be defined as its accessibility to normal metabolic and physiological processes. Bioavailability influences a nutrient's beneficial effects at physiological levels of intake and also may affect the nature and severity of toxicity due to excessive intakes. Factors that affect bioavailability include the concentration and chemical form of the nutrient, the nutrition and health of the individual, and excretory losses. Bioavailability data for specific nutrients must be considered and incorporated into the risk assessment process.
Some nutrients, for example, folate, may be less readily absorbed when they are part of a meal than when taken separately. Supplemental forms of some nutrients, such as some of the B vitamins, phosphorus, or magnesium, may require special consideration if they have higher bioavailability and therefore may present a higher risk of producing adverse effects than equivalent amounts from the natural form found in food.
A diverse array of adverse health effects can occur as a result of the interaction of nutrients. The potential risks of adverse nutrient-nutrient interactions increase when there is an imbalance in the intake of two or more nutrients. Excessive intake of one nutrient may interfere with absorption, excretion, transport, storage, function, or metabolism of a second nutrient. For example, dietary interactions can affect the chemical forms of elements at the site of absorption through ligand binding or changes in the valence state of an element (Mertz et al., 1994). Phytates, phosphates, and tannins are among the most powerful depressants of bioavailability, and organic acids, such as citric and ascorbic acid, are strong enhancers for some minerals and trace elements. Thus dietary interactions strongly influence the bioavailability of elements by affecting their partitioning between the absorbed and the nonabsorbed portions of the diet. The large differences in bioavailability that ensue from these interactions support the need to specify the chemical form of the nutrient when setting ULs. Dietary interactions can also alter nutrient bioavailability through their effect on excretion. For example, dietary intakes of protein, phosphorus, sodium, and chloride all affect urinary calcium excretion and hence calcium bioavailability. Interactions that significantly elevate or reduce bioavailability may represent adverse health effects.
Although it is critical to include knowledge of any such interactions in the risk assessment, it is difficult to evaluate the possibility of interactions without reference to a particular level of intake. This difficulty can be overcome if a UL for a nutrient or food component is first derived based on other measures of