PART IV: APPENDIX C

C METHODS

The general methods for examining and interpreting the evidence on requirements for nutrients are presented in this appendix, with special attention given to approaches used to provide Dietary Reference Intakes (DRIs) where data are lacking for specific subgroups of the population (typically for infants, children, pregnant and lactating women, and older adults). Included as well are discussions of methodological problems in assessing requirements and estimating intakes from dietary survey data.

METHODOLOGICAL CONSIDERATIONS

Types of Data Used

The scientific data for developing the Dietary Reference Intakes (DRIs) have essentially come from observational and experimental studies in humans. Observational studies include single-case and case-series reports and cross-sectional, cohort, and case-control studies. Experimental studies include randomized and nonrandomized prevention trials and controlled dose–response, balance, turnover, and depletion–repletion physiological studies. Results from animal experiments are generally not applicable to the establishment of DRIs, but selected animal studies are considered in the absence of human data.

ANIMAL MODELS

Basic research using experimental animals affords considerable advantage in terms of control of nutrient exposures, environmental factors, and even genetics. In contrast, the relevance to free-living humans may be unclear. In addition, dose levels and routes of administration that are practical in animal experiments may differ greatly from those relevant to humans. Nevertheless, animal feeding experiments were sometimes included in the evidence reviewed to determine the ability to specify DRIs.

HUMAN FEEDING STUDIES

Controlled feeding studies, usually in a confined setting such as a metabolic unit, can yield valuable information on the relationship between nutrient consumption and health-related biomarkers. Much of the understanding of human nutrient requirements to prevent deficiencies is based on studies of this type. Studies in which the subjects are confined allow for close control of both intake and activities. Complete collections of nutrient losses through urine and feces are possible, as is recurring sampling of biological materials such as blood.

Nutrient balance studies measure nutrient status in relation to intake. Depletion–repletion studies, by contrast, measure nutrient status while subjects are maintained on diets containing marginally low or deficient levels of a nutrient; the deficit is then corrected with measured amounts of that nutrient. Unfortunately, these two types of studies have several limitations. Typically they are limited in time to a few days or weeks, and so longer-term outcomes cannot be measured with the same level of accuracy. In addition, subjects may be confined, and findings are therefore not always generalizable to free-living individuals. Finally, the time and expense involved in such studies usually limit the number of subjects and the number of doses or intake levels that can be tested.

In spite of these limitations, feeding studies play an important role in understanding nutrient needs and metabolism. Such data were considered in the DRI process and were given particular attention in the absence of reliable data to directly relate nutrient intake to disease risk.

OBSERVATIONAL STUDIES

In comparison to human feeding studies, observational epidemiological studies are frequently of direct relevance to free-living humans, but they lack the controlled setting. Hence they are useful in establishing evidence of an association between the consumption of a nutrient and disease risk but are limited in their ability to ascribe a causal relationship.
A judgment of causality may be supported by a consistency of association among studies in diverse populations, and it may be strengthened by the use of laboratory-based tools to measure exposures and confounding factors, rather than other means of data collection such as personal interviews. In recent years, rapid advances in laboratory technology have made possible the increased use of biomarkers of exposure, susceptibility, and disease outcome in molecular epidemiological research. For example, one area of great potential in advancing current knowledge of the effects of diet on health is the study of genetic markers of disease susceptibility (especially polymorphisms in genes encoding metabolizing enzymes) in relation to dietary exposures. This development is expected to provide more accurate assessments of the risk associated with different levels of intake of both nutrients and nonnutritive food constituents.

While analytic epidemiological studies (studies that relate exposure to disease outcomes in individuals) have provided convincing evidence of an associative relationship between selected nondietary exposures and disease risk, there
are a number of other factors that limit study reliability in research relating nutrient intakes to disease risk. First, the variation in nutrient intake may be rather limited in populations selected for study. This feature alone may yield modest relative risk trends across intake categories in the population, even if the nutrient is an important factor in explaining large disease rate variations among populations.

A second factor, one that gives rise to particular concerns about confounding, is the human diet's complex mixture of foods and nutrients, which includes many substances that may be highly correlated. Third, many cohort and case-control studies have relied on self-reports of diet, typically food records, 24-hour recalls, or diet history questionnaires. Repeated application of such instruments to the same individuals shows considerable variation in nutrient consumption estimates from one time period to another, with correlations often in the 0.3 to 0.7 range. In addition, there may be systematic bias in nutrient consumption estimates from self-reports, as the reporting of food intakes and portion sizes may depend on individual characteristics such as body mass, ethnicity, and age. For example, total energy consumption may tend to be substantially underreported (30 to 50 percent) among obese persons, with little or no underreporting among lean persons. Such systematic bias, in conjunction with random measurement error and limited intake range, has the potential to greatly impact analytic epidemiological studies based on self-reported dietary habits. Note that cohort studies using objective (biomarker) measures of nutrient intake may have an important advantage in the avoidance of systematic bias, though important sources of bias (e.g., confounding) may remain.
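The dampening effect of random reporting error on observed associations can be illustrated with the classical measurement-error (regression dilution) model. The sketch below is illustrative only; the function name and the reliability value in the example are hypothetical, not drawn from any particular survey:

```python
def attenuated_correlation(true_correlation: float, reliability: float) -> float:
    """Classical measurement-error model: if self-reported intake equals
    true intake plus independent random error, the observed
    intake-disease correlation is attenuated by the square root of the
    instrument's reliability (between-person variance / total variance)."""
    if not 0.0 <= reliability <= 1.0:
        raise ValueError("reliability must lie in [0, 1]")
    return true_correlation * reliability ** 0.5

# Hypothetical example: a true correlation of 0.5, measured with an
# instrument of reliability 0.49, is observed as roughly 0.35.
observed = attenuated_correlation(0.5, 0.49)
```

Under this simplified model, the repeat-administration correlations of 0.3 to 0.7 cited above imply that observed diet-disease associations may be substantially weaker than the true ones, quite apart from any systematic bias.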
RANDOMIZED CLINICAL TRIALS

By randomly allocating subjects to the (nutrient) exposure of interest, clinical trials eliminate the confounding that may be introduced in observational studies by self-selection. The unique strength of randomized trials is that, if the sample is large enough, the study groups will be similar not only with respect to those confounding variables known to the investigators, but also to any unknown factors that might be related to risk of the disease. Thus, randomized trials achieve a degree of control of confounding that is simply not possible with any observational design strategy, and they therefore allow for the testing of small effects that are beyond the ability of observational studies to detect reliably.

Although randomized controlled trials represent the accepted standard for studies of nutrient consumption in relation to human health, they too possess important limitations. Specifically, persons agreeing to be part of a randomized trial may be a select subset of the population of interest, thus limiting the generalization of trial results. For practical reasons, only a small number of nutrients or nutrient combinations at a single intake level are generally studied in a randomized trial (although a few intervention trials to compare specific dietary patterns have been initiated in recent years). In addition, the follow-up period will typically be short relative to the preceding time period of nutrient consumption that may be relevant to the health outcomes under study, particularly if chronic disease endpoints are sought. Also, dietary intervention or supplementation trials tend to be costly and logistically difficult, and the maintenance of intervention adherence can be a particular challenge.

Because of the many complexities in conducting studies among free-living human populations and the attendant potential for bias and confounding, it is the totality of the evidence from both observational and intervention studies, appropriately weighted, that must form the basis for conclusions about causal relationships between particular exposures and disease outcomes.

WEIGHING THE EVIDENCE

As a principle, only studies published in peer-reviewed journals were used in the original DRI series and, thus, as the basis for this book. However, studies published in other scientific journals or readily available reports were considered if they appeared to provide important information not documented elsewhere. To the extent possible, original scientific studies have been used to derive the DRIs. On the basis of a thorough review of the scientific literature, clinical, functional, and biochemical indicators of nutritional adequacy and excess were evaluated for each nutrient. The quality of each study was considered in weighing the evidence.
The characteristics examined included the study design and the representativeness of the study population; the validity, reliability, and precision of the methods used for measuring intake and indicators of adequacy or excess; the control of biases and confounding factors; and the power of the study to demonstrate a given difference or correlation. Publications solely expressing opinions were not used in setting DRIs. The assessment acknowledged the inherent reliability of each type of study design as described above, and it applied standard criteria from Hill concerning the strength, dose–response, and temporal pattern of estimated nutrient–disease or adverse effect associations, the consistency of associations among studies of various types, and the specificity and biological plausibility of the suggested relationships. For example, biological plausibility would not be sufficient in the presence of a weak association and a lack of evidence that exposure preceded the effect.

Data were examined to determine whether similar estimates of the requirement resulted from the use of different indicators and different types of studies. In the DRI model described in Part I, for a single nutrient, the criterion for setting the Estimated Average Requirement (EAR) may differ from one life stage
group to another because the critical function or the risk of disease may be different. When no or very poor data are available for a given life stage group, extrapolation is made from the EAR or Adequate Intake (AI) set for another group (see the section on extrapolation later in this appendix); explicit and logical assumptions about relative requirements were made. Because EARs, unlike AIs, can be used for multiple purposes, they were established whenever sufficient supporting data were available.

DATA LIMITATIONS

Although the reference values in the original DRI report series were based on data, the data were often scanty or drawn from studies that had limitations in addressing the various questions that confronted the DRI panels. Therefore, many of the questions raised about the requirements for and recommended intakes of these nutrients cannot be answered fully. Apart from studies of overt deficiency diseases, there is a dearth of studies that address specific effects of inadequate intakes on specific indicators of health status, and thus a research agenda was proposed in each of the original DRI series reports. For many of the nutrients in the DRI reports, estimated requirements are based on factorial, balance, and biochemical indicator data because there is little information relating health status indicators to functional sufficiency or insufficiency. Thus, after careful review and analysis of the evidence, including examination of the extent of congruent findings, scientific judgment was used to determine the basis for establishing the values.

Method for Determining the Adequate Intake for Infants

The AI for young infants is generally taken to be the average intake by full-term infants who are born to healthy, well-nourished mothers and who are exclusively fed human milk.
The extent to which intake of a nutrient from human milk may exceed the actual requirements of infants is not known, and the ethics of experimentation preclude testing levels known to be potentially inadequate. Using the infant exclusively fed human milk as a model is in keeping with the basis for earlier recommendations for intake. It also supports the recommendation that exclusive intake of human milk is the preferred method of feeding for normal full-term infants for the first 4 to 6 months of life. This recommendation has been made by the Canadian Paediatric Society, the American Academy of Pediatrics, the Institute of Medicine, and many other expert groups, even though most U.S. babies no longer receive human milk by age 6 months.

In general, this book does not cover possible variations in physiological need during the first month after birth or the variations in intake of nutrients
from human milk that result from differences in milk volume and nutrient concentration during early lactation.

In keeping with the decision made by the Standing Committee on the Scientific Evaluation of Dietary Reference Intakes, specific recommended intakes to meet the needs of formula-fed infants were not set. The use of formula introduces a large number of complex issues, one of which is the bioavailability of different forms of the nutrient in different formula types.

AGES 0 THROUGH 6 MONTHS

To derive the AI for infants ages 0 through 6 months, the mean intake of a nutrient was calculated based on (1) the average concentration of the nutrient from 2 to 6 months of lactation, using consensus values from several reported studies if possible, and (2) an average volume of milk intake of 0.78 L/day. This volume was reported from studies that used test weighing of full-term infants; in this procedure, the infant is weighed before and after each feeding. Because there is variation in both the composition of milk and the volume consumed, the computed value represents the mean. It is expected that infants will consume increased volumes of human milk during growth spurts.

AGES 7 THROUGH 12 MONTHS

During the period of infant growth and gradual weaning to a mixed diet of human milk and solid foods from ages 7 through 12 months, there is no evidence for markedly different nutrient needs. The AI can be derived for this age group by calculating the sum of (1) the content of the nutrient provided by 0.6 L/day of human milk, which is the average volume of milk reported from studies of infants receiving human milk in this age category, and (2) that provided by the usual intakes of complementary weaning foods consumed by infants in this age category.
Such an approach is in keeping with the current recommendations of the Canadian Paediatric Society, the American Academy of Pediatrics, and the Institute of Medicine for continued feeding of infants with human milk through 9 to 12 months of age with appropriate introduction of solid foods. The World Health Organization recommends the introduction of solid foods after 6 months of age.

For some of the nutrients in other DRI reports, two other approaches were considered as well: (1) extrapolation downward from the EAR for young adults by adjusting for metabolic or total body size and growth and adding a factor for variability, and (2) extrapolation upward from the AI for infants ages 0 through 6 months by using the same type of adjustment. Both of these methods are described below. The results of the methods are evaluated in the process of setting the AI.
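The milk-based AI arithmetic described above can be written out as a short sketch. The milk volumes (0.78 and 0.6 L/day) come from the text; the function names and the nutrient values in the example are illustrative, not taken from any DRI report:

```python
def ai_0_to_6_months(milk_concentration_per_l: float,
                     milk_volume_l_per_day: float = 0.78) -> float:
    """AI for ages 0-6 mo: mean nutrient concentration in human milk
    times the average reported milk intake of 0.78 L/day."""
    return milk_concentration_per_l * milk_volume_l_per_day

def ai_7_to_12_months(milk_concentration_per_l: float,
                      intake_from_foods: float,
                      milk_volume_l_per_day: float = 0.6) -> float:
    """AI for ages 7-12 mo: nutrient supplied by 0.6 L/day of human
    milk plus the usual intake from complementary weaning foods."""
    return milk_concentration_per_l * milk_volume_l_per_day + intake_from_foods

# Hypothetical nutrient at 0.25 mg/L in milk, with 0.1 mg/day coming
# from complementary foods in the older age group:
ai_young = ai_0_to_6_months(0.25)        # 0.25 * 0.78 = 0.195 mg/day
ai_older = ai_7_to_12_months(0.25, 0.1)  # 0.25 * 0.6 + 0.1 = 0.25 mg/day
```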
Method for Extrapolating Data from Younger to Older Infants

When information is not available on the nutrient intake of older infants, intake data can be extrapolated from younger to older infants. The metabolic weight ratio method involves metabolic scaling but does not include an adjustment for growth, because it is based on a value for a growing infant. To extrapolate from the AI for infants ages 0 through 6 months to an AI for infants ages 7 through 12 months, the following formula is used:

AI(7–12 mo) = AI(0–6 mo) × F, where F = (Weight(7–12 mo) / Weight(0–6 mo))^0.75

Method for Extrapolating Data from Adults to Children

SETTING THE AI FOR CHILDREN

When data are lacking to set an EAR or AI for children and adolescents, the values can often be extrapolated from adult values. The EAR or AI can be extrapolated down by scaling requirements to the 0.75 power of body mass, which adjusts for metabolic differences demonstrated to be related to body weight. Other approaches include extrapolating down based on reference body weights, which has been done in developing ULs for some nutrients, and extrapolating on the basis of energy intake.

Methods for Determining Increased Needs for Pregnancy

It is known that the placenta actively transports certain nutrients from the mother to the fetus against a concentration gradient. However, for many nutrients, experimental data that could be used to set an EAR and RDA or an AI for pregnancy are lacking. In these cases, the potential increased need for these nutrients during pregnancy is based on theoretical considerations, including obligatory fetal transfer, if data are available, and on increased maternal needs related to increases in energy or protein metabolism, as applicable. Thus, in some cases, the EAR can be determined based on the additional weight gained during pregnancy.
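The metabolic-weight scaling that underlies both extrapolation methods in this section can be sketched in a few lines. The body weights in the example are illustrative placeholders, not the official DRI reference weights:

```python
def metabolic_weight_factor(target_weight_kg: float,
                            reference_weight_kg: float) -> float:
    """F = (target weight / reference weight) ** 0.75: scaling to the
    0.75 power of body mass adjusts for metabolic differences related
    to body size."""
    return (target_weight_kg / reference_weight_kg) ** 0.75

def extrapolate_requirement(reference_value: float,
                            target_weight_kg: float,
                            reference_weight_kg: float) -> float:
    """Extrapolated AI (or EAR) = reference value * F.  Used, e.g., to
    move from the AI for ages 0-6 mo to ages 7-12 mo, or downward from
    adult values to children."""
    return reference_value * metabolic_weight_factor(target_weight_kg,
                                                     reference_weight_kg)

# Illustrative weights only: extrapolating an AI of 0.2 mg/day from a
# 6-kg younger infant to a 9-kg older infant.
ai_older = extrapolate_requirement(0.2, 9.0, 6.0)  # 0.2 * (9/6) ** 0.75
```

Note that F is greater than 1 when extrapolating upward to a heavier group and less than 1 when extrapolating downward, so the same function covers both directions.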
Methods for Determining Increased Needs for Lactation

For most nutrients, it is assumed that the total nutrient requirements of lactating women equal the requirements for nonpregnant, nonlactating women of similar age plus an increment to cover the amount needed for milk production.

ESTIMATES OF NUTRIENT INTAKES

Reliable and valid methods of food composition analysis are crucial in determining the intake of a nutrient needed to meet a requirement.

Methodological Considerations

The quality of nutrient intake data varies widely across studies. The most valid intake data are those collected under metabolic study protocols in which all food is provided by the researchers, amounts consumed are measured accurately, and the nutrient composition of the food is determined by reliable and valid laboratory analyses. Such protocols are usually possible with only a few subjects. Thus, in many studies, intake data are self-reported (e.g., through 24-hour recalls of food intake, diet records, or food frequency questionnaires).

Potential sources of error in self-reported intake data include over- or underreporting of portion sizes and frequency of intake, omission of foods, and inaccuracies related to the use of food composition tables. In addition, because a high percentage of the food consumed in the United States and Canada is not prepared from scratch in the home, errors can occur due to a lack of information on how a food was manufactured, prepared, and served. Therefore, the values reported by nationwide surveys or studies that rely on self-report are often inaccurate and possibly biased, with a greater tendency to underestimate actual intake.

Adjusting for Day-to-Day Variation

Because of day-to-day variation in dietary intakes, the distribution of 1-day (or 2-day) intakes for a group is wider than the distribution of usual intakes, even though the mean of the intakes may be the same.
To reduce this problem, statistical adjustments have been developed that require at least 2 days of dietary data from a representative subsample of the population of interest. However, no accepted method is available to adjust for the underreporting of intake, which may average as much as 20 percent for energy.
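A simple version of such an adjustment (a sketch in the spirit of the National Research Council's 1986 procedure; production surveys use more elaborate methods, such as the Iowa State University method) shrinks each person's multi-day mean toward the group mean so that the adjusted distribution has approximately the between-person variance of usual intake:

```python
import math
import statistics

def adjust_for_day_to_day_variation(day1, day2):
    """Given two days of intake per person, estimate the usual-intake
    distribution by removing within-person (day-to-day) variance.
    Returns one adjusted intake value per person."""
    person_means = [(a + b) / 2 for a, b in zip(day1, day2)]
    grand_mean = statistics.mean(person_means)
    # Within-person variance, estimated from paired day-to-day differences.
    within_var = statistics.mean([(a - b) ** 2 / 2 for a, b in zip(day1, day2)])
    # Variance of 2-day means = between-person variance + within_var / 2.
    means_var = statistics.variance(person_means)
    between_var = max(means_var - within_var / 2, 0.0)
    shrink = math.sqrt(between_var / means_var) if means_var > 0 else 0.0
    # Pull each person's mean toward the group mean: the group mean is
    # unchanged, but the spread narrows to that of usual intake.
    return [grand_mean + shrink * (m - grand_mean) for m in person_means]
```

With identical intakes on both days the shrinkage factor is 1 and the 2-day means are returned unchanged; the noisier the day-to-day data, the more each individual is pulled toward the group mean. As the text notes, no comparable correction exists for systematic underreporting.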
DIETARY INTAKES IN THE UNITED STATES AND CANADA

Sources of Dietary Intake Data

At the time the original DRI reports were published, the major sources of current dietary intake data for the U.S. population were the National Health and Nutrition Examination Survey (NHANES), conducted by the U.S. Department of Health and Human Services, and the Continuing Survey of Food Intakes by Individuals (CSFII), conducted by the U.S. Department of Agriculture (USDA). Both surveys used the food composition database developed by USDA to calculate nutrient intakes. National survey data for Canada for these nutrients were collected in 10 provinces.

Sources of Supplement Intake Data

Data on supplement use were obtained from the 1986 National Health Interview Survey, involving 11,558 adults and 1,877 children. Participants were asked about their use of supplements during the previous two weeks, and supplement composition was obtained from product labels whenever possible.

Food Sources

For some nutrients, two types of information are provided about food sources: identification of the foods that are the major contributors of the nutrients to diets in the United States and Canada, and identification of the foods that contain the highest amounts of the nutrient. The determination of foods that are major contributors depends on both the nutrient content of a food and the total consumption of the food (amount and frequency). Therefore, a food that has a relatively low concentration of the nutrient might still be a large contributor to total intake if that food is consumed in relatively large amounts.

METHODS TO DETERMINE UPPER LEVELS

The Tolerable Upper Intake Level (UL) refers to the highest level of daily nutrient intake that is likely to pose no risk of adverse health effects for almost all people in a population. As intake increases above the UL, the potential risk of adverse effects increases.
Risk Assessment Model

The model used to derive the ULs consists of a set of scientific factors that are considered explicitly. The factors are organized into a framework called risk assessment. In determining ULs, risk assessment is used to systematically evaluate the likelihood of adverse effects due to excess exposure to a nutrient. The steps used in risk assessment are summarized in Figure C-1 and explained in more detail in the text that follows.

FIGURE C-1 Risk assessment model for nutrient toxicity.
• Hazard Identification: determination of adverse health effects caused by high intakes of the nutrient or food component.
• Dose–Response Assessment: selection of the critical data set; identification of the NOAEL (or LOAEL); assessment of uncertainty (UF); derivation of the Tolerable Upper Intake Level.
• Intake Assessment: evaluation of the range and the distribution of human intakes of the nutrient or food component.
• Risk Characterization: estimation of the fraction of the population, if any, with intakes greater than the UL, and evaluation of the magnitude by which these excess intakes exceed the UL.

STEP 1: HAZARD IDENTIFICATION

In this step, a thorough review of the scientific literature is conducted to identify adverse health effects caused by consuming excess amounts of the nutrient
in question. Data from human, animal, and in vitro research are examined, and scientific judgment is used to determine which observed effects are adverse. In addition, adverse nutrient–nutrient interactions are considered in defining an adverse effect. When available, data regarding the rate of nutrient absorption, distribution, metabolism, and excretion may also be used to help identify potential hazards. Any available knowledge of the molecular and cellular mechanisms by which a nutrient causes an adverse effect may also be identified. The scientific quality and quantity of the database are evaluated as well. Finally, distinct subgroups that are highly sensitive to the adverse effects of high nutrient intake are identified.

STEP 2: DOSE–RESPONSE ASSESSMENT

At this stage, the data most critical to setting the UL are selected. These data are chosen based on their relevance to humans, the route of expected intake, and the expected magnitude and duration of intake. Once the critical data have been chosen, a threshold "dose," or intake, is determined. For nutrients, a key assumption underlying risk assessment is that no risk of adverse effects is expected unless the threshold dose, or intake, is exceeded.

When possible, a no-observed-adverse-effect level (NOAEL) is identified. This is the highest intake (or experimental oral dose) of a nutrient at which no adverse effects have been observed in the people studied. If there are not enough data to select a NOAEL, then a lowest-observed-adverse-effect level (LOAEL) may be used. The LOAEL is the lowest intake (or experimental dose) at which an adverse effect has been identified.

Uncertainty Factors

Because the UL is intended to be an estimate of the level of intake that will protect the health of virtually all healthy members of a population, a critical part of risk assessment is accounting for the uncertainty that is inherent in the process.
In addition, the fact that excessive levels of a nutrient can cause more than one adverse effect must be considered. The NOAELs and LOAELs for these distinct effects will typically differ. To help account for such variations, an uncertainty factor (UF) is selected. The UF is intended to incorporate all potential sources of uncertainty. In general, UFs are lower when the available data are of high quality and when the adverse effects of the nutrient are extremely mild and reversible. When determining a UF, the following potential sources of uncertainty are generally considered:
• Individual variations in sensitivity to a nutrient
• Extrapolation from experimental animal data to humans, when animal data constitute the primary evidence available
• Absence of a NOAEL (to account for the uncertainty of deriving a UL from a LOAEL)
• Use of data showing the effects of subchronic nutrient exposures (NOAEL) to predict the potential effects of chronic exposure

The UL is derived by dividing the NOAEL (or LOAEL) by a single UF that incorporates all the relevant uncertainties. Scientific judgment is used to derive the appropriate NOAELs, LOAELs, and UFs. The considerations and uncertainties that are accounted for in the setting of ULs are detailed in the original DRI reports.

STEP 3: INTAKE ASSESSMENT

Information on the nutrient intake of the population is assessed. In cases where the UL pertains only to supplemental intake of the nutrient (as opposed to intake from food), the assessment is directed at supplement intakes only.

STEP 4: RISK CHARACTERIZATION

Several factors are considered to determine whether nutrient intakes create a risk of adverse effects for a population:

• The fraction of the group consistently consuming the nutrient at levels in excess of the UL
• The seriousness of the adverse effects associated with the nutrient
• The extent to which the effect is reversible when intakes are reduced to levels less than the UL
• The fraction of the population with consistent intakes above the NOAEL or even the LOAEL
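The final arithmetic of the dose–response step is simple enough to state in code. The sketch below uses a hypothetical NOAEL and UF, not values from any DRI report:

```python
def tolerable_upper_intake_level(noael_or_loael: float,
                                 uncertainty_factor: float) -> float:
    """UL = NOAEL (or LOAEL) / UF.  A larger composite UF -- reflecting
    weaker data or more serious, less reversible adverse effects --
    produces a lower, more conservative UL."""
    if uncertainty_factor < 1.0:
        raise ValueError("the composite UF must be at least 1")
    return noael_or_loael / uncertainty_factor

# Hypothetical nutrient: a NOAEL of 1,000 mg/day and a composite UF of 2
# give a UL of 500 mg/day.
ul = tolerable_upper_intake_level(1000.0, 2.0)
```

Note that all the separate uncertainty sources listed above are folded into the single composite UF before the division, rather than being applied as a chain of separate divisors.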