The Development of DRIs 1994–2004: Lessons Learned and New Challenges: Workshop Summary (2008)

2 Conceptual Framework for DRI Development: Session 1

Prior to the workshop, Session 1 participants were asked to consider several general questions (shown in Box 2-1) in preparing their presentations. Session 1 addressed both the conceptual underpinnings and several overarching “roadmap” issues as described in Workshop Introduction (see Chapter 1). The session was moderated by Dr. Stephanie Atkinson of McMaster University.

This chapter is an edited version of remarks presented by Drs. Russell, Trumbo, Yates, Beaton, King, Lichtenstein, and Yetley at the workshop. Discussions are composites of input from various panel members, discussants, presenters, moderators, and audience members.

Dr. Robert Russell of Tufts University discussed the pros and cons of the current framework for Dietary Reference Intake (DRI) development. Two case studies were then presented. Dr. Paula Trumbo, a former study director for DRI micronutrient, macronutrient, fiber, and water and electrolyte study committees who is now at the U.S. Food and Drug Administration (FDA), explored considerations when applying the DRI framework to chronic disease endpoints. Dr. Allison Yates, who served as director of the Food and Nutrition Board (FNB) from 1994 through 2003 and is now director of the Agricultural Research Service Human Nutrition Center at the U.S. Department of Agriculture (USDA), discussed applying the DRI framework to non-chronic disease endpoints.

Perspectives on the DRIs were offered by Dr. George Beaton and Dr. Janet King. Dr. Beaton is professor emeritus at the University of Toronto and has served as a consultant to the Institute of Medicine (IOM). Dr. King is senior scientist at the Children’s Hospital Oakland Research Institute and is a former chair of the FNB.

Dr. Alice Lichtenstein of Tufts University examined the issues in applying systematic evidence-based review (SEBR) approaches to DRI development. Dr. Elizabeth Yetley, a Senior Nutrition Research Scientist with the Office of Dietary Supplements at the National Institutes of Health, discussed whether risk assessment is a relevant organizing structure for the DRI development process.

Designated discussants followed Drs. Russell, Trumbo, and Yates, and a designated discussant engaged Drs. Lichtenstein and Yetley. In each case, the discussions were followed by input from the workshop audience. The session concluded with a panel discussion, at which point the session was again opened to the audience for comment.

BOX 2-1 General Questions for Session 1 Participants

Conceptual Underpinnings
• How has the Dietary Reference Intake (DRI) framework “held up” over time?
• What is the general purpose of the DRIs? Is it still for planning and assessing?
• Do the Estimated Average Requirements (EARs), Recommended Dietary Allowances (RDAs) and Tolerable Upper Intake Levels (ULs) continue to be desirable values? Is the Adequate Intake (AI) useful and needed? Does the Acceptable Macronutrient Distribution Range (AMDR) pave the way to considering macronutrients using a different approach?
• Should we continue to include chronic disease risk as an endpoint option? Should we explore multiple endpoints for the same age/gender group?
• Should the focus of the DRIs continue to expand beyond classic nutrients? Is a modified DRI approach needed to address macronutrients and nonessential nutrient substances?

Overarching Road Map Issues
• What is the role of systematic evidence-based reviews (SEBRs) in DRI development?
• Can an organizing scheme for DRI development be specified?

CURRENT FRAMEWORK FOR DRI DEVELOPMENT: WHAT ARE THE PROS AND CONS?

Presenter: Robert M. Russell

In 1994, two major changes were made to the development of reference values. One was that the values could be based on an endpoint associated with the risk of chronic disease. The second was that reference values in addition to the Recommended Dietary Allowance (RDA) would be provided to address the increasingly broad applications of reference values. However, these major changes to the DRI development process have both pros and cons.

Reference Values Expressed: EARs, RDAs, and AIs

The Estimated Average Requirement (EAR) is the level of intake for which the risk of inadequacy would be 50 percent. The RDA is two standard deviations (SDs) above the EAR, covering 97 percent of the population.

The Adequate Intake (AI) as a reference value was not envisioned until the lack of dose–response data precluded study committees from determining the level at which the risk of inadequacy would be 50 percent. This was often exacerbated by a lack of longitudinal studies. As a result, AIs were generally set when an EAR could not be established. These include calcium, vitamin D, chloride, chromium, fluoride, potassium, manganese, sodium, and vitamin K. An exception is the reference value for young infants, for whom AIs were specifically determined as opposed to developed when an EAR could not be developed. The AI for young infants has generally been the average intake by full-term infants born to healthy, well-nourished mothers and exclusively fed human milk. The only exception to this criterion is vitamin D, which occurs in low concentrations in human milk (IOM, 2006).

For calcium, an AI was issued due to uncertainty about methods used in older balance studies, a lack of concordance between observational and experimental data (i.e., the mean intakes of the population are lower than the values needed to achieve calcium retention), and a lack of longitudinal dose–response data to verify an association between the amounts needed for calcium retention and bone fracture or bone loss.

For vitamin D, an AI was developed because the study committee did not know how much dietary vitamin D is needed to maintain normal calcium metabolism and bone health, primarily because vitamin D is a complicated hormone: exposure to sunlight, skin pigmentation, the latitude at which one lives, and the amount of clothing one wears all affect the amount of vitamin D needed. Furthermore, there were uncertainties about the accuracy of the vitamin D food composition database and levels of food fortification.

When a chronic disease endpoint was selected as the basis for a reference value—which occurred for five nutrients—all of the reference values were AIs rather than EARs. Calcium and vitamin D AIs were set primarily on the basis of experimental data on bone density and fracture, fluoride on dental caries, potassium on hypertension, and fiber on coronary artery disease.

The selection of an endpoint for EARs presented some difficulties. A variety of endpoints were used. For example, maximum glutathione peroxidase activity was the endpoint used for selenium. A factorial approach was used for vitamin A, zinc, and iron. A factorial approach can take several forms but generally derives a total nutrient requirement by summing the individual physiological needs of various functional components (e.g., body maintenance, milk synthesis, skin sloughing). The maximum neutrophil concentration that would give minimal urinary loss was used to determine the vitamin C EAR. Physiological function was used for vitamin E (the level that would inhibit peroxide-induced hemolysis) and vitamin B12 (maintaining a normal hematological status).

The study committees encountered numerous data gaps. The prime one was the lack of defined health-related endpoints associated with status and a lack of biomarkers to define chronic disease. Age-specific data were lacking, so extrapolation was used. Also, there was a lack of information on variability of responses (needed to calculate RDAs). As already mentioned, another data gap was the lack of dose–response data (ending up with AIs) combined with a lack of long-term studies. Adding to this list are the lack of knowledge as to which systems dysfunction with excess, as seen with bone, and the lack of uniform rules on how to apply uncertainty factors.

Another problem has been extrapolation. Using the case of vitamin A, the AI for 0- to 6-month-olds is 400 µg retinol activity equivalents (RAEs) per day. The study committee extrapolated up for the 7- to 12-month-olds to get an AI of 500 µg RAE/day, which is very close to the tolerable upper intake level (UL) (based on bulging fontanels) of 600 µg RAE/day. In using these numbers, more than half the infants (4–5 months old) in the USDA’s Special Supplemental Nutrition Program for Women, Infants and Children (WIC) are eating above the UL, yet adverse effects on these infants have not been observed. Another odd observation is the lower requirement for 1- to 3-year-olds (300 µg RAE/day) than for 7- to 12-month-olds (500 µg RAE/day), because the AI for 7- to 12-month-olds was extrapolated up from 0- to 6-month-olds and the EAR for 1- to 3-year-olds was extrapolated down from the adult number. The validity of these numbers is therefore questionable.

The major challenge in deriving an RDA from an EAR is variance. To establish an RDA, one determines the EAR, assesses the variability, then calculates the RDA as the EAR plus two SDs. However, variance is not known for most nutrients, and a coefficient of variation (CV) is assumed instead. A 10 percent CV was assumed for thiamin, riboflavin, niacin, vitamin B6, folate, vitamin B12, vitamin C, vitamin E, selenium, and zinc.

The CV is known for some nutrients, such as vitamin A. Although the study committee was initially enthusiastic about using a physiological endpoint (abnormal dark adaptation) for determining an EAR for vitamin A, the pooled data from four studies gave a CV of 40 percent. Therefore, the study committee decided not to use dark adaptation as the endpoint, and no EAR or RDA was established on this basis. Instead, a higher EAR (625 µg for men and 500 µg for women, compared with 300 µg) was determined using a factorial approach.

Reference Values Expressed: ULs

The UL is the highest level of daily nutrient intake that poses no risk of an adverse effect to almost any individual in the general population. It is not a recommended or desirable level of intake. It is derived by dividing a no-observed-adverse-effect level (NOAEL) or a lowest-observed-adverse-effect level (LOAEL) by an uncertainty factor.

A concern is that the uncertainty factor is subjective. The sources of uncertainty that the study committees considered were interindividual variation, extrapolation from animals to humans, short-term versus chronic exposures, use of a LOAEL instead of a NOAEL, small numbers of people studied, and the severity of the effects (the higher the severity, the higher the uncertainty factor). The example in Box 2-2 illustrates the subjectivity that study committee members face in trying to derive logical and scientifically valid numbers.

BOX 2-2 Vitamin A and Uncertainty Factors

Four adverse effects were considered in setting a tolerable upper intake level (UL) for vitamin A: bone mineral density, liver toxicity, teratogenicity (for women of reproductive age), and bulging fontanels (for infants). In a study of the daily dietary intake of retinol associated with risk for hip fracture in two populations in Sweden and the United States (Melhus et al., 1998), it was determined that there was a rise in the risk for hip fracture above a vitamin A intake of 1,500 µg/day. However, two other papers were unable to show any effect of vitamin A intake on bone mineral density (Sowers and Wallace, 1990; Ballew et al., 2001). Therefore, the United States decided to use liver toxicity as the critical effect for the general adult population and derived a UL of 3,000 µg/day (twice as high as the UL based on hip fracture).

For women of reproductive age, a UL of 3,000 µg/day for teratogenicity was determined based primarily on a study by Rothman et al. (1995). The United Kingdom (UK) panel decided that the Rothman et al. (1995) paper was biased and did not set any UL for teratogenicity, as it considered the evidence base inadequate. It suggested that intakes greater than 1,500 µg/day may be inappropriate and advised pregnant women not to take vitamin A supplements. The European Union (EU), looking at the same database used by the Institute of Medicine (IOM) study committee and the UK panel, established a UL of 3,000 µg/day, the lowest-observed-adverse-effect level (LOAEL) for teratogenicity based on the Rothman et al. (1995) paper. The EU did not use any uncertainty factor because it believed that data from other studies supported a true threshold of more than 3,000 µg/day and that this number covered the risk of hepatotoxicity.

Using the same paper, the IOM study committee determined the no-observed-adverse-effect level (NOAEL) for teratogenicity to be 4,500 µg/day and used an uncertainty factor of 1.5 to establish a UL of 3,000 µg/day. However, the IOM study committee had already decided to use 14,000 µg/day as the LOAEL for liver toxicity, with a high uncertainty factor of 5 because of the severity of the effect, resulting in a UL of 3,000 µg/day. Because the study committee believed it would be confusing to have women of reproductive age with one UL and all others with another UL, it somewhat adjusted the numbers to come out with the same UL.
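The arithmetic behind these derivations can be illustrated with a minimal Python sketch. The hypothetical EAR of 100 units below is an illustrative assumption; the vitamin A figures are those cited in Box 2-2, and the rules are the ones described above (RDA equals the EAR plus two SDs, with SD taken as CV times the EAR when only a CV is assumed; UL equals the NOAEL or LOAEL divided by an uncertainty factor).

def rda_from_ear(ear, cv=0.10):
    """RDA = EAR + 2 SD, where SD = CV * EAR when only a CV is assumed."""
    return ear * (1 + 2 * cv)

def ul_from_effect_level(effect_level, uncertainty_factor):
    """UL = NOAEL (or LOAEL) / uncertainty factor."""
    return effect_level / uncertainty_factor

# Hypothetical EAR of 100 units with the default 10 percent CV gives an RDA of 120.
print(rda_from_ear(100))                # 120.0

# Vitamin A teratogenicity figures from Box 2-2: NOAEL 4,500 µg/day, UF 1.5.
print(ul_from_effect_level(4500, 1.5))  # 3000.0 µg/day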

Applicability of the Framework to All Nutrient Substances

The framework did not “fit” well for establishing reference values for fat and macronutrients. Such substances are not essential and have no beneficial role, except for essential fatty acids and amino acids. Rather, an Acceptable Macronutrient Distribution Range (AMDR) for fat was determined to be 20–35 percent of calories. Furthermore, a UL was not provided for the effects of intakes of saturated fat or trans fat on low-density lipoprotein (LDL) cholesterol, as coronary heart disease (CHD) risk increases progressively. For fiber, an AI was set on the basis of heart disease prevention, as the effect on CHD occurs continuously across the range of intakes. No UL could be determined for fiber, because fiber intake is accompanied by phytate intake, a confounding factor. For the Estimated Energy Requirement (EER), the goal was to maintain a healthy weight at an acceptable level of physical activity. That is, the EER was based on energy balance (no weight gain), not on reduction of disease risk—a different type of paradigm than originally envisioned.

Selection of Endpoints

In general, the selection of endpoints was based on data availability. For ULs, the endpoints were frequently concerned with public health protection, often using a benign adverse effect (e.g., flushing rather than liver dysfunction) to be more protective.

The selection of endpoints was not based, for the most part, on the strength or consistency of evidence or on the severity or clinical importance of the endpoints. When the (ideal) data were lacking, the study committees still had to provide numbers; it was emphasized that “no decision was not an option.” This is because the numbers are needed for so many purposes, such as goals for individuals, dietary assessment and planning, food fortification, food assistance program evaluation, food labeling, agricultural policies, dietary guidance policies, and educational program planning.

In the future, it might be better to select endpoints more scientifically. The use of biomarkers that correlate with a disease or physiological state would be very helpful. The biomarkers should be attributable and responsive to the nutrient in question—key questions that can be answered using SEBRs. Further, SEBRs allow the ranking of the quality of the evidence according to the degree of confidence in the conclusion. If the biomarker is found to be valid, the dietary intake can be correlated with the biomarker, and the overall quality of the data can be ranked.

Systematic Evidence-Based Reviews

SEBRs can answer only limited types of questions. Nevertheless, they are independent and unbiased reviews of a defined topic by a group with no stake in the outcome. They can account for confounders (e.g., dietary supplements) in ranking. They can determine the validity of extrapolations or interpolations. They can increase the transparency of decisions made about specific endpoints, which increases the replicability of the data by other groups. The importance of the SEBR is illustrated in Box 2-3. (SEBRs are discussed in further detail in a separate presentation later in this chapter.)

BOX 2-3 β-Carotene Case Study and the Evidence-Based Review

The β-carotene trials were started on the basis of many epidemiological studies showing that the higher the β-carotene in the serum or diet, the lower the incidence of lung cancer in smokers. However, when an intervention trial was done with β-carotene at a fairly high dose, more lung cancers, not fewer, were found in the β-carotene group (Heinonen and Albanes, 1994). This was backed up by a second trial in the United States, the CARET trial, done in 1996 (Omenn et al., 1996).

Three years before the first of these trials, in 1991, the Food and Drug Administration (FDA) had looked at the large number of available studies (mostly retrospective or prospective epidemiological studies) with either cancer or premalignancy as the endpoint. The first criterion used to evaluate the studies was: Did they allow attribution of β-carotene per se to the observed health effects, not simply to diets or dietary patterns that were rich sources of these nutrients or to serum/plasma levels that could be markers of diets rich in these nutrients? The second criterion was: Did they provide a sufficient basis for relating intakes to the actual reduced risk of cancer (because there were no validated biomarkers at the time to serve as surrogates for cancer sites)?

The bottom line was that the FDA’s systematic evidence-based review (SEBR) led it to reject the health claim that antioxidants collectively and carotene specifically could protect against cancer. The government might have saved itself considerable expense if it had paid attention to the FDA’s SEBR performed 3 years before the huge intervention trial began.

Other Challenges

One quandary for application is that sodium, potassium, calcium, vitamin D, vitamin E, and linoleic acid DRIs are unrealistic values, given the North American food supply and dietary habits. Almost no one meets the numbers for these nutrients. While the science for setting DRI values takes precedence and should not be compromised because of real or perceived inconsistencies about what the population is eating, DRI reports may need to include more discussion about these problems when they occur.

While decisions about the use of DRIs for nutrition labeling are outside the purview of the DRI development process, related issues raise interesting questions, such as what to do if there is no DRI (e.g., trans fat), what to do if there is an AI (e.g., calcium), how to identify a single dietary value if there is a distribution range, and how to choose between an EAR and an RDA.

It should be remembered that people use food labels to choose among food products, not to formulate their diets.

Whether an approximate (e.g., interpolated) EAR that is scientifically based can be derived when the data are nonexistent or inadequate should be investigated. If it can be derived, the best way to express that value to make it more useful should be determined. Consistent guidelines should be developed for setting uncertainty factors and for rating the overall evidence for a DRI value, based on the strength of the data, the consistency, the public health relevance, and the applicability to the person or persons of interest.

Usefulness of the DRI Framework and Conclusions

The DRI framework has often been found not to be useful for planning for groups, such as WIC, primarily because too many assumptions have to be made (e.g., the distribution of intakes will not change with a particular intervention).

For planning for individuals, it is questionable how the RDA is to be used. The RDA is probably most useful as a goal that is either met or not met. For assessing individual dietary adequacy, the probability equations have been found to be too cumbersome to use; as a result, only 5 percent of dietitians admit to using them. However, for assessing intakes of groups, such as WIC, the framework has worked well.

In summary, the pros and cons of the past paradigm are listed below.

Pros
• A comprehensive review of scientific literature at the time was performed.
• A risk assessment model was developed.
• The framework for assessing group dietary intakes worked well using the EAR cutpoint method for prevalence of inadequacy.

Cons
• For the most part, the health endpoint data on which to base DRIs were lacking.
• Variance data were lacking.
• It was necessary to make many extrapolations, the scientific validity of which was unknown.
• Long-term data were limited.
• The uncertainty factors for deriving ULs were very subjective.

CASE STUDY: APPLYING THE DRI FRAMEWORK TO CHRONIC DISEASE ENDPOINTS

Presenter: Paula Trumbo

The conclusion that the “reduction in risk of chronic disease is a concept that should be included in the formulation of future RDAs where sufficient data for efficacy and safety exist” (IOM, 1994) had a notable impact on the DRI development process. It influenced the way in which nutrients were grouped for review, as noted in the following examples:

• Calcium and related nutrients were grouped together because of their role in bone health and general health.
• Antioxidants were reviewed together because of their potential role in reduction of risk of chronic diseases, such as cancer and CHD.
• Electrolytes were grouped because of their role in blood pressure and hypertension.

Moreover, a guiding principle that was conveyed to the DRI study committees was the need to review the evidence on chronic disease first to determine if it was possible to use such data to set a DRI.

Setting EARs Based on Chronic Disease Endpoints

Of the nutrients that were assigned reference values related to nutritional adequacy, only five were based on chronic disease endpoints. While the DRI study committees were encouraged to set an EAR rather than an AI because of the limited utility of the AI for assessment purposes, the reference values related to nutritional adequacy that were developed for nutrients based on chronic disease endpoints were all AIs. The endpoints were

• osteoporosis and fractures for calcium and vitamin D, as well as balance data and biomarkers for vitamin D;
• dental caries for fluoride;
• CHD for fiber; and
• a combination of endpoints, including salt sensitivity (a risk factor of hypertension), kidney stones, and blood pressure, for potassium.

An important question to ask is “Could EARs have been set using chronic disease endpoints if sufficient data had been available?” The EAR is an average daily nutrient intake level that is estimated to meet the requirement (defined by the nutrient-specific indicator or criterion of adequacy) of half the healthy individuals in a subpopulation. In Figure 2-1, at a very low intake of 30 units for nutrient X, there is a risk of inadequacy in 100 percent of the subpopulation. At an intake level equivalent to the EAR of 100 units, the risk of inadequacy is 50 percent. At an intake level of approximately 140 units, there is only a 2–3 percent risk of inadequacy for nutrient X (i.e., the RDA).

FIGURE 2-1 Estimated Average Requirement for hypothetical nutrient X. NOTE: EAR = Estimated Average Requirement.
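The curve in Figure 2-1 follows from assuming that individual requirements are normally distributed around the EAR. The short sketch below is illustrative only: the SD of 20 units is an assumption chosen so that the RDA of roughly 140 units sits two SDs above the EAR of 100 units, which reproduces the approximate risk figures quoted above.

import math

# Risk-of-inadequacy curve for hypothetical nutrient X, assuming individual
# requirements are normally distributed with mean EAR = 100 and SD = 20 units.
EAR, SD = 100.0, 20.0

def risk_of_inadequacy(intake):
    """Probability that a randomly chosen individual's requirement exceeds the intake."""
    z = (intake - EAR) / SD
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

for intake in (30, 100, 140):
    print(intake, round(100 * risk_of_inadequacy(intake), 1), "% at risk")
# 30 -> ~100%, 100 -> 50%, 140 -> ~2.3%, matching the points read off Figure 2-1.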

This DRI paradigm worked well when the EAR was based on essentiality because nutrient-specific indicators were being used, such as balance data for molybdenum, factorial data for iron and zinc, status biomarkers that were unique to copper and vitamin E, and turnover data for iodine and carbohydrate. Furthermore, endpoints of inadequacy could be used to set an EAR because all individuals are at risk of inadequacy for essential nutrients.

The challenge in fitting a chronic disease endpoint into this DRI paradigm is illustrated by a clinical trial that evaluated potassium intake and frequency of salt sensitivity. This trial provided multiple doses of potassium to individuals who were consuming very high levels of salt. The highest frequency of salt sensitivity occurred at a very low level of potassium intake (30 mmol/day) and was about 78 percent for African Americans and 37 percent for Caucasians (Figure 2-2). It became obvious that it was difficult to apply data from such a trial to the DRI paradigm, which assumed that the risk of inadequacy at very low intake is 100 percent for the population.

If the EAR is to be based on chronic disease risk reduction rather than reduction of the risk of nutrient inadequacy, then the definition of the EAR would be the nutrient intake level to reduce the risk of chronic disease in half the healthy individuals in a particular subpopulation, or to achieve an absolute risk reduction of 50 percent (where absolute risk is the probability of getting a disease over a certain time and is affected by the relative risk of a particular risk factor, such as intake of an individual nutrient).

Each component in absolute risk reduction has challenges. One is the assumption that the absolute risk of a chronic disease is 100 percent for a subpopulation, as is the case for risk of inadequacy based on essentiality. Perhaps this is the case for dental caries, but it is not the case for other disease endpoints, such as osteoporosis, CHD, and kidney stones. The absolute risk of osteoporosis is not 100 percent, even for Caucasian postmenopausal women, and the absolute risk for CHD is even less than that for osteoporosis. The prevalence of kidney stones is approximately 5 percent, and the prevalence is even less for certain individual cancers, depending on the type.

FIGURE 2-2 Effect of potassium intake on frequency of salt sensitivity in nonhypertensive African American men (solid bar) and white men (gray bar). SOURCE: Morris et al. (1999). Normotensive salt sensitivity: Effects of race and dietary potassium. Hypertension 33(1):18–23.

The other challenge is observing a chronic disease risk reduction of as much as 50 percent in response to the intake of an individual nutrient. Chronic diseases are not nutrient specific. Rather, they are multifactorial, with other factors, such as genetics, age, environment, lifestyle, and other nutrients, contributing to the risk. Unlike the effectiveness of reducing the risk of a nutrient deficiency, risk reduction of most chronic diseases by diet is limited. For instance, although one of the endpoints considered for calcium was fracture risk, the DRI study committee chose to reject the observational data on fracture risk because of the influence of confounding factors. One reason given for not setting an EAR for vitamin D was that it could not account for the contribution of sunlight exposure, which is affected by a wide variety of factors (this would also influence reference values related to nutritional adequacy based on essentiality). For dental caries, while the absolute risk is probably at or near 100 percent in North America, the DRI study committee on macronutrients stated that caries occurrence was influenced by frequency of meals and snacks, sugar products, content of foods, oral hygiene, and exposure to fluoride.
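The interplay of absolute and relative risk described above can be made concrete with a short sketch. The baseline risk and relative risk reduction below are hypothetical values, not figures from the report; the point is only that when baseline absolute risk is far below 100 percent, even a sizable relative reduction from a single nutrient cannot approach an absolute reduction of 50 percent.

def absolute_risk_reduction(baseline_risk, relative_risk_reduction):
    """ARR = baseline absolute risk x relative risk reduction."""
    return baseline_risk * relative_risk_reduction

# Deficiency-style endpoint: everyone is at risk at very low intake (baseline 100%).
print(absolute_risk_reduction(1.00, 0.50))            # 0.5 -> 50 percentage points

# Chronic disease with a hypothetical 10 percent baseline risk and a 40 percent
# relative risk reduction from a single nutrient.
print(round(absolute_risk_reduction(0.10, 0.40), 3))  # 0.04 -> about 4 percentage points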

In general, individual nutrients do not usually yield a relative risk reduction as high as 50 percent. The AI for fiber was based on three prospective cohorts, with a relative risk reduction for CHD ranging from 16 to 41 percent. The greatest risk reduction was only 41 percent, because CHD, like other chronic diseases, is multifactorial. An individual nutrient would not be expected to result in a risk reduction of that magnitude. The AI for potassium was based, in part, on the risk of kidney stones, where three prospective cohorts yielded a relative risk reduction ranging from 21 to 51 percent. Thus, while one of the three prospective cohorts observed a 51 percent risk reduction at the highest quintile of potassium intake, the weighted average relative risk reduction would fall short of 50 percent.

Another complication was the macronutrients, particularly fat and carbohydrate, because of their interrelatedness in the diet. Although the DRI macronutrient study committee tried to define specific reference values related to nutritional adequacy for the individual macronutrients, it became obvious this was not possible. Thus, the AMDRs were developed and set for the macronutrients, and some of them were based, in part, on risk biomarkers of chronic disease, such as CHD for fat.

In summary, the challenges of using chronic disease endpoints for setting an EAR/RDA are

• a nutrient-specific indicator is not being applied;
• the absolute risk of most chronic diseases applies to only a portion of the population; and
• achieving risk reductions as high as 50 percent is very difficult for most chronic diseases because of the multifactorial nature of chronic diseases.

Therefore, the definition of an EAR does not allow for the use of chronic disease risk reduction in setting recommended intake levels, which is an opinion shared by many who have worked closely with the DRI process.

Setting ULs Based on Chronic Disease Endpoints

Chronic disease endpoints have also been used to set ULs. A UL could be set if sufficient data were available for identifying a LOAEL or, preferably, a NOAEL. For essential nutrients, only one UL was set based on a chronic disease endpoint: sodium and blood pressure (a surrogate endpoint for cardiovascular disease [CVD]). A NOAEL could not be identified because of the lack of a threshold, and it was not known if blood pressure would continue to drop below the lowest sodium intake level provided (50 mmol/day). However, because sodium is essential, an AI was set based on factorial data (65 mmol/day). The UL was set based on a LOAEL of 100 mmol/day, even though a threshold was lacking.

For nonessential macronutrients, such as trans fat, cholesterol, and saturated fat, there was also no observed threshold effect using risk biomarkers of CHD. As intake of any of the three macronutrients decreased, the biomarkers of heart disease (e.g., change in total cholesterol, change in LDL/HDL cholesterol) continued to decrease. The lowest intake levels approached zero for percentage change in these risk biomarkers. Because these three macronutrients are not essential, the UL should be 0 percent of energy; however, this level would have required extraordinary and unachievable changes in dietary patterns. Therefore, a UL could not be set for these three macronutrients.

Implications for Reference Values Related to Nutritional Adequacy

The challenges of setting reference values related to nutritional adequacy based on chronic disease risk reduction were recognized when the framework for revising the RDAs was being considered. A 1994 IOM report stated that “If reduction of risk of chronic disease is to become a criterion in the development of future RDAs, many questions must be faced” (IOM, 1994). Some of these questions and associated comments follow:

• “How can concerns regarding potential interactions among nutrients be addressed?” This could include the interaction of nutrients that are confounders of disease risk.
• “Should levels of nutrient intake be expressed in terms of numerical ranges, in terms of food patterns, or in some other way?” Numerical ranges (AMDRs), rather than a specific intake level, were set for the macronutrients.
• “How can desirable levels of intake be extrapolated for groups not included in clinical trials (such as children, adolescents, young adults, and the elderly)?” Gender and, most often, age can be confounders of disease risk.

Furthermore, at the 1993 FNB meeting that preceded the 1994 IOM report, some commenters argued that “the RDAs should remain distinct from the dietary guidelines for reducing the risk of chronic disease.”

Despite the limitations in the use of the AI, setting AIs based on chronic disease worked rather well. This is because a prescriptive approach was not being used to derive AIs as it was for setting EARs and therefore RDAs. Another issue is that AIs can be based on observed or experimentally determined estimates of intake (i.e., observational studies that alone were sufficient for setting AIs, but not EARs). Furthermore, the AI is expected to meet or exceed the amount needed to maintain a defined nutritional state or criterion of adequacy for essentially all members of a specific subpopulation (i.e., the RDA).

Along with that, the reference values related to nutritional adequacy based on chronic diseases would be expected to be greater than reference values related to nutritional adequacy based on the daily requirement for many essential nutrients (e.g., if an RDA for potassium had been set based on essentiality, it would have been much lower than the AI of 120 mmol/day based on chronic disease risk reduction).

Possible approaches for addressing chronic disease endpoints in terms of reference values related to nutritional adequacy include the following:

• Continue to set AIs based on clinical/observational data.
• For the macronutrients, continue to set AMDRs, particularly in the lower range, based on clinical/observational data, as well as dietary intake data.
• Develop a new criterion/DRI that provides a prescriptive way to set recommendations based on chronic disease endpoints.

In addition, the approach used to set the upper range of the AMDRs might be useful in setting a maximum intake level for nonessential nutrients without a threshold or NOAEL by relying on clinical/observational data and dietary data (e.g., menu modeling and survey data).

CASE STUDY: APPLYING THE DRI FRAMEWORK TO NON-CHRONIC DISEASE ENDPOINTS

Presenter: Allison Yates

Discussions about the experience of using non-chronic disease endpoints (specifically adequacy status endpoints) to establish DRIs benefit from acknowledging some underlying realities. First, the reality is that “no decision is not an option,” meaning that the absence of some type of DRI value leaves a scientific gap and is problematic for users, particularly government policy makers. The DRI process therefore focused on deriving a value whenever possible or offering good justification when it was not possible. Second, endpoints reflect a “continuum of adequacy” for every nutrient, whether to prevent a frank deficiency state or a chronic disease. The expectation is, particularly in deriving reference values related to nutritional adequacy, that a quantitative determination of adequacy can be developed based on a validated biomarker methodology with a dataset (the evidence-based component).

Key components of the DRI framework are as follows:

• A criterion of adequacy or excess based on decreasing the risk or a validated biomarker with strong evidence
• An EAR based on a reliable dose–response, so that half of the individuals have inadequate intakes
• Primary endpoint data from more than one laboratory
• A UL based on chronic intake and a serious adverse effect

To describe the experience of using non-chronic disease endpoints, three nutrients are highlighted in this discussion:

• Vitamin C, an example of an antioxidant with a known continuum of adequacy
• Iodine, as an example of a deficiency state of significant public health importance in many parts of the world today
• Vitamin K, as an example of a nutrient with poorly characterized intake and requirements when compared with other nutrients

Antioxidants: Vitamin C

The study committee on dietary antioxidants and related compounds not only was asked to develop dietary reference levels of intake, but also was given other tasks. They were defining dietary antioxidants, reviewing the scientific literature on the antioxidants and selected food components that may influence their bioavailability, addressing the safety of high intakes, and providing guidance on uses of the developed reference intakes. The study committee defined dietary antioxidant as a substance in foods that significantly decreases the adverse effects of reactive species, such as reactive oxygen and nitrogen species, on normal physiological function in humans. In addition to vitamin C, vitamin E, and selenium, the study committee examined data about β-carotene and other carotenoids (α-carotene, β-cryptoxanthin, lutein, lycopene, and zeaxanthin).

The continuum for vitamin C is shown in Figure 2-3. Very low levels of vitamin C are required to prevent scurvy. Bleeding gums occur at a slightly higher level. Urinary excretion is observed at about 60 mg. Although many have evaluated the effect of vitamin C on chronic disease, that endpoint was not chosen by the panel. Diseases associated with increased levels of reactive oxygen and nitrogen species include age-related eye disease, atherosclerosis, cancer, CHD, diabetes, inflammatory bowel disease, neurodegenerative disease, respiratory diseases, and rheumatoid arthritis. All of these have other causative factors in the diet and the environment, and genetics plays a major role, which makes it difficult to use these as criteria of adequacy. Possible biomarkers for vitamin C include inhibition of superoxide in neutrophils, oxidative deoxyribonucleic acid (DNA) and chromosome damage, immune markers, and relationship to chronic disease outcomes.

FIGURE 2-3 Vitamin C endpoints: a continuum of increasing vitamin C intake (mg/day), from scurvy (near 0), to bleeding gums (about 10), to capacity to repair (urinary excretion; 100 percent neutrophil saturation, about 70), to possible excess and chronic disease effects at undetermined higher intakes.

What was chosen as the indicator of adequacy for adults was the capacity to repair (neutrophil saturation with ascorbate and the ability to deal with superoxide compounds) at a level of about 70 percent saturation, which was also the point at which urinary excretion of vitamin C became appreciable.

The study committee wanted scientifically valid experiments. It was looking to measure relevant biomarkers that were significantly related to the disease endpoints, were based on in vivo experiments, and played a role in health. It also wanted reliable intake data. What it did not want were strictly observational data, strictly antioxidant-type functions, or overreliance on animal data and associations, rather than causation.

The findings can be found in the IOM report on vitamin C and other related nutrients (IOM, 2000b). The rationale for the recommendation for vitamin C was that there was no accepted methodology comparing vitamin C intake with an in vivo antioxidant effect; they could find vitamin C functioning as an antioxidant in white blood cells or neutrophils, but there were no data relating that to intake; and there were data relating leukocyte ascorbate levels to liver and body pools of ascorbate.

EARs and RDAs were developed for children and adults as well as pregnant and lactating women. Most of the values were based on extrapolation from data from one study in men with a small sample size. Research recommendations were made, indicating that more data are needed in certain areas, including the establishment of a reliable functional biomarker, interaction of vitamin C and iron, and the effect of vitamin C supplements on the fetus.

The IOM (2000b) applied the EAR cutpoint methodology to vitamin C intake from the National Health and Nutrition Examination Survey (NHANES), showing that the intakes of 10 percent of women and 21 percent of men were below the EAR. The value of the EAR is that one could assume there was a similar percentage of lower levels of saturation of ascorbate and ability to deal with superoxides, not that scurvy itself was of concern.
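The EAR cutpoint method referred to here is straightforward to express computationally: the estimated prevalence of inadequacy in a group is the proportion of usual intakes falling below the EAR. In the sketch below, the intake distribution is synthetic (the log-normal parameters are arbitrary, not NHANES estimates); only the cutpoint logic itself reflects the method described above.

import random

# EAR cutpoint method: prevalence of inadequacy = share of usual intakes below the EAR.
random.seed(0)
EAR_VITAMIN_C_WOMEN = 60  # mg/day, the EAR for adult women (IOM, 2000b)

# Synthetic usual-intake distribution standing in for survey data.
usual_intakes = [random.lognormvariate(4.5, 0.6) for _ in range(10_000)]

prevalence = sum(intake < EAR_VITAMIN_C_WOMEN for intake in usual_intakes) / len(usual_intakes)
print(f"Estimated prevalence of inadequacy: {prevalence:.1%}")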

Micronutrients: Iodine

Iodine is an essential component of the thyroid hormones involved in the regulation of various enzymes and metabolic processes. The continuum for iodine (Figure 2-4) goes from cretinism at very low levels through iodine accumulation to higher levels of urinary excretion.

FIGURE 2-4 Iodine endpoints: a continuum of increasing iodine intake (µg/day), from cretinism and mental retardation, to goiter, to iodine accumulation and turnover (about 70), to urinary excretion above 100 µg/L (about 100), to elevated TSH levels at higher intakes. NOTE: TSH = thyroid-stimulating hormone.

The iodine EAR for adults is based on iodine accumulation (IOM, 2001). Three studies were available on thyroid iodine (radioiodine) accumulation and turnover in adults, but they were limited by small sample sizes. The requirements were 96.5 µg/day (n = 18) (Fisher and Oddie, 1969a), 91.2 µg/day (n = 274) (Fisher and Oddie, 1969b), and an absolute iodine uptake of 21 to 97 µg/day (n = 3) (DeGroot, 1966). The study committee on micronutrients selected turnover as the basis for the requirement and calculated an EAR of 95 µg/day, which was assumed to be adequate for about half the individuals. This EAR for adults 19–50 years of age was extrapolated to other parts of the population (e.g., >51 years).

For children 1–3 years, an iodine balance study on nutritionally rehabilitated children 1.5–2.5 years of age (Ingenbleek and Malvaux, 1974) gave an EAR of 65 µg/day. As the EAR extrapolated from adults would be ~36 µg/day, the study committee used the balance study as a basis for the EAR, as it resulted in a higher estimate. The same occurred with the age group 4–8 years, but the EAR was based on a different iodine balance study (Malvaux et al., 1969). In the case of children 9–13 years of age, the actual iodine balance data in children that age (Malvaux et al., 1969) resulted in an EAR of 55 µg/day. As extrapolation from adults gave an EAR of 73 µg/day, the study committee used the more protective higher estimate.
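How extrapolated values of roughly 36 and 73 µg/day can arise from the adult EAR is suggested by the metabolic body-weight scaling generally described in the DRI reports, in which a child value is the adult value multiplied by the ratio of body weights raised to the 0.75 power, with an allowance for growth. The sketch below uses illustrative reference weights and growth factors; these are assumptions for illustration, not the committee's exact inputs.

# Sketch of metabolic body-weight extrapolation:
# EAR_child = EAR_adult * (weight_child / weight_adult)**0.75 * (1 + growth_factor).
# Reference weights and growth factors below are illustrative assumptions.

ADULT_EAR_IODINE = 95.0   # µg/day
ADULT_REF_WEIGHT = 68.5   # kg, illustrative average of adult men and women

def extrapolate_ear(child_weight_kg, growth_factor):
    scaling = (child_weight_kg / ADULT_REF_WEIGHT) ** 0.75
    return ADULT_EAR_IODINE * scaling * (1 + growth_factor)

print(round(extrapolate_ear(13, 0.30)))   # children 1-3 y: roughly 36 µg/day
print(round(extrapolate_ear(40, 0.15)))   # children 9-13 y: roughly 73 µg/day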

Vitamin K

Not much information is available on vitamin K, which is required as a coenzyme for the synthesis of proteins active in blood coagulation and bone metabolism. Dietary intake data were available to the micronutrient study committee, but the order or continuum of endpoints considered (including possible relationships with osteoporosis and atherosclerosis) had not been identified (Figure 2-5). The median intake from NHANES III was the basis of the AIs for children 1 year and older to adults.

FIGURE 2-5  Possible vitamin K endpoints. The order of potential endpoints was not identified, and the relevance of some endpoints is uncertain. Candidate endpoints along increasing intake of vitamin K (µg/d): hypoprothrombinemia; plasma Factor VII; urinary GLA residues; undercarboxylated prothrombin and osteocalcin; osteoporosis?; atherosclerosis? NOTE: GLA = γ-Carboxyglutamic acid.

Conclusion

The major challenges experienced in setting reference values using non-chronic disease endpoints result from the existence of a continuum of adequate levels of intake reflective of the possible endpoints that could be selected. This continuum is different for different nutrients. Moreover, the quantitative determination is critical, regardless of what nutrient is being considered. Finally, challenges always arise when scientific judgment must be used; it is used frequently when data are limited, yet there is the clear need to derive reference values so that policy decisions can be based on some scientific data as opposed to no scientific data. DRI development is a long-term, iterative process, and so we should expect that new data will provide new answers.

DISCUSSION: FRAMEWORK PROS/CONS; CASE STUDIES

Co-Discussants: Patsy Brannon and Alice H. Lichtenstein

The session moderator, Dr. Stephanie Atkinson, introduced the discussants and invited each one to offer an opening remark.

30 THE DEVELOPMENT OF DRIs 1994–2004 Discussant Opening Remarks Dr. Brannon opened the discussion by reflecting that our concept of good nutrition has evolved as nutritional and biomedical sciences have advanced. Because we now think in terms of decreasing risk of disease as well as eliminating deficiencies, the DRI framework has become much more complex, and it may become even more so as our understanding of the interactions among genetics, nutrients, diet, and environment continues to increase. She raised several important issues: the need to make decisions based on limited data; the need to determine whether the data have evolved sufficiently to allow EARs to be estimated for the nutrients for which AIs were established; the value of conducting SEBRs; the setting of priorities between innovative research and research that deepens our understanding but does not necessarily advance our knowledge; and the use of the risk assessment model. Dr. Lichtenstein mentioned additional issues concerning the need to (1) address requirements for single or very similar nutrient groups rather than for such large groupings of very different nutrients; (2) foster study commit- tees’ ability to consider the unique aspects of each nutrient; (3) reconsider life stage groupings for nutrients, given that the values are needed even in the face of limited data and more data are available for some life stage groupings than for others; (4) increase understanding about the nature of the goals the DRIs reflect, particularly that some values reflect a goal to in- crease intake (vitamin A) and others a goal to decrease intake (cholesterol); and (5) encourage further discussion about the multiple causative factors for chronic diseases, particularly as it may relate to separating dietary pat- terns and individual nutrients. General Discussion Drs. Russell, Trumbo, and Yates joined the discussants at the dais, and a brief group discussion took place. The initial focus included macronutri- ent recommendations, the target population for DRIs, the ability to achieve DRI intake levels through typical diets, and the value in constituting a single study committee to develop both EARs/RDAs and ULs. Macronutrients One participant noted that quantitative reference values were not given in the reports on macronutrients; rather, advice such as “intake should be as low as possible while consuming a nutritionally adequate diet” was provided. She suggested that a quantitative value would have been useful and could have been developed through the use of modeling techniques. Another participant suggested that a different approach for developing

CONCEPTUAL FRAMEWORK FOR DRI DEVELOPMENT 31 DRIs for macronutrients may be needed as compared with that for vita- mins and minerals. Furthermore, efforts to improve communication about macronutrient recommendations should be pursued. For example, a rec- ommendation for “up to 25 percent of energy” for sugar intake could be misinterpreted as the target goal for intake. Target Population The moderator noted that the morning’s presentations included the issue of setting DRIs for “healthy populations” and asked whether obesity would be viewed as a disease. Another participant commented that overall there was a need to establish a “representative state of health” that might serve as a starting point for DRI development. Nonetheless, there was agreement that this issue represents a quandary. DRI Recommendations Versus Estimated Intake A participant asked about the appropriate strategy when a scientifically valid reference value for intake cannot be achieved realistically with the cur- rent diet. One response was that the answer would depend on whether the endpoint selected for the value reflected a public health concern or not, and that consideration should be given to the level of risk to be tolerated, with the view that the EAR is the median estimated requirement. The discussion continued, focusing on supplementation as a solution, noting specifically the possible role of targeted supplementation. These comments led one participant to remark on the value of tasking a single study committee with responsibilities for both EAR/RDA and UL development because these ref- erence values at some point become highly related. It was noted that during DRI development the groups had collaborated; however, given the amount of information to review, those responsible for EARs or ULs were often unwilling or unable to review other draft sections of the report. Dietary Patterns When the discussion was opened to all members of the audience, one participant noted that during the past 10 years, the development approach moved from specifying a “black-and-white” cutoff in the form of an RDA to consideration of a probability model. This approach made it clear that there was a distribution of requirements in the population. Given this probability paradigm and the interest in dealing with chronic disease, he suggested that consideration of dietary patterns would be useful and could address in part the multiple causes of disease; it also would have more direct clinical and practical application. Furthermore, advances in genetics and nutrigenomics may follow the identification of groups of people who

32 THE DEVELOPMENT OF DRIs 1994–2004 are at greater risk or who will benefit from a modified pattern of nutrient intake. Another audience member pointed out that confounding may be introduced by wide use of dietary supplements. Endpoint Continuum One participant remarked that there is no clear distinction between a chronic disease endpoint and a more traditional adequacy endpoint. Instead, all endpoints are part of a continuum. He also noted the need to focus on the way changes in nutrient intake impact biomarkers as well as nutrient interactions. Another participant preferred the use of physiological endpoints rather than chronic disease endpoints. He offered the example of dark adaptation. It is not a disease but a dysfunction of physiology, and such a condition can continue for years without progressing to the next stage. Estimating Intake With regard to the activities for setting DRIs, one commenter indicated that it is important to consider the effects of measurement error (imprecise measurement of the true usual intake) in self-reports of dietary intake, par- ticularly when food frequency questionnaires are used. In general, failing to adjust for measurement error causes problems in estimating relationships between diet and health outcomes by attenuating the true relationship be- tween diet and the outcome. AIs and Chronic Disease A participant admitted to being a vocal opponent of AIs, but was per- suaded to consider using AIs when chronic disease endpoints are used, as other countries have done. She questioned whether it would be possible to have EARs and RDAs for essentiality and perhaps an AI for the same nutri- ent as appropriate for reducing the risk of chronic disease. It was considered possible, but there would be ramifications for user applications, including considerable need for information and education. TWO PERSPECTIVES: THE DRI FRAMEWORK Perspective I Presenter: George Beaton I have watched the evolution of DRIs and their application, with the sense that I have been responsible for part of it. I now stand before you

CONCEPTUAL FRAMEWORK FOR DRI DEVELOPMENT 33 to say that we have gone too far. The probability approach that everyone espouses, and which I promoted, continues to be a useful concept. How- ever, in terms of its current application, particularly to individuals, we have gone far beyond the data. We must retreat a bit and ensure that we ground ourselves in science. There will be negative remarks at this workshop, of course—what the DRIs did not accomplish. Yet it is important to recognize what the DRIs did accomplish. First, they are probably the most comprehensive literature review in nutritional science to date. Second, they provide formal recog- nition of the importance of the EAR and the fact that we cannot escape dealing with distributions. Failure to recognize this reality is a major cause of confusion and controversy in the nutrition community. Third, the intro- duction of the UL was a major breakthrough because our community tends to believe that more is always better and thus encourages the use of often unnecessary supplements. We must build on what we have accomplished in the DRI process and move forward. The sole purpose of the DRI development process is to foster the ap- plication of nutritional science. If this activity advances that science or pro- motes research, so much the better. But that is not its purpose. Rather, the goal is to apply the principles of scientific analysis throughout the process. There will always be issues of judgment, not to mention individuals with strong views. But the most important task is to ground the DRI process in the principles and concepts of the scientific method. From the views being expressed by users, it is clear that everyone wants a single number—not distributions or ranges—that fits their application. However, they have to remember that the numbers are not the same for all the applications. It has been a problem from the beginning that the number needed should differ among applications. The original driving force behind reference values or nutritional stan- dards was to try to plan food supplies for war-torn countries and then survivors of prison camps. This purpose was carried on by the Food and Agriculture Organization of the United Nations and by agriculture minis- tries in, for example, the United States and Canada. Many other applica- tions now exist, including food programs, nutrition labeling, and individual counseling. Each application is different. It is impossible to develop a single number that would fit all applications. It is possible to develop core param- eters of the requirement and adverse effect distributions that will allow the development of evidence-based approaches to the diverse applications. The DRI process should not attempt to provide derived reference values for all applications. Only the core values that are absolutely needed should be derived. These are the central tendency of the requirement distribu- tion, the EAR, and the tolerable upper level, or UL. Groups with relevant expertise could then be convened to provide guidance on how to use the core values for specific types of application. The IOM committee that dealt

34 THE DEVELOPMENT OF DRIs 1994–2004 with nutritional labeling and fortification (IOM, 2003b) is an excellent example of providing guidance on the application of DRIs in particular circumstances. That should be seen as the model of the future. For the DRIs of the future, we should consider what core estimates must be available to serve the diverse needs of users. Having established these core needs, meeting them should be the primary objective of the exercise. Equally important, we must consider ways in which we can evaluate and validate DRI requirement estimates. We need biomarkers of require- ment that are meaningful and measurable in the field (e.g., refer to the at- tempted validation of the iron requirement estimates in the DRI report on iron [IOM, 2001]). The desirability and ultimate utility of examining the “reality” of the application of the DRI values can be illustrated with protein. Based on NHANES III protein intake data as used in the DRI reports to describe distributions of usual intake, I conducted dietary assessments using the methods provided as guidance for users. When the prevalence of apparently inadequate protein intakes as grams per kilogram body weight per day were considered by age, gender, and usable protein (taking into account likely digestibility and amino acid score, both of which fall as vegetable source protein increases in the diet [IOM, 2002/2005]), the results were surprising. No problem was apparent for youngsters, who are supposed to be vulner- able, but there was an approximately 25 percent prevalence of inadequate protein intakes in the older adult groups (over 50 years of age). Since the report on macronutrients suggested a major increase in lysine requirement which would affect the amino acid score, further examination was undertaken. Individuals were classified by vegetable protein intake constituting less than or more than 50 percent of the total dietary protein. For each subgroup, the estimated prevalence of protein inadequacy was estimated (Figure 2-6). A shocking 63 percent of women over age 51 ap- peared to have inadequate intakes if they consumed more than 50 percent of intake from vegetable sources (these persons were assumed to be vegetar- ians). This might imply a major nutritional problem among an identifiable subgroup of the United States population, a problem warranting a high priority for action. However, there is a serious quandary. These analyses were based on requirement expressed as grams per kilogram body weight per day, the original unit in which protein requirements were estimated. The DRI study committee requested that requirements be presented as grams per day applied to reference individuals (omitting any provision for variation in body size). I compared these modes of expressing protein requirements, specifically (a) grams per kilograms per day, (b) the grams per kilograms per day referred to relevant reference persons, a man weighing 70 kg and

a woman weighing 57 kg, and (c) the suggested lower limit of the AMDR (IOM, 2002/2005).

FIGURE 2-6  Impact of choice of dietary protein source on protein inadequacy. NOTE: High vegetable protein group = 12 percent of subjects.

The apparent “problem” of inadequate intakes was seen only when requirement was expressed as grams per kilograms per day (Figure 2-7). For females 19–50 years of age, the prevalence of apparent inadequacy dropped from 14 percent to 3.3 percent, and even lower using the AMDR lower limit (10 percent of energy as protein). Equally dramatic effects would be seen in the apparent prevalence of inadequacy among women over 51 years consuming vegetarian diets shown in Figure 2-6.

We are left with three different estimates of the apparent adequacy of protein intakes among adults in the United States. These range from the inference of the existence of a major public health problem in an identifiable subgroup of the population, to satisfaction that protein intakes are adequate for nearly all persons. But which estimate is valid? How do we determine the “truth” using independent measures? We have no field-applicable measure equivalent to the nitrogen balance criterion used to estimate requirements. That is a most unsatisfactory situation. Unfortunately, parallel situations hold for several other nutrients. How do we validate the estimated prevalence of inadequate intakes if we cannot measure prevalence by direct assessment? These issues are important both for national and regional planning and for scientific validation of requirement estimates, but we have not developed the concepts and tools we need to address them. This is an urgent need for future DRIs.
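The sensitivity described above can be reproduced in outline with simulated data. The sketch below is not Dr. Beaton's analysis; the intakes, body weights, and energy intakes are made up, and only the 0.66 g/kg/day adult protein EAR, the 57-kg reference woman, and the 10 percent-of-energy AMDR lower bound come from the DRI report (IOM, 2002/2005).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Simulated (illustrative) data for adult women: protein intake (g/day),
# body weight (kg), and energy intake (kcal/day).
protein_g = rng.normal(65, 20, n).clip(min=10)
weight_kg = rng.normal(68, 12, n).clip(min=40)
energy_kcal = rng.normal(1900, 400, n).clip(min=800)

EAR_G_PER_KG = 0.66          # adult protein EAR, g/kg body weight/day (IOM, 2002/2005)
REF_WEIGHT_KG = 57           # reference adult woman cited in the text
AMDR_LOWER_FRACTION = 0.10   # AMDR lower bound: 10 percent of energy from protein
KCAL_PER_G_PROTEIN = 4

# (a) Requirement expressed per kilogram of actual body weight
inadequate_per_kg = protein_g < EAR_G_PER_KG * weight_kg
# (b) Requirement expressed as grams per day for a reference individual
inadequate_per_day = protein_g < EAR_G_PER_KG * REF_WEIGHT_KG
# (c) Intake below the AMDR lower limit, expressed as a share of energy
inadequate_amdr = protein_g * KCAL_PER_G_PROTEIN / energy_kcal < AMDR_LOWER_FRACTION

for label, flag in [("g/kg/day", inadequate_per_kg),
                    ("g/day (reference woman)", inadequate_per_day),
                    ("<10% of energy", inadequate_amdr)]:
    print(f"{label:<26} apparent prevalence of inadequacy: {flag.mean():.1%}")
```

The point of the sketch is only that the same set of intakes can yield very different apparent prevalences depending on which descriptor of protein need is applied.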

FIGURE 2-7  Impact of choice of assessment criterion for protein, adults 19–50 years of age. Bars show the prevalence of apparent inadequacy (percent) for males and females under three descriptors of protein need: g/kg/d (the original expression), g/d (the DRI expression), and <10 percent of kcal (the AMDR). NOTE: DRI = Dietary Reference Intake; AMDR = Acceptable Macronutrient Distribution Range.

From my perspective, there are dreams about the things we would like to have for use in developing future DRIs, shown on the left in Table 2-1, and there are the realities of whether we can generate them with a foundation in science, shown on the right of the table. Much depends on the precision wanted or needed in any final application. It also depends on how far we are prepared to abandon science in favor of opinion and judgment. In the end, we are constrained by a realization that what we dream is desirable and helps drive scientific advancement, but what we must deal with is reality. We must temper our theories, approaches, desires, and dreams with reality. We must always remember that the whole purpose of any DRI process must be to come up with evidence-based information that can be applied to real life.

TABLE 2-1  Dreams Versus Realities of the DRI Process
Desired Information: Feasibility of Obtaining This Information
Requirement distribution midpoint (EAR): Yes (for some)
Full requirement distribution (e.g., CV): No (for all except at very high cost)
Midpoint intake of detrimental effect distribution: No (for all)
Judged start of risk of detrimental effect (UL): Yes (for some)
Usual intake distribution, groups and populations: Yes (if data collected)
Usual intake, individual: No (for all)
Correlation between intake and requirement: No (for all)
NOTE: EAR = Estimated Average Requirement; CV = coefficient of variation; UL = tolerable upper intake level.

Perspective II

Presenter: Janet King

My perspectives on the DRIs come from two experiences: as chair of the FNB in 1994 and as chair of the Dietary Guidelines Advisory Committee in 2005.

In 1994, we began the process of revising the RDAs, defined as “the levels of intake of essential nutrients that, on the basis of scientific knowledge, are judged by the FNB to be adequate to meet the known nutrient needs of practically all healthy persons” (NRC, 1989a). We wanted to incorporate new research about the role of diet and specific nutrients in the prevention or reduction of the risk of chronic disease, and we recognized that the RDA alone was not going to be sufficient for all applications. The DRIs evolved over the next 10 years.

Two features in particular set the DRIs apart from the old RDAs. One is that they are based on an explicit functional or physiological criterion or endpoint. The second is that each criterion has a distribution of requirements within the population, assumed to be normal for most nutrients. These changes led users to ask whether different applications required different criteria, and they fostered a false sense of confidence about the precision of our understanding of the distribution of intakes and requirements in the population. We learned that nutrient requirements are known for only small groups of individuals at one point in time and in one setting, so that standards set for populations are only estimates; and that the goals and process for estimating nutrient requirements differ from those of estimating healthy food patterns to prevent chronic disease and should not be mixed.

DRIs are only as good as the science base on which they are built. Many DRIs stem from metabolic studies, which have limitations as an approach to determining nutrient requirements: healthy people are usually studied, large changes in nutrient intakes are required to overcome homeostatic control of the endpoint (which makes it difficult to get a precise estimate of nutrient requirements), and the studies are expensive. Moreover, available studies often have small sample sizes, which make it difficult to evaluate the true variance in nutrient requirements. Research is needed to give us sensitive, specific measures of nutrient requirements that integrate genetic, environmental, and developmental influences (Figure 2-8).
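As a small illustration of how that assumed requirement distribution is used in practice, the usual DRI convention places the RDA two standard deviations above the EAR, with a default coefficient of variation (commonly 10 percent) standing in when the true variance cannot be estimated. The EAR value below is arbitrary and serves only to show the arithmetic.

```python
from scipy.stats import norm

def rda_from_ear(ear, cv=0.10):
    """Under a normal requirement distribution, the RDA is placed at
    EAR + 2*SD, where SD = CV * EAR (10 percent is the common default)."""
    return ear + 2 * cv * ear

ear = 60.0                 # illustrative EAR, in the nutrient's own units
rda = rda_from_ear(ear)    # 72.0 with the 10 percent default CV
covered = norm.cdf(2)      # share of individual requirements at or below the RDA
print(f"RDA = {rda:.1f}, covering about {covered:.1%} of individuals")
```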

FIGURE 2-8  Research need: Sensitive, specific measures of nutrient requirements that integrate genetic, environmental, and developmental influences. The figure relates the phenotype (body fat, serum cholesterol, blood pressure, bone density, enzymatic alleles, etc.) to “omics” (genomics, transcriptomics, proteomics, metabolomics), the environment (diet, geography, climate, education, occupation, lifestyle), and development (age, sex, early nutrition, developmental stage).

Several implications of incorporating chronic disease prevention into the DRIs were unanticipated:

• It immediately expanded the science base. We learned that endpoints for reducing the risk of chronic disease have multiple dietary determinants, that the relationship between specific nutrients and disease endpoints varies widely in a population, and that the quantitative information needed to relate nutrients, foods, or food patterns to chronic disease is extremely limited.
• It led to the development of mixed criteria for recommendations and a set of DRIs that gave widely disparate standards among the nutrients. The AIs for calcium and vitamin D are a good example. Both of them were set to reduce the risk of osteoporosis or bone disease, but the endpoints were very different: a desirable calcium retention level for bone for calcium, and the amounts needed to maintain normal serum 25-hydroxyvitamin D levels for vitamin D.
• The expertise needed to incorporate reducing the risk of chronic disease into the dietary guidelines required a more diverse committee and placed multiple demands on that committee.
• It led to a new set of recommendations, the AMDRs, defined as the “range of intakes for a particular energy source that is associated with a reduced risk of chronic disease while providing adequate intakes of essential nutrients” (IOM, 2002/2005). However, the AMDRs have flaws that stem from the inadequacy of the research data from which they were derived. Evidence is accumulating that the types of carbohydrates and fat may be more important than the

CONCEPTUAL FRAMEWORK FOR DRI DEVELOPMENT 39 total amounts. There was no consideration of the relationship be- tween the intake of sugar and the risk of chronic disease. There were no specific recommendations for dietary fiber or monounsaturated fatty acids. Users were unsure how to apply the qualitative standards for cholesterol, trans fatty acids, saturated fats, and sugars. No data were available on upper protein intake levels; the value was actually derived from the AMDRs for carbohydrates and fat, which led to two different sets of protein recommendations. It is becoming apparent that the DRIs and the Dietary Guidelines for Americans have different primary purposes. The DRIs are estimates of nu- trient requirements, primarily to prevent nutrient deficiencies and excesses. The dietary guidelines are the basis for the food and nutrition policies in the United States and Canada and for consumer food and nutrition education by the government. In the United States, the DRIs are the science back- bone of all nutrition policy. From the DRIs stems the Dietary Guidelines Advisory Committee report that translates the DRIs and additional science linking food, physical activity, and chronic disease into dietary guidelines. From that come two other reports: the Dietary Guidelines for Americans policy document and a companion consumer brochure, both prepared by the U.S. Department of Health and Human Services and the USDA. There is also MyPyramid, which develops a food pattern for individuals that stems from both the DRIs and the dietary guidelines. We need to clearly delineate standards for preventing nutrient defi- ciency from standards for preventing chronic disease. They have different goals. Different scientific committees with different expertise are needed to address their development, and separate scientific documents are needed to provide the evidence base for the subsequent government and other reviews of these reports. Whether the IOM or a government advisory committee should be responsible for developing these science-based reports is not clear. Also, our research keeps evolving, which leads to questions about which in- formation should be part of the DRIs or the dietary guidelines. For instance, there is science that links the proportion and sources of food and nutrients to individual metabolism. Furthermore, there is emerging research on the link between specific foods in the diet and physiological function. In conclusion, we, as scientists and users, need to clearly define the DRIs before establishing the next process. We should think about establish- ing EARs and ULs for only one criterion per age/gender group. We need to clearly differentiate the standards for reducing deficiencies of essential nutrients from nutrient and food intakes for reducing chronic disease. We need to keep it simple and, if necessary, have separate reports for differ- ent applications to ensure that we are addressing the primary goal in each report. We need to try to find a logical, clear way forward.

40 THE DEVELOPMENT OF DRIs 1994–2004 EVALUATING EVIDENCE FOR DRI DEVELOPMENT: WHAT ARE THE ISSUES IN APPLYING SYSTEMATIC EVIDENCE- BASED REVIEW APPROACHES TO DRI DEVELOPMENT? Presenter: Alice H. Lichtenstein The formal use of SEBRs as part of the DRI development process is intended to supplement, rather than displace the efforts of the IOM study committees and in many cases allow them to focus their limited time on interpreting the available data rather than identifying and collating the in- formation. Therefore, it is helpful to first address general issues about what SEBR can and cannot be expected to do. These are listed in Box 2-4. Description of SEBR SEBR comprehensively identifies and tabulates available literature. Ap- propriately defined questions can supplement traditional approaches to DRI development and increase the consistency of the process. SEBRs are defined by the IOM study committees and other stakeholders. Once the ques- BOX 2-4 What Systematic Evidence-Based Review (SEBR) Does and Does Not Do What SEBR Is • SEBR is one tool for use by the study committee as it deliberates during the Dietary Reference Intake (DRI) process. • SEBR allows study committee members to focus on the larger picture. Many factors go into making decisions and evaluating the nutritional literature be- yond a systematic analysis of the available literature. • SEBR can offer increased transparency to the DRI process. • SEBR expands the documentation process in a way that allows for more ef- ficient updating as new data emerge. • SEBR more precisely identifies research gaps and, therefore, provides a more persuasive argument to target specific funding to close some of those gaps. What SEBR Is Not • SEBR does not “automate” the review process and relegate decisions to com- puter modeling. • SEBR does not shift the decision-making process from the study committee to the SEBR group. • SEBR does not diminish the need for expert opinion and scientific judgment.

CONCEPTUAL FRAMEWORK FOR DRI DEVELOPMENT 41 tions are formulated, a critical step involves refining the questions through discussions with the study committee and other designated individuals to ensure that the intended outcome will be achieved. The various steps in SEBR are described below: 1. Formulate and refine the question.  The first critical step is to de- velop the question(s). Individuals requesting the SEBR must clearly and specifically define the question of interest. An example would be: What is the efficacy or association of omega-3 fatty acids in preventing incident CVD outcomes in people without known CVD (primary prevention) and with known CVD (secondary prevention)? During this phase, questions are often refined and clarified by it- erative discussions between the group requesting the SEBR and the SEBR panel before the start of the review. Critical components of the questions can be summarized by PICO—population, interven- tion, comparator, and outcome. In the omega-3 fatty acids example, the population was primary prevention and secondary prevention. The intervention was α-linolenic acid (ALA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA). The comparators were diet and oils containing non-omega-3 fatty acids. The outcome was all-cause mortality, myocardial infarction, stroke, or sudden death. 2. Develop a search strategy.  The next step is to develop a search strategy. This includes establishing a cutpoint (stop date) for the lit- erature search and identifying relevant databases and other sources of literature (e.g., MEDLINE, PreMEDLINE, Embase, Cochrane Library, Biological Abstracts, Commonwealth Agricultural Bureau of Health, reference lists of reviews/primary articles, suggestions from domain experts). 3. Identify inclusion/exclusion criteria.  The next step is to develop inclusion/exclusion criteria for the studies. For the example of omega-3 fatty acids and CVD, the criteria were (1) literature pub- lished in English; (2) both experimental and observational studies; (3) studies with original outcome data; (4) studies that evaluated all potential sources of omega-3 fatty acids in the diet; (5) studies with at least five subjects; and (6) study duration of at least 1 year. 4. Retrieve and screen relevant literature.  This is a critical, time- consuming step in the process. For omega-3 fatty acids and primary prevention cohort studies alone, 7,464 studies were identified and their abstracts screened, 768 papers were retrieved (eliminating papers that did not meet the predetermined inclusion criteria), and 118 were identified as being potentially relevant to CVD outcomes. In the end, 39 studies uniquely filled all of the inclusion criteria. 5. Grade studies.  Studies are graded on methodological quality, appli-

42 THE DEVELOPMENT OF DRIs 1994–2004 cability, and overall effect. For methodological quality, an A would be given to studies that have the least bias with valid results (e.g., placebo-controlled, blinded, randomized controlled trial); a B study might be a study that is susceptible to some bias, but not sufficient to invalidate the results (e.g., a study where fish oil was given, but not a placebo); and a C study would be one where there was significant bias that could potentially invalidate the results. With respect to ap- plicability, the criteria are determined by the characteristics of the population studied (e.g., [1], sample represents target population, includes both genders, wide age range, and other features of the target population [diet]; [2], sample represents a relevant subgroup of the target population [females], but not the entire population; and [3], sample represents a narrow subgroup of subjects only and is of limited applicability to other subgroups [females between 20 and 30 years of age]). In terms of the overall effect, the categories generally are (1) ++, clinically meaningful, beneficial effect demonstrated; (2) +, clinically meaningful, beneficial trend exists but is not conclusive; (3) 0, clinically meaningful effect not demonstrated or unlikely; and (4) –, harmful effect identified or likely. 6. Extract/summarize data.  The data are extracted from the literature and summarized in tables. Table 2-2 gives an example of a table generated. 7. Present report.  The final step is to present the report to the group requesting the SEBR. Differences Between SEBR for Clinical Medicine and SEBR Needed for DRI Development The data available to answer clinical medicine questions tend to be more straightforward than issues related to nutrient requirements. For drugs (e.g., statins), in the simplest case, one group is given the active drug and the other group a placebo, and outcomes are assessed. Primary and sec- ondary outcomes are defined ahead of time, and an answer is obtained. Questions for DRI development tend to be more complex and nu- merous. The difference from clinical medicine is that an answer must be reached in spite of a high degree of uncertainty. In the medical literature, there would be a cumulative meta-analysis, with one study adding on to the next one until there was enough power to make a determination, then a clinical recommendation would be made. In the case of DRIs, there is a high degree of variability among nutrition studies (e.g., for omega-3 fatty acids, some studies use fish oil supplements, some use fish, etc.; in many cases, the outcome measures or characteristics of the populations were different). The result is that there may be a considerable number of data on a specific

CONCEPTUAL FRAMEWORK FOR DRI DEVELOPMENT 43 nutrient, but cumulatively the data do not lend themselves to merging for increased statistical power. Special Considerations for Evaluating Evidence for DRI Development A number of realities about the nature of the currently available evi- dence in the field of nutrition are important to DRI development overall and specifically to SEBR. Information about the background diet (e.g., fish and omega-3 fatty acids) has already been mentioned. Another is the nutrient status prior to the intervention. If one is supplementing the diet, the effect is going to be different depending on whether the person starts out deficient or nutrient adequate. Examples from the literature include iron and chromium. Adequacy of nutrient stores also needs to be evaluated because someone who is nutrient replete responds differently to supplemen- tation than does someone who lacks adequate stores (e.g., vitamin A). Fur- thermore, changes in body weight can confound the outcome; these changes commonly occur when the research protocol manipulates fat, carbohydrate, and protein intake and does not adjust the calorie intake. One needs to consider the bioequivalency of different forms of nu- trients. There can also be altered bioavailability due to the co-ingestion of different foods. Other concerns are altered bioavailability from food processing, drug–nutrient interactions, and nutrient–nutrient interactions. Other issues to be considered are altered absorption efficiency due to ha- bitual intake, physiological status, and nonfood contribution of nutrients. Multiple effects of a single nutrient and one nutrient potentially masking the effects of deficiency of a second nutrient are also concerns. Differ- ent nutrient bioavailabilities from food and synthetic forms are becoming more important with the high rates of supplementation and nutrients being added to foods or drinks. Finally, we are just learning how to deal with genetic polymorphisms in nutrient metabolisms, as well as how to ad- dress nutrient requirements for essential versus nonessential nutrients and energy-containing versus non-energy-containing nutrients. These are likely to require different approaches to the SEBR process. Implications SEBRs represent a rigorous process of systematically compiling scien- tific evidence. They minimize bias through comprehensive and reproducible searches for and selection of articles. They provide rigor by assessing the methodological quality of the included studies and the overall strength of the body of evidence. They enhance transparency through detailed docu- mentation of decisions. They provide useful inputs into program and policy decision-making processes. There is tremendous potential for SEBR to aid

TABLE 2-2  An Example of a Table Summarizing Data Meeting Inclusion Criteria

Table 1. Secondary Prevention Randomized Controlled Trials of Omega-3 Fatty Acid Supplements on Various Cardiovascular Disease Outcomes*

ALA vs. EPA + DHA

Singh 1997, India (16). Omega-3 groups: N = 120, mustard oil (ALA) 2.9 g/d, and N = 122, EPA + DHA (1:1) 1.8 g/d; control: N = 118, non-oil placebo; duration 1 year. All-cause mortality: nd. Cardiac death: control event rate 22 percent; RR 0.61 (0.34–1.1) and 0.52 (0.29–0.95). Sudden death: 6.6 percent; RR 0.25 (0.05–1.1) and 0.24 (0.05–1.1). Non-fatal MI: 25 percent; RR 0.59a (0.35–1.0) and 0.52 (0.3–0.9). All strokes: nd. Quality: C.

EPA + DHA

Marchioli 2002, Italy (12). Omega-3 group: N = 5665, EPA + DHA (1:2) 0.85 g/d, ±Vit E; control: N = 5658, ±Vit E; duration 3.5 years. All-cause mortality: control event rate 9.8 percent; RR 0.79a (0.66–0.93). Cardiac death: 5.4 percent; RR 0.65a (0.51–0.82). Sudden death: 2.7 percent; RR 0.55a (0.39–0.77). Non-fatal MI: 4.1 percent; RR 0.91 (0.70–1.2). All strokes: 1.4 percent; RR 1.2 (0.81–1.9). Quality: B.

Nilsen 2001, Norway (13). Omega-3 group: N = 150, EPA + DHA (1:2) 1.7 g/d; control: N = 150, corn oil 1.7 g/d; duration 1.5 years. All-cause mortality: control event rate 7.3 percent; RR 1.0 (0.45–202). Cardiac death: 5.3 percent; RR 1.0 (0.39–2.6). Sudden death: nd. Non-fatal MI: 10 percent; RR 1.4 (0.75–2.6). All strokes: nd. Quality: B.

Leng 1998, Scotland (14). Omega-3 group: N = 60, EPA 0.27 g/d; control: N = 60, sunflower seed oil 3 g/d; duration 2 years. All-cause mortality: control event rate 5.0 percent; RR 1.0 (0.21–4.8). Cardiac death: nd. Sudden death: nd. Non-fatal MI: 6.7 percent; RR 0.75 (0.18–3.2). All strokes (non-fatal): 1.7 percent; RR 3.0 (0.32–28). Quality: A.

Sacks 1995, U.S. (15). Omega-3 group: N = 31, EPA + DHA (3.2) 4.8 g/d; control: N = 28, olive oil; duration 2.4 years. All-cause mortality: control event rate 3.6 percent; RR 0.3 (0.01–7.1). Cardiac death: 3.6 percent; RR 0.3 (0.01–7.1). Sudden death: nd. Non-fatal MI: 7.1 percent; RR 0.45 (0.04–4.7). All strokes: 0 percent; RR 2.7 (0.12–64). Quality: B.

*Abbreviations: N = number of subjects; RR = relative risk; CI = confidence interval; g/d = grams per day; nd = no data. Values in parentheses after each RR are 95% CIs. aAdjusted for main confounders as reported in article.
NOTE: References cited are not included in the reference list at the back of the report. They may be found in the original source.
SOURCE: Wang et al. (2004).

future IOM DRI study committees in formulating and revising DRI values. The exact role they will play is difficult to predict. It is likely to vary depending on the specific nutrient of interest. It is hoped that SEBR will be used as a tool to facilitate the process of developing and revising the DRI values.

RISK ASSESSMENT: IS IT A RELEVANT ORGANIZING STRUCTURE?

Presenter: Elizabeth A. Yetley

This presentation considers whether it would be useful to extend the risk assessment framework from its current use as an approach for deriving ULs to future use in deriving the EARs as well as the AIs and AMDRs.

Risk assessment is not a specific methodology, but rather an organizing framework for scientific assessments. It is the scientific arm of a triad of functions that constitute risk analysis (Figure 2-9). Another arm, risk management, reflects those tasks carried out by the users and sponsors of the risk assessment outcomes. There is also a risk communication arm.

FIGURE 2-9  Risk assessment as part of the risk analysis triad. The triad links risk management (users, sponsors), risk assessment (scientists, the scientific process), and risk communication.

CONCEPTUAL FRAMEWORK FOR DRI DEVELOPMENT 47 Risk Assessment: A Systematic Process The conceptual underpinnings of risk assessment stem from a 1983 National Research Council (NRC) publication on how scientific delibera- tions should be organized to assess risk in a manner that meets user/sponsor needs while maintaining the scientific integrity of the assessment (NRC, 1983). The risk assessment framework includes guidance on managing interactions between the sponsors or users of the risk assessment and the scientists conducting the risk assessment. The current ULs issued by the IOM used a nutrient risk assessment framework. The United Kingdom and European Commission (EC) used a similar approach to derive their upper level reference values for nutrients. The approach evolved from applications originally developed for chemi- cal contaminants and subsequently modified for application to microbial pathogens. Several characteristics associated with risk assessment are described in the 1983 and subsequent NRC documents. First, no decision is often not a viable option from the perspective of protecting public health. It was deemed better to have an informed decision based on the best scientific expertise, even if not perfect, than no decision that by default provided no guidance for evaluating the status quo. This is often also true for essential nutrients. Second, as it developed the risk assessment framework, the NRC rec- ognized that it would usually have incomplete data, and that uncertainties would need to be dealt with through documentation and use of expert scientific judgment. This need for dealing with evidentiary uncertainties is also true in deriving nutrient reference values. Third, the NRC focused on the needs of users/sponsors in developing the risk assessment framework. Users need science that addresses their information needs and is also presented in a manner that allows them to readily integrate results into program or policy initiatives. This requires emphasis on a mutual understanding between sponsors and risk assessors of user information needs and on transparency and documentation of the series of decisions made in a risk assessment. At the same time, the NRC wanted to protect the scientific reviewers from undue stakeholder pressure by ensuring independence of the scientific evaluations. The NRC developed a systematic process (Figure 2-10) that goes through a series of evaluations and decision steps. Within each of these steps is an articulation of the basis and rationale for each type of decision. This helps the user understand the rationale for decisions and therefore enhances usability for a broad range of applications. One component of the risk assessment process defines the rules of engagement between the sponsors/users of the risk assessment and

the scientists conducting the actual assessment (Figure 2-11). The risk assessment itself is pictured in the central area of this figure. The scientific assessment would be equivalent to a DRI study committee. The risk assessment process differentiates between the roles and responsibilities of the risk assessment study committee and the sponsors who have requested the risk assessment. There is also emphasis on ensuring that the results of the scientific assessment are presented in a manner that enhances their usefulness to sponsors and other users.

FIGURE 2-10  Steps in risk assessment: (1) hazard identification (literature review); (2) dose–response assessment (yielding the reference value); (3) intake assessment (prevalence of intakes outside the reference value); (4) risk characterization (public health implications). NOTE: Dark arrows represent major pathways; light arrow represents a pathway used less often, but feasible.

FIGURE 2-11  Risk analysis: How to increase usability while maintaining scientific integrity? The risk manager (sponsor/user) carries out problem formulation, defining populations, applications, the nature of endpoints, types of expertise, and public inputs; the risk assessor conducts the scientific assessment (hazard identification, hazard characterization, exposure assessment, and risk characterization covering prevalences and public health; science, not policy); the risk manager (sponsor/user) then carries the results into policy, research, and educational applications such as school lunch, nutrition labeling, research design and evaluation, patient counseling, and consumer education. Problem formulation and risk characterization are the critical interaction components.

The sponsors are responsible for defining questions that need to be addressed before the risk assessment process starts. This is called problem formulation. For example, in the case of the DRIs, the problem formulation statement could specify the populations to be covered (e.g., healthy versus general populations, individuals versus groups). Public input may be solicited during this process. The sponsors, with or without public input, would identify for the risk assessors the questions and issues that the risk assessment should address. At this point, there often needs to be dialogue between the sponsors and risk assessors to ensure that the scientists understand the information needs of the users or sponsors and that the risk assessors have the opportunity to suggest revision and clarification of the problem formulation questions, if needed.

As the scientific assessment is completed, its results need to be expressed in a manner that will enhance their usefulness to sponsors and users. This includes full development of the last step of the risk assessment framework, the so-called risk characterization step. For example, the committee would describe the nature of the risks associated with inadequate or excessive intakes and the percentage of the population exceeding the UL or failing to meet the reference value of adequacy. The risk characterization step also describes the public health implications of these deviations. For example, if 20 percent of children are consuming some nutrient above the UL, is that likely a serious public health problem? However, the risk assessment should not stray into policy recommendations in the risk characterization step because this could cast doubt on the scientific independence and integrity of the risk assessment. The risk assessors would indicate that a certain segment of the population appears to have a public health risk based on the intake and the reference value and would describe the public health implications of this deviation. However, the risk assessment study committee would not recommend public health actions. For example, it would be inappropriate for the risk assessment study committee to conclude from its evaluation that the national food supply should be fortified or that school lunch standards should be changed.

These results of the risk assessment need to be described and documented in a form that enhances usability to the risk manager, the sponsor, and the users. Interestingly, the two steps designed to maximize the usefulness of a risk assessment review, the problem formulation and the risk characterization steps, are probably the two steps in the process that

50 THE DEVELOPMENT OF DRIs 1994–2004 have been least developed, although they are critical for bridging the gap between sponsors and scientists. In brief, risk assessment is used to derive science-based evaluations. It is recognized that the evaluations may have public health implications, although they often need to be made with evidentiary uncertainties. Risk assessment focuses on user needs by clarifying the information needs, docu- menting decisions, and describing the public health implications of popu- lation deviations from reference values in the risk characterization step. At the same time, the vertical lines between the groups in Figure 2-11 are designed to protect the scientific integrity of the scientific assessment once user needs are understood by the scientific assessors. Nevertheless, the risk assessors may need to reinitiate dialogue with the sponsors/users oc- casionally during the process because unanticipated challenges may require further clarification. Applying the Risk Assessment Framework to Indicators of Adequacy When a risk assessment framework is applied to a new discipline, the basic conceptual framework generally stays the same, but some adaptations and redefinitions of terms are often needed to ensure relevance to the new disciplinary application. This applies to the use of risk assessment frame- works for indicators of nutrient adequacy. One benefit of a risk assessment framework is that the usability of the reports is enhanced because of the focus on meeting user needs. Another benefit is the enhancement of the quality of the scientific assessments. If, for example, study committees used the same organizing framework to derive both adequate and excessive intakes, it would allow side-by-side comparisons of the evaluations and decisions for both as the scientists go through the decision-making processes. This could help in identifying unintended inconsistencies between decisions resulting from evaluations of adequate and excessive intakes. It could also allow concurrent examination of prevalences above the UL and below the reference value for adequacy within and across life stage groups for a given nutrient. This is potentially important to users who are frequently faced with balancing the conflict- ing needs of low-intake consumers with the potential for excessive intakes among other consumers. In the classic risk curve for the DRIs (Figure 2-12), a two-tailed risk curve illustrates the increased risks of adverse effects associated with both excessive intakes and inadequate intakes. The general risk assessment com- munity is already moving from a one-tailed evaluation of adverse effects associated with toxic levels of intake to concurrently looking at a two-tailed risk curve that examines the potential for unintended risks associated with actions that would reduce access to foods containing toxic contaminants.
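One way to visualize the two-tailed curve just described is to treat the risk of inadequacy as the probability that an individual's requirement exceeds intake and the risk of excess as the probability that intake exceeds an individual's adverse-effect threshold. The sketch below does exactly that; the distributions and every parameter are purely illustrative and are not drawn from any DRI report.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters only (arbitrary intake units), chosen to produce
# the familiar two-tailed shape; not taken from any DRI report.
EAR, REQ_SD = 100, 15            # requirement distribution (adequacy side)
EXCESS_MID, EXCESS_SD = 600, 80  # distribution of adverse-effect thresholds (excess side)

intakes = np.linspace(0, 900, 10)
risk_inadequacy = 1 - norm.cdf(intakes, loc=EAR, scale=REQ_SD)    # P(requirement > intake)
risk_excess = norm.cdf(intakes, loc=EXCESS_MID, scale=EXCESS_SD)  # P(threshold < intake)
total_risk = risk_inadequacy + risk_excess                        # two-tailed curve

for x, r in zip(intakes, total_risk):
    print(f"intake {x:5.0f}: risk {r:.2f}")
```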

FIGURE 2-12  Levels of risk associated with change in intake from inadequate to excessive intake. Risk (low to high) is plotted against increasing intake.

For example, when considering the risks associated with contaminants in fish or fruits and vegetables, public health considerations may also benefit from a concurrent evaluation of risks associated with decreased consumption of these foods. These two-tailed evaluations of risk are called risk/risk assessments or risk/benefit assessments. Either one of these terminologies and concepts could apply to nutrients because risks are associated with both inadequate and excessive intakes.

The first step in a risk assessment model is hazard identification (see Figure 2-10). “Hazard” is defined by the World Health Organization as “an inherent property of the substance that causes harm” (IPCS, 2004). This clearly relates to harm associated with excessive nutrient intakes. However, it does not apply to risks associated with nutrient inadequacies because an adverse effect associated with an inadequate nutrient intake is not due to an inherent property of that nutrient. It is simply because the nutrient is lacking. Thus, the terminology would need to be revised to apply risk assessment approaches to derivation of reference values for nutrient adequacy. For example, for nutrient risk assessment purposes, the terminology might be changed to “identification of indicators of adequacy (or inadequacy) and hazard.”

As indicated, the systematic risk assessment approach goes through a series of four steps, each with a series of mini-steps and decisions. For each

52 THE DEVELOPMENT OF DRIs 1994–2004 step, the decisions made for ULs are generally similar to the decisions made for adequacy. Thus, the risk assessment framework could easily be adapted to both sides of the equation. The first step, hazard identification, is basically the literature review. In general, the nature of the questions for the indicators of hazard for the ULs is the same as that for the indicators of adequacy (e.g., intake/biomarker, biomarker/effect, intake/effect relationships). Both evaluations are focused on identifying dose–response effects and factors that affect dose–response, and both evaluations need this information across a range of life stage groups. The second step, the dose–response assessment, is the step where the reference values (e.g., EARs, ULs) are derived. In deriving the DRIs, a threshold model of dose–response was assumed for both adequate and ex- cessive intakes. In both cases, the study committees frequently lacked good dose–response data. In the case of the UL, if they lacked dose–response data, they used a NOAEL or a LOAEL as the basis for deriving the UL. In the case of the adequacy evaluations, if the study committees lacked dose–response data, they derived an AI. In both cases, study committees preferred full distributions of dose–response data: On the UL side, it is called the benchmark dose; on the adequacy side, the EAR/RDA distribu- tion curves. For both, there have been questions about whether a threshold model always works. In terms of adjustments to the dose–response relationship, bioavail- ability and bioequivalency issues relate to risks associated with both inad- equate and excessive intakes. For both EAR or AI and UL, the traditional adjustments for bioavailability or bioequivalency for adequate intakes may lack relevance to the UL. For example, the EAR/RDA for iron adjusts for differences in bioavailability from food sources based on dietary intakes of heme/nonheme iron sources. However, with the increasing use of fortified foods and dietary supplements, a more appropriate bioavailability adjust- ment might be a bioequivalency type of adjustment similar to that used for retinol equivalents. Study committees would likely notice these potential incompatibilities if the evaluations for both adequate and toxic intakes were compared in a side-by-side risk assessment framework. Additionally, the same methodological biases in the studies used to evaluate risks associ- ated with both inadequate and excessive intakes likely occur, so a consistent framework for analyzing both makes sense. Uncertainty assessments are a critical component in the dose–response assessment step of a risk assessment framework. Derivations of reference values for both inadequate and excessive intakes must deal with uncer- tainties in the available evidence and describe the nature and seriousness of those uncertainties in their texts. In some cases, an uncertainty factor is used to lower the observed effect level to give a UL. The use of uncer- tainty factors was relatively rare in deriving reference values for adequacy.
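A minimal sketch of the UL arithmetic described above, in which the observed effect level is lowered by an uncertainty factor; the NOAEL/LOAEL values and the uncertainty factor used here are placeholders, not values from any DRI report.

```python
def derive_ul(noael=None, loael=None, uncertainty_factor=1.0):
    """Divide the no-observed-adverse-effect level (or, if only a LOAEL is
    available, the lowest-observed-adverse-effect level) by an uncertainty
    factor reflecting the quality and relevance of the evidence."""
    point_of_departure = noael if noael is not None else loael
    if point_of_departure is None:
        raise ValueError("A NOAEL or a LOAEL is needed to derive a UL.")
    return point_of_departure / uncertainty_factor

# Placeholder numbers only: a LOAEL of 5,000 (in the nutrient's units) with a
# combined uncertainty factor of 5 gives a UL of 1,000.
print(derive_ul(loael=5_000, uncertainty_factor=5))
```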

CONCEPTUAL FRAMEWORK FOR DRI DEVELOPMENT 53 However, at least in the case of vitamin D, the study committee multiplied the observed intake of vitamin D by 2 to raise the AI above the observed dose–response relationship to account for uncertainties in background exposure to sunlight and study design inadequacies. Whether dealing with risk associated with inadequate intakes or excessive intakes, uncertain- ties need to be documented and are generally dealt with in a manner that errs on the side of public health protection. A risk assessment framework identifies the need to deal with uncertainties in the available evidence, but does not specify a methodology to deal with the identified uncertainties. This allows maximum flexibility in applying this organizing framework to different situations. Establishing reference values for both inadequate and excessive intakes also often involves extrapolations from a studied group (e.g., adults) to an unstudied group (e.g., children) because data may be available for some, but not all, of the life stage groups for which DRIs are established. The default for extrapolations for ULs was reference body weight. The default for EARs/AIs was metabolic body weight. There is no acknowledgment of, nor justification for, the use of different defaults for these two types of reference values. If there was a side-by-side common risk assessment frame- work, these types of differences would likely be noted and either justified or changed. The third step, intake (or exposure) assessment, uses population-based intake data to estimate the prevalence of intakes above or below the refer- ence values. Biomarkers of nutrient status, when available, can also be used to estimate prevalence of inadequate or excessive exposures. The same anal- ysis is often used for both types of reference values. The fourth step is risk characterization, which is the most important step from a user perspective. This is where the public health consequences of not meeting an EAR/RDA or AI and exceeding a UL are discussed. Deviations from reference values for special groups are also described in this section. Implications An advantage of using a risk assessment framework is that the science of risk assessment has been moving forward. The DRI development process can benefit from these efforts. For example, risk assessors increasingly have been using probabilistic models to move from qualitative to quantitative risk assessments. They have been working to establish better defined criteria for dealing with different types and sources of uncertainty. They are start- ing to use statistical models to simulate dose–response curves from multiple studies that individually lack sufficient data to produce a dose–response curve. They are also learning how to adjust coefficients of variability to ac- count for altered dose–response curves associated with polymorphisms that alter nutrient requirements or toxicity among population groups.
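Returning to the intake assessment step described above, the same calculation serves both tails: the share of usual intakes below the EAR (the EAR cutpoint estimate of inadequacy applied earlier to vitamin C) and the share above the UL. The sketch below uses a simulated intake distribution and arbitrary reference values, purely for illustration.

```python
import numpy as np

def prevalence_outside_reference_values(usual_intakes, ear, ul):
    """Fraction of usual intakes below the EAR (EAR cutpoint estimate of
    inadequacy) and above the UL (potential excess)."""
    intakes = np.asarray(usual_intakes, dtype=float)
    return {"below EAR": float(np.mean(intakes < ear)),
            "above UL": float(np.mean(intakes > ul))}

# Simulated usual-intake distribution (arbitrary units), for illustration only.
rng = np.random.default_rng(2)
usual_intakes = rng.lognormal(mean=np.log(120), sigma=0.4, size=50_000)

print(prevalence_outside_reference_values(usual_intakes, ear=80, ul=350))
```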

In summary, the risk assessment organizing framework is probably relevant to the development of reference values related to nutrient adequacy. It provides a systematic delineation of decision steps that enhances transparency and therefore increases usability. A risk assessment organizing framework could help coordinate the decisions related to adequacy with those related to excessive intake, thus reducing the likelihood of unintended inconsistencies or consequences that create challenges for users. The risk assessment framework offers the flexibility to tailor the approach to different types of applications without losing the benefits of the organizing framework. It emphasizes enhanced documentation and transparency and takes advantage of evolving scientific tools.

DISCUSSION: SYSTEMATIC EVIDENCE-BASED REVIEW; RISK ASSESSMENT

Discussant: Sanford Miller

The session moderator, Dr. Stephanie Atkinson, introduced the discussant and invited him to offer an opening remark.

Discussant Opening Remarks

Dr. Miller opened with the general observation that although it seems we are asking the same questions we asked years ago, we are learning to ask better questions. He noted that it is not surprising that study committees appeared to derive their own approach to the problems they faced, given the lack of experience, structure, or formal guidance when the DRI process began. He suggested that risk assessment and SEBR together provide an excellent framework to organize the process and the questions to be addressed, as well as structure to allow transparency on how conclusions were reached or the rationale for why a modified approach was used by a particular committee.

Dr. Miller then focused on the nature of the relationship between risk and dose–response. The two risk curves associated with nutrients are composed of families of curves, which in turn represent the components of the metabolic regulatory process for absorption and excretion. If there is uncertainty around the curves, they will overlap, suggesting the nutrient is unsafe at the same time that it is required. For this reason, it is critical to carry out basic research focused on the process by which a nutrient is used and regulated in order to reduce the level of uncertainty.

General Discussion

Drs. Lichtenstein and Yetley joined the discussant on the dais, and a brief group discussion took place. They agreed that the DRI approach was a vast improvement over the previous RDA approach, and that these reference values should improve as science evolves and experience is gained. It was noted that the past 10 years of experience have led to a greater understanding of the breadth of potential uses for the DRIs, which is important when making future revisions.

When the discussion was opened to the workshop audience, comments were offered on several topics, including SEBRs and risk assessment.

SEBRs

One audience member suggested that although SEBRs are important, they may have limitations. They add time as well as cost because the individuals performing the SEBRs are not likely to be volunteers. Furthermore, the development of DRIs inherently requires scientific judgment, which draws on a wide range of information about the nutrient, whereas the SEBR focuses on a limited group of questions. A participant responded that individuals commissioned to generate the SEBRs are not charged with offering the scientific judgment necessary for deriving the DRIs. The advantage of study committees working with an evidence-based practice group is that once the relevant questions for the targeted review are defined by the study committee, the practice group can examine the evidence in an objective manner. Database limitations for most questions and the number of questions that can be addressed for each nutrient mean that, ultimately, the judgment of the study committee is required. Thus, SEBRs would not be used to derive DRI reference values; rather, they would be used as one source of data for deriving the reference values.

A commenter remarked that SEBRs can be carried out by either paid panel members or unpaid volunteers who “work outside of their day jobs.” She then inquired about the professional expertise needed for these SEBR panels as opposed to the DRI study committees, and about the rewards for unpaid volunteers. A participant responded that SEBRs are not work carried out in spare time. They must be done in a consistent manner and require considerable amounts of time, focus, and resources. Regarding why people are willing to take part in these activities, the discussant suggested that those who believe nutrition is fundamental to reducing the risk of disease will feel a responsibility to participate.

Another participant noted that SEBRs do not replace the need for a DRI study committee, but instead serve as a tool to help document, collate, and synthesize the scientific evidence. This tool could lessen the burden of the study committees and allow them to focus on the challenges of defining DRI values. Nor is the SEBR competing with the risk assessment framework. The first step in the risk assessment approach is a literature review; if SEBR were used, it would feed into the larger risk assessment activities. For this type of review, the study committee would help to define the inclusion/exclusion criteria for the literature, the endpoints to be reviewed, and other considerations. In fact, the use of a risk assessment framework to help organize activities could make more efficient use of staff time and volunteer time. Outside help with the literature review could relieve the study committee members, who are volunteers, of the burden and allow them more time for other needed deliberations.

Additional Comments

Other comments on risk assessment as well as SEBRs included the following: the risk assessment framework requires that uncertainties be dealt with, but the methods to be used are unspecified and can be determined case by case; the process of SEBR is robust and not limited to a particular kind of study design; SEBRs can expose gaps in knowledge; and not every question addressed by a study committee would require SEBR.

With respect to the DRI framework, an audience member suggested that the EAR/RDA is related to measures of central tendency whereas the UL is not. He postulated that the UL is more analogous to the AI in that the AI is above the amount needed while the UL is below the amount to be avoided. Furthermore, it would be possible to define a level for adequacy in a manner similar to that used for developing ULs. He said the value of doing so and whether the data would support it are important discussion points.

One person noted that death from disease had not been mentioned as a marker for chronic disease risk in the DRI process, even though there may be a reduction in death from disease associated with some nutrients. A participant responded that in the case of DRIs, death is probably not a preferable measure as compared with appropriately validated biomarkers for the advent of the disease state.

The final comment of the discussion related to the value of testing intake recommendations as they are being developed. An audience member used the example of a reasonableness check for iron recommendations and its ability to better inform the process and thus lead to better outcomes.

PANEL DISCUSSION: IN WHAT WAYS COULD THE CONCEPTUAL FRAMEWORK BE ENHANCED?

Panel Members: Cutberto Garza, Mary L’Abbé, Irwin Rosenberg, Barbara Stoecker (later joined by Janet King and George Beaton)

The session moderator, Dr. Stephanie Atkinson, introduced the panel members and began the discussion by asking each panelist to offer an opening remark.

Panelist Opening Remarks

Dr. Garza highlighted three conclusions that he drew from the day’s discussions. One was the need to “keep it simple.” At the same time, he emphasized that we need to be more sophisticated so that finding the simple solution does not result in the wrong answer. He cautioned that this sophistication would be needed especially if we move from focusing on preventing deficiency and diet-related diseases to focusing on enhanced performance. In this case, it may not be reasonable to expect (simplistically) that the frameworks best equipped to deal with deficiency, chronic disease, and enhanced performance will be the same. Second, he suggested harmonizing approaches for deriving the EAR and the UL to allow greater transparency globally and to enhance the rigor of the process, regardless of the degree of precision needed. He made the analogy to the hazard analysis and critical control points approach used to ensure food safety, where control points are identified in the process. Third, the dynamic nature of the field needs to be recognized, and the DRI framework should reflect that dynamism. He emphasized that the dynamism will dictate the type of evidence collected, the criteria for deciding when the numbers need to be revised, and even the format in which the DRIs are published.

Dr. L’Abbé touched on the importance of the underlying theme of transparency, specifically from the perspective of a government agency that uses the DRIs in a number of applications (e.g., food fortification, product evaluations, standards setting). She then underscored the need for DRIs to be relevant to public health risk. Finally, she pointed out that to apply the values effectively, regulators and government agencies need to understand the process and the approach to decision making used by the study committees. Conversely, sponsoring government organizations bear the responsibility of defining the general questions to be answered through the process of DRI development if the end result is to be useful.

Dr. Rosenberg remarked that realizing the conceptual framework that we seek holds considerable challenges. Essential to our success will be a consensus on the overall goal of the DRIs. If the goal is to sustain the health of the North American population, it must be recognized that this is not the only sustaining pillar of public health. Others include the dietary guidelines and relevant reports from the Office of the Surgeon General. He cautioned that there is risk in using DRI values to cross into dietary guidelines; in turn, this can spawn some confusing concepts, such as semi-quantitative AMDRs for nonessential nutrients. Moreover, despite recent assertions to “change” our paradigm to include chronic disease prevention, the goals for the reference values issued by the NRC and then the IOM have remained remarkably stable since 1941. These dietary recommendations have always been more than minimal allowances and have by implication included prevention of chronic disease as part of the definition of good health maintenance. As a last point, Dr. Rosenberg addressed the issue of multiple endpoints. The current approach for DRIs uses different endpoints for different population groups (children, pregnant women, sometimes the elderly). However, the argument that study committees should issue, and users choose, different endpoints for the same group would lead to misunderstandings and undermine the integrity of the process.

Dr. Stoecker noted that the public has a false sense of confidence about the knowledge available for setting the DRIs. She commented that, of course, the data on many nutrients are scarce. She noted particularly that dose–response data at intakes near the probable EAR are needed, but may be difficult to obtain. Dr. Stoecker supported the use of the risk assessment approach as an organizing structure and agreed that SEBRs organize, document, and encourage transparency of the process. Furthermore, she suggested that nutrient requirements and chronic disease prevention could be dealt with in separate reports because of the multiple factors that affect the chronic disease endpoints compared with the nutrient requirement endpoints. She agreed with the conclusion that a single endpoint should be used for age/gender groups.

General Discussion

Cross-Panel Discussion

The cross-panel discussion covered several topics. It began with several comments on endpoints.

Endpoints

One participant suggested that a variety of endpoints are typically expected to be considered using all of the emerging science. Then, based on clear criteria, the endpoint to serve as the basis for the reference value would be selected. However, the criteria for selecting an endpoint have not been clear. Rather, endpoints seem to have been determined primarily on the basis of data availability, which does not necessarily reflect health significance. Another participant noted that she found comfort in discovering that often the same “ballpark” value could be derived using any one of a number of endpoints. She also suggested that chronic disease protection might result in a higher reference value. Another panel member responded that although there is likely to be substantial variation among nutrients, it is not clear that the amount of a nutrient required to reduce the risk of chronic disease is necessarily going to be higher than the amount to achieve some other endpoint.

In response to a comment on the apparent inconsistency of the severity or the public health significance of the endpoints used for ULs, a participant noted that in setting food fortification limits in Canada, information in the text of the DRI reports was used to elucidate the relative severity of the adverse effect and the margin of safety between the RDA/AI and UL. She further noted that challenges were presented by nutrient substances such as saturated fats or trans fats, which do not fit the classic threshold models for a UL. Another participant said we often fail to look at “population-attributable benefits” and asked: Is there a situation in which a UL could be set too low because a population-attributable benefit of greater public health significance than the mild physiological discomfort used to establish the UL was not taken into account?

One participant suggested that from his experience with ULs, the problem was the need to adapt a toxicological model appropriately for nutrient considerations, but there is nonetheless considerable opportunity for parallelism. He asserted that consistency is important, but should not always be expected because key considerations may vary by nutrient and need to be addressed in different ways. Another participant commented that we should not be hobbled by consistency, and it should never preempt scientific rigor.

Precision

A panel member pointed out that if we are clear about the various uses of the reference values, we can better assess the degree of precision needed. That is, we are too often driven by an obsession for the precision that our training requires, but that the use does not demand. Assuming this hurdle is passed, the “biology of the nutrient” is the next component to consider, because the inability to specify the biological workings of the nutrient would be limiting in establishing meaningful reference values. From this point, the instruments and organizing approaches we have at our disposal to address the tasks become the focus.

Ranking evidence

In response to the suggestion that ranking evidence was a considerable leap for study committees, a panel member said study committee members did discuss the criteria for judging the evidence and did reject studies, so ranking evidence was not necessarily a challenge. Rather, a major shortcoming in the past was the failure to document discussions and the decision-making process.

Open Discussion

The cross-panel discussion was followed by a wide-ranging discussion between audience members and the panel on topics such as the appropriateness of AIs, limited data, updating DRIs, and the interest in harmonization. An audience member suggested a focus on terminology, asking participants to consider terms such as “critical effect,” as used in toxicology.

Appropriateness of AIs

A participant asked if the AI system should be retained and, if not, whether an EAR should be approximated in some fashion. Another said it should be eliminated. An audience member asked whether any advances were experienced in either the application or communication of the DRIs by incorporating the AI, and whether the AI belongs within the DRI framework. For some nutrients, an EAR could have been derived if a physiological function of the nutrient had been used as the criterion rather than a chronic disease endpoint. A participant reported that study committees were dissatisfied with the advent of the AI because they had been given the task of deriving a value based on an endpoint, and they did not feel confident that an AI was appropriate given this charge. In terms of the evolution of the AI, another participant noted that it was used initially to describe recommended intakes for infants, specifically infants that are exclusively breastfed and are thus the easiest population for which to measure dietary intake. In the case of the breastfed baby, the AI values are probably more solid than for most other groups.

Limited data

The discussion turned to the “no decision is not an option” component of DRI development. One participant expressed concern that numbers developed in the face of limited data appear to take on the same level of significance and credibility as other, more well-founded reference values. He recalled that when AIs were first discussed, there was some mention of adding table footnotes or color codes or using faint print as a way of communicating the level of confidence associated with the numbers. He expressed his opinion that, in the end, it seemed that once a value was listed in a table, no one read the footnotes or went back to read the reports. Another participant commented that we should be clear on how the values will be used, then make decisions with respect to how the data are presented in the table.

In the case of ULs, there was considerable agreement on the need to create a reference value if the data supported doing so because, in the absence of such a value, various misinterpretations have occurred, including the conclusion that there is no risk. One participant suggested that it would be helpful to set out explicitly the disagreements that occur, indicating the level of confidence in the values offered. Others agreed that the approach used to arrive at the ULs should be described more explicitly. There was concern that reaching far beyond the available data to establish a UL is undesirable; there is a distinction between inadequate data and limited data.

Other discussion focused on obtaining data on the distribution of requirements in order to enhance the DRI process. One participant suggested that although it would be prohibitively expensive to explore the nature of the distributions in detail, we need at least general information, such as breadth and skew. In turn, the study committees should be charged with providing such information. Users of the DRIs may prefer to use a value along the distribution curve rather than the RDA for a certain application. Furthermore, it was suggested that the variance of the requirement distribution is not critical to most applications, except those concerned with individuals. The default CV of 10 or perhaps 20 percent may be adequate in terms of the precision needed by practitioners.

Updating DRIs

With respect to the future DRI process, one participant asked how we can ensure corrections are made to “mistakes” recognized after the study committee disbands. Another participant responded that the framework should address this, not only to correct mistakes, but also because new science will inevitably dictate changes in reference values. A panel member further suggested that the ability to issue DRIs in the format of a downloadable loose-leaf-type notebook is important so that any needed changes can be addressed and made accessible without having to engage in an entire review. It was noted that using SEBR as part of the process would facilitate any updating.

One participant remarked that when some of the research questions listed in the report were addressed, this information could serve as a trigger for review. Another participant countered that a nutrient should be reviewed when the nature of the outcome will be important to public health. An audience member asked about the nature of guidelines for prioritizing the nutrients to be updated. A panel member suggested that setting criteria for revision and setting criteria for prioritization were different issues and that criteria for revision would be addressed later in the workshop.

Harmonization

A discussion clarified that the harmonization referred to by Dr. Garza was a harmonization of approach for deriving the numbers rather than of the specific reference values. Dr. Garza suggested that the only values needed globally are the equivalent of the EAR and the UL, and presumably good science could be brought to bear on deriving these. If the approach for deriving these could be harmonized, different countries could, within the context of their own public health protection considerations, derive their own relevant reference values.

Also with respect to harmonization, an audience member suggested that more international expertise should be included in the DRI process so countries could learn from each other, share information, and reduce costs. The benefit to countries not able to mount such a process was also highlighted. Another participant remarked that the EC was working to create a framework for nutrient reference values and harmonize nutrient recommendations across Europe. He suggested benefits in collaboration given the apparent lack of an overall framework for the DRIs. While recognizing the value of collaboration, participants disagreed with the intimation that there was no overall framework for the North American DRI process. Rather, the day’s discussions demonstrated that there was a framework, but it may not have been structured or communicated as well as possible.
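To connect the earlier comments about requirement distributions and the default CV to a concrete calculation, the sketch below shows one way a user might read a value off an assumed requirement distribution rather than defaulting to the RDA. It assumes, purely for illustration, a normal distribution of requirements with mean equal to the EAR and a standard deviation given by the CV; the EAR, CVs, and coverage levels are hypothetical.

    from statistics import NormalDist

    def intake_covering(ear, cv, coverage):
        """Illustrative only: assuming normally distributed requirements with
        mean = EAR and SD = CV * EAR, return the intake expected to meet the
        requirements of the given proportion of a group."""
        return NormalDist(mu=ear, sigma=cv * ear).inv_cdf(coverage)

    ear = 100.0                                # hypothetical EAR, arbitrary units/day
    for cv in (0.10, 0.20):                    # default versus wider variability
        for coverage in (0.50, 0.90, 0.975):   # 0.975 approximates the RDA convention
            print(cv, coverage, round(intake_covering(ear, cv, coverage), 1))

Whether real requirement distributions are close enough to normal for such a calculation to be meaningful is exactly the kind of information, breadth and skew included, that the study committees would need to supply.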

Next: 3 Criteria for Scientific Decision Making: Session 2 »
