3
Criteria for Scientific Decision Making: Session 2¹

During the planning phase of the workshop, Session 2 participants were requested to take into account the same general questions asked of Session 1 participants (see Box 2-1). However, they were specifically asked to each address different decision-making criteria important to the development of Dietary Reference Intakes (DRIs). These criteria are components of the “road map” for DRI development as described earlier (see Chapter 1). Two general questions were asked of each participant: Can we provide more specific guidance to study committees on scientific decision making to help clarify the concepts and tasks and to promote consistency across study committees? Can we provide guidance to study committees on the use of scientific judgment in the face of limited data that would allow such judgment to be more transparent and better documented?

The second session was moderated by Dr. Robert Russell of Tufts University. Dr. Irwin Rosenberg, also of Tufts University and former chair of the Food and Nutrition Board (FNB), opened the session with a talk on the selection of endpoints. Dr. Susan Taylor Mayne, a professor in the Division of Chronic Disease Epidemiology at the Yale School of Public Health, then spoke on the options available in the face of limited dose–response data.

Dr. Stephanie Atkinson, a professor in the Department of Pediatrics at McMaster University, discussed the challenges in addressing extrapolations and interpolations for unstudied groups. Dr. Hildegard Przyrembel, from

¹ This chapter is an edited version of remarks presented by Drs. Rosenberg, Mayne, Atkinson, Przyrembel, Subar, and Garza at the workshop. Discussions are composites of input from various discussants, presenters, moderators, panelists, and audience members.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




the Federal Institute for Risk Assessment in Berlin, spoke on the challenges in addressing adjustment for data uncertainty. Dr. Amy Subar, a research nutritionist at the National Cancer Institute (NCI), gave a presentation on the implications of estimating dietary intake for DRI development. Finally, Dr. Cutberto Garza, provost and dean of faculties at Boston College and former chair of the FNB, closed the session with some highlights of physiological, genomic, and environmental factors that are important to the DRI process. Discussions and comment periods were held throughout the session.

SELECTING ENDPOINTS: WHAT ARE THE ISSUES AND WHAT ARE THE OPTIONS FOR CRITERIA?

Presenter: Irwin Rosenberg

Endpoints play a pivotal role in the DRI process. They are the skeletal structure on which the Estimated Average Requirements (EARs) and Tolerable Upper Intake Levels (ULs) are draped. In essence, they are an expression of the targets or goals of the DRI development process. They should be related to quantifiable or measurable attributes that relate to the overall public health goal of the project. The key concerns from the perspective of selecting endpoints are “adequacy for what ends?” with respect to the EARs and “adverse effects as reflected by what?” for the ULs.

Experience in Selecting Endpoints

Since the 1941 National Research Council (NRC) report (NRC, 1941), the selection of endpoints for nutrient reference values has evolved in response to changes in nutrition science. These advances sometimes revealed associations between an endpoint and diet and at other times identified possible endpoints through better understanding of metabolic and physiological states. Moreover, approaches for endpoint selection have been variable across the study committees responsible for the reference values. This is to be expected, given the differences in the biology and functions of essential nutrients.
Throughout the experience of developing reference values, limited data have often precluded the identification of the most appropriate endpoint for any given age/gender category. This situation in many cases results in the need to extrapolate knowledge about the endpoint used for one group that is better studied (e.g., adults) to a less well-studied group (e.g., children). This is one area of work that needs further exploration (see presentation by Dr. Atkinson in this chapter). Importantly, limited data on dose–response relationships have always
made it difficult to compare, consider, and prioritize endpoints for the purposes of establishing a reference value. The reliance on studies that examined dose ranges not relevant to adequacy considerations is not a desirable solution. Although meta-analysis studies offer some promise and newer strategies are being developed to deal with limited data (see next presentation by Dr. Mayne), the ideal situation is to have better data.

As we have experienced during the past 10 years, endpoints for specific chronic diseases are especially challenging. Although it is desirable to have chronic diseases as the targets for our requirements and thereby reference values, that was possible in only a few instances. However, we need to recognize that the use of chronic disease as a basis for reference values is not a new paradigm. Throughout the history of the Recommended Dietary Allowances (RDAs), chronic disease has been an implicit part of trying to set reference values that were above those necessary to prevent deficiency. The idea of achieving the health of the population—and thereby including the risk of chronic disease as an endpoint—has always been present at some level within the process. Whether this can be done explicitly, as was done for some macronutrients in the last series, will require further discussion.

Finally, I would like to make a few quick points on the lessons we are now considering. First, one question raised to workshop participants is the issuing of multiple reference values based on multiple endpoints for a single age/gender group. This is not the issue of study committees considering multiple endpoints before they select one to serve as the basis for reference values, but assigning them the task of issuing values for the various endpoints. Specifying multiple endpoints for a nutrient within a given age/gender group is not useful or appropriate; in fact, it could be very confusing.
Rather, a single endpoint for the age/gender group should be selected. Second, the question of whether reference values—EARs, RDAs, ULs—are to address essential nutrients only or be expanded to nonessential nutrients, such as fiber and carbohydrate, needs to be considered, particularly in light of our understanding about the interface between the DRI process and food-based dietary guidance.

Selecting Endpoints

In the past, a number of endpoint types have served as the basis for reference values. These have included clinical signs, measures of developmental abnormalities in children, biochemical measures, balance study outcomes, body pool measures, functional measures, and measures of chronic disease risk. A 1994 Institute of Medicine (IOM) document (IOM, 1994) lists the types of evidence that have been used in establishing RDAs. These include
•	biochemical measurements that assess the degree of tissue saturation or adequacy of molecular function in relation to nutrient intake;
•	nutrient depletion and repletion studies in which subjects are maintained on diets containing marginally low or deficient levels of a nutrient, and then the deficit is corrected with measured amounts of that nutrient;
•	balance studies that measure nutrient status in relation to intake;
•	epidemiological observations of populations in which the clinical consequences of nutrient deficiencies are corrected by dietary improvement;
•	extrapolation from animal experiments (although applying animal data to human studies is difficult); and
•	nutrient intakes observed in apparently normal, healthy people, which was one way of arriving at an Adequate Intake (AI).

If one views the stages of nutrient insufficiency as a series or cascade of events that describe the temporal sequence of deficiency of, for example, a given vitamin, the initial stages could be called “subclinical deficiency,” or findings that would occur before symptoms or signs of disease (e.g., low circulating levels of nutrient, decreased tissue levels or desaturation of body pools, and metabolic disruption). The more advanced stages of deficiency, which could be called “clinical deficiency,” encompass the symptoms and/or signs of disease (e.g., reversible changes in the skin and irreversible changes or cell death). An emerging area important to the criteria for selecting an endpoint is the ability to use a biomarker or surrogate as an endpoint reflective of the functional or clinical response of interest.

I will conclude my remarks by reviewing the case of vitamin D. The vitamin D case is an interesting example because circulating levels of 25-hydroxyvitamin D have been shown to be related to intake of vitamin D.
Although this is complicated by synthesis in the skin as a result of sun exposure, it is generally a good measure of absorption of vitamin D. However, data may be emerging that relate levels of 25-hydroxyvitamin D to measures of bone density, skeletal disease risk (as in the case of osteoporotic fracture), and other disease risk (as in the case of extraskeletal cancer and even some immune dysfunctions). Moreover, evidence that 25-hydroxyvitamin D is related to the absorption fraction for calcium suggests that 25-hydroxyvitamin D values have the potential to serve as a target endpoint for an important function and may demonstrate certain convergence with other observations—for example, a lower risk of several kinds of cancer, at least in some intervention studies.

A regression meta-analysis reported by Bischoff-Ferrari et al. (2005) shows that in a number of studies, a significant decrease in relative risk
of hip fracture is observed in the area of 75–85 nmol/L (Figure 3-1). This raises the question as to whether it is possible to find biological markers or endpoints of this kind that will show a convergence of effects, where multiple goals of preventing fracture and perhaps contributing to the prevention of chronic disease can be embodied quantitatively in an endpoint.

FIGURE 3-1 The effects of vitamin D supplementation on hip fracture and nonvertebral fracture. NOTE: CI = confidence interval. SOURCE: Bischoff-Ferrari et al. (2005). Copyright © 2005, American Medical Association. All rights reserved.

Many avenues need to be pursued to better specify the selection of endpoints for reference values. This presentation has elaborated on some that may be useful and suggested that certain paths will be more fruitful than others. However, we must remember that one set of criteria or even an algorithm is unlikely to be “one size fits all” because there may need to be different approaches for different nutrients and types of reference values. This process will be an evolution that must be carefully planned.

General Discussion

A participant commented that the RDA and UL values are frequently close together because a UL is often established using an endpoint that
occurs at a low level of intake for public health safety purposes, whereas endpoints selected for adequacy-based reference values tend to be those that occur at higher levels of intake. In response to the participant’s question of whether this should be done in the future, Dr. Rosenberg replied that there should be even more collusion in the process of setting EARs/RDAs and ULs, especially because awareness of the margin between the two values is important. The decision should be driven by scientific data rather than the rote conclusion that the ULs must be as low as possible and the EARs/RDAs as high as possible.

An audience member remarked that it would be useful if the DRI study committees considered endpoints more comprehensively within their reports: For example, vitamin C at level X prevents scurvy and at level Y impacts another endpoint of interest. There would be different endpoints for the same nutrient, but endpoints more important in societies other than North America would not be neglected.

An audience member commented on Dr. Rosenberg’s pessimism about using disease risk reduction for certain recommendations given the importance of reducing the risk of disease as an overall health goal. It was stated that there is a numerical relationship between fiber intake and the onset of cardiovascular disease (CVD). Another participant commented that measures of fiber intake from observational data can be a marker for other dietary and behavioral patterns and therefore may be problematic as a basis for setting DRIs. In response, Dr. Rosenberg noted that there will be instances when there is a direct relationship between dietary intake and a chronic disease response, but in many cases it will be difficult due to lack of specificity and confounding factors. He emphasized the importance of focusing on intermediate or surrogate markers predictive of disease outcome as a way of ensuring a focus on chronic disease risk reduction.
A brief discussion took place regarding the process for validating biomarkers for disease. Dr. Rosenberg emphasized the need for sound science and clear validation.

DOSE–RESPONSE DATA: ARE THERE OPTIONS FOR DEALING WITH LIMITED DATA?

Presenter: Susan Taylor Mayne

One of the more challenging aspects of the DRI process is dealing with limited data on dose–response relationships. The DRI process depends on dose–response data for both EARs and ULs. Even when dose–response data are extremely limited, as they are for many nutrients, DRI study committees still need to establish numeric values. As a consequence, some DRI values are “softer” in reality than what might be expected. This is well illustrated using the
example of the dose–response data that were available in establishing the EAR for selenium.

Dose–Response Data and the Selenium Estimated Average Requirement (EAR)

The study committee considered several possible endpoints or biomarkers for selenium status, ranging from disease endpoints (e.g., Keshan disease and cancer) to blood or plasma selenium levels to plasma selenoprotein concentration as a biomarker of selenium status. The study committee ultimately chose maximization of plasma selenoprotein concentration as the biomarker.

Two studies that evaluated maximization of plasma selenoproteins in response to supplemental selenium were available. One was a study of 52 men and women from New Zealand (Duffield et al., 1999), and the second was a study of 45 men from China (Yang et al., 1987). Both populations had low selenium intakes.

In the New Zealand study, the baseline selenium intake of the subjects averaged 28 µg/day (for comparison, U.S. intakes are about 100 µg/day). Groups were given five different levels of added selenium per day for 5 months: 0, 10, 20, 30, or 40 µg/day. The endpoint being monitored was plasma selenium-dependent glutathione peroxidase. All of the groups receiving additional selenium were found to have increased glutathione peroxidase, but they could not be distinguished from one another due to large variations in response. Because the variation was so large, a dose–response could not be calculated. Instead, the investigators decided that the lowest added intake, 10 µg/day, may be sufficient, so they set an EAR of 38 µg/day, which is the baseline intake of 28 µg/day plus 10 µg/day.

In the Chinese study, the baseline selenium intake of the subjects was even lower, 11 µg/day. Groups were given five different selenium doses for 8 months: 0, 10, 30, 60, or 90 µg/day.
Although it was difficult to determine a dose–response based on the limited sample size, it was estimated that average maximization was achieved at an added intake of about 30 µg/day. This gave an EAR of 41 µg/day when combined with the baseline intake of 11 µg/day. With weight adjustment to reflect North American body size, the EAR was increased to 52 µg/day.

The IOM study committee simply averaged these two numbers (38 and 52 µg/day), resulting in an EAR of 45 µg/day. Because the variation data were difficult to calculate, a coefficient of variation of 10 percent was assumed, and the Recommended Dietary Allowance was set at 55 µg/day.

As discussed above, the EAR for selenium was based on fewer than 100 subjects. Dose–response data anywhere in the world were very limited. The only available data were obtained from selenium-deficient populations
from outside North America. Important questions are: How relevant is this EAR to the United States and Canada? Are there alternative techniques that we should be employing to try to characterize dose–response using more relevant and statistically powerful data?

Solutions to the problem of limited dose–response data can be grouped into two general approaches. The first is the statistical or modeling approach, which applies various models to try to characterize dose–response, such as in relation to chronic disease or mortality (e.g., a large cancer prevention trial in the United States with 35,000 men randomized to selenium supplementation or a placebo). The second approach is the biological approach. Both approaches are described below.

The Statistical Approach

The advantage of the statistical approach is that many studies with large sample sizes are available (both observational and clinical trials). One disadvantage is that the intake data in these large population studies are often susceptible to measurement error. This is nutrient specific; for example, the intake data are not of good quality for vitamin E and selenium. However, in many of these same studies, we can examine plasma nutrient status as a biomarker for chronic disease risk to estimate the dose–response, which can then be related to intake data using metabolic or other relevant studies.

Different statistical approaches are used to analyze nutrients in relation to chronic disease risk. The traditional single-study approach is where one examines nutrient intake or status in relation to a chronic disease endpoint. The typical approach is to quantile the intake or status data, then examine the relationships across these quantiles and test for linear trends using statistical testing. Nutrient intake or status can also be examined as a continuous variable. The relationship between intake or status of nutrient X and disease Y can be modeled using regression.
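The quantile-based trend analysis just described can be sketched in a few lines. The cohort size, intake distribution, and effect size below are hypothetical, chosen only to illustrate the mechanics of quantiling intake and examining risk across the quantiles:

```python
import numpy as np

# Hypothetical cohort: intake (mg/day) and a binary disease outcome whose
# risk declines with intake (all values illustrative, not from any study).
rng = np.random.default_rng(7)
n = 50_000
intake = rng.lognormal(mean=4.0, sigma=0.4, size=n)
p = 1 / (1 + np.exp(-(-2.0 - 0.005 * intake)))   # protective association
disease = rng.random(n) < p

# Quantile the intake data into quintiles, as in the single-study approach.
edges = np.quantile(intake, [0.2, 0.4, 0.6, 0.8])
quintile = np.digitize(intake, edges)            # 0 (lowest) .. 4 (highest)

# Observed risk within each quintile, then a simple linear trend across
# the quintile indices (a stand-in for a formal test for linear trend).
risks = np.array([disease[quintile == q].mean() for q in range(5)])
trend_slope = np.polyfit(np.arange(5), risks, 1)[0]
```

Here `trend_slope` comes out negative, mirroring the simulated protective association; in practice the trend test would be done within a regression model with covariate adjustment.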
Both of these approaches typically assume a linear relationship, which may or may not be a valid assumption. An example that highlights this is found in work from Ulrich (2007) relating folate status to breast cancer risk. Although some studies are finding protective effects with higher folate status, other studies are finding suggestions of adverse effects or at least no benefit. Ulrich (2007) has suggested this is because the relationship between folate and breast cancer risk is nonlinear (Figure 3-2). The apparent relationship depends on the part of the dose–response curve in which the studied intakes lie (see dotted and dashed lines in Figure 3-2). This implies that one must be aware of the likelihood that many dose–response associations involving nutrients and chronic disease may be nonlinear.
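This range dependence can be reproduced numerically. The U-shaped curve and the two study ranges below are hypothetical, chosen only to show that straight-line fits to different segments of one nonlinear dose–response curve can point in opposite directions:

```python
import numpy as np

# Hypothetical U-shaped dose–response: risk falls with nutrient status at
# low levels and rises again at high levels (illustrative values only).
status = np.linspace(0.0, 10.0, 201)
risk = (status - 6.0) ** 2 / 20.0 + 1.0  # minimum risk near status = 6

# Fit a straight line to two different segments of the same curve, as two
# studies covering different intake ranges would.
low = status < 5.0     # a population studied on the low-status range
high = status > 7.0    # a population studied on the high-status range
slope_low = np.polyfit(status[low], risk[low], 1)[0]
slope_high = np.polyfit(status[high], risk[high], 1)[0]

# slope_low is negative (higher status looks protective); slope_high is
# positive (higher status looks harmful), although the curve is the same.
```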

FIGURE 3-2 Hypothetical nonlinear relationship between folate status and breast cancer risk as compared with relationships for different areas of the dose–response curve. SOURCE: Modified from Ulrich (2007).

One alternative to linear models is restricted cubic spline models, also known as piecewise polynomial curves. Spline models allow for the examination of nonlinear effects of continuous variables (e.g., nutrient intake or concentration) in relation to disease risk. Some advantages of this approach are that no functional form needs to be specified; it is available in standard statistical packages (SAS, BMDP); and it can reveal nonlinear dose–response relationships.

An example of the use of restricted cubic spline models is from Wright et al. (2006), who examined the relationship between serum vitamin E and all-cause mortality (Figure 3-3). When the best model is fit to the data, as serum vitamin E concentrations rise, there is apparently a reduction in the risk of dying in this cohort up to a particular point; after that, it appears there is no additional benefit and, if anything, the possibility that the risk may start to increase. We might choose a serum vitamin E concentration associated with the minimum risk based on this curve, then determine the nutrient intakes required for half the population to achieve this plasma vitamin E concentration.

FIGURE 3-3 Cubic spline regression for total mortality according to cholesterol-adjusted serum α-tocopherol concentrations. —, predicted relative risks; ---, 95% confidence interval. The reference value (9.1 mg/L; relative risk = 1.00) corresponds to the median value of the first quintile of serum α-tocopherol concentrations. To convert cholesterol-adjusted serum α-tocopherol concentrations from mg/L to µmol/L, multiply by 2.322. SOURCE: Wright et al. (2006), Am. J. Clin. Nutr. 84:1200–1207, American Society for Nutrition.

Combining data from multiple studies and using the data to estimate dose–response relationships are also possible. One standard approach is to take data across multiple randomized nutrient supplementation trials and perform a systematic review and meta-analysis. Meta-analysis was originally developed for clinical trials to see whether an effect is present or not (e.g., do statins reduce CVD risk?). Meta-analysis can also be used to characterize dose–response using data from different trials with different nutrient doses and different achieved plasma concentrations.

In Figure 3-1, from a meta-analysis looking at vitamin D supplementation and its effects on hip fracture and nonvertebral fracture, the authors performed a meta-regression to fit a linear regression to the data on the relative risk for a chronic disease endpoint as a function of achieved plasma 25-hydroxyvitamin D concentrations. Although they fit a linear model to these data, a nonlinear model may fit better, especially for hip fracture. We could have used these data to fit a nonlinear function, identify a plasma concentration at which the lowest risk is observed, and then relate that level back to intake data.

Meta-analysis is also used for observational epidemiological studies of nutrients and chronic disease risk, but it was not designed for observational studies, and therefore its application is much more problematic. The dose that corresponds to high intake in one population may be very different from that in another population and may lie in a different part of the dose–response curve (see Figure 3-2). Dose–response meta-analysis across categories can be done, with the caveat already mentioned. An example from the literature is a meta-analysis of observational studies on selenium intake and prostate cancer risk (Etminan et al., 2005). The investigators plotted studies of selenium intake (with the lowest intake as the reference group) and risk of prostate cancer (Figure 3-4). Finding any dose–response data in this type of study is difficult because of the nonquantitative nature of the data.

Another approach to estimating dose–response is to combine data from multiple studies into a pooled analysis, where the original data from multiple studies are obtained and reanalyzed together. The assumption is that intake data across the studies are similarly (quantitatively) assessed, an assumption whose validity can be challenged. Validity is nutrient specific, depending on the ability to estimate intake of that nutrient accurately across populations. An example of a pooled analysis is from Hunter et al. (1996), who examined the relationship between percentage of energy from fat in the diet and breast cancer risk (Figure 3-5) and concluded there was no association.
However, it is assumed, perhaps not correctly, that when data are pooled from multiple cohort studies that use different dietary instruments, fat intake (along with energy intake) can be measured precisely and similarly across the studies.

The statistical approach can also apply to ULs. Instead of risk of inadequacy, risk of excess is modeled (e.g., the risk of hip fracture with high vitamin A intake). Similar approaches as described previously can be applied to ULs (e.g., spline models, meta-analysis, meta-regression), and the nutrient concentrations or intake levels at which the risk of an adverse effect begins to increase can be evaluated.

In terms of using chronic disease endpoints for dose–response estimation, although chronic disease data are widely available from U.S. and Canadian populations, causality and confounding (e.g., correlated nutrients from the same foods) are difficult to address. The use of plasma biomarkers is desirable to examine dose–response, but it does not solve the confounding problem.
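A restricted cubic spline fit of the kind described above can be sketched briefly. This is a minimal illustration with simulated data, not the SAS or BMDP implementations mentioned earlier; the knot placements, the threshold shape of the simulated risk, and the helper name `rcs_basis` are all assumptions:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parameterization).

    Returns a design matrix with a column for x itself plus k-2 nonlinear
    terms for k knots; the fitted curve is constrained to be linear beyond
    the outer knots.
    """
    knots = np.asarray(knots, dtype=float)
    k = len(knots)
    denom = knots[-1] - knots[-2]
    cols = [x]
    for j in range(k - 2):
        term = (np.clip(x - knots[j], 0, None) ** 3
                - np.clip(x - knots[-2], 0, None) ** 3
                  * (knots[-1] - knots[j]) / denom
                + np.clip(x - knots[-1], 0, None) ** 3
                  * (knots[-2] - knots[j]) / denom)
        cols.append(term)
    return np.column_stack(cols)

# Hypothetical data: log relative risk falls with concentration, then
# flattens (loosely patterned on the shape of Figure 3-3, not real data).
rng = np.random.default_rng(42)
conc = rng.uniform(5, 21, 400)                  # serum concentration, mg/L
log_rr = np.where(conc < 12, 0.08 * (12 - conc), 0.0) + rng.normal(0, 0.05, 400)

# Ordinary least squares fit: intercept + restricted cubic spline in conc.
X = np.column_stack([np.ones_like(conc), rcs_basis(conc, [6, 9, 12, 15, 18])])
beta, *_ = np.linalg.lstsq(X, log_rr, rcond=None)
fitted = X @ beta
```

No functional form was pre-specified, yet the fitted curve recovers the decline and subsequent flattening; one could then read off the concentration at which the minimum risk is reached, as described for the vitamin E example.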

[…]

Methods for Estimating Intake: Self-Report Instruments

It should be noted that the goal for all applications of dietary intake estimation is an estimate of usual intake, which is the theoretical long-run average daily intake of a dietary component. Three main types of self-report instruments are used to collect such data: 24-hour recalls, food diaries or records, and food frequency questionnaires (FFQs).

Twenty-Four-Hour Recalls

Twenty-four-hour recalls can vary in many ways. Training of the interviewers and standardization of probing questions (i.e., questions that follow after someone reports eating a particular food, such as what kinds of fats were added to foods) can vary from study to study. Most 24-hour recalls are collected by some sort of standardized computerized approach, but some studies use pencil-and-paper administration with later coding of the data. Some recalls are done in person, others by telephone. Different kinds of portion size models or measurement aids are used to estimate portion size.

The 24-hour recall has various strengths. The intake data can be quantified in detail. In theory, it should not affect eating behavior because respondents are asked to report what they ate yesterday, intake that would have occurred before they knew they would have to report it. There is lower sample selection bias than for other methods because the recall does not require literacy and the respondent burden is low. It is generally agreed that this is the most reliable method for dietary assessment. Furthermore, usual intake distributions can be estimated from as few as two dietary recalls.

One weakness is that recalls rely on memory. Also, 24-hour recalls are costly to develop and administer because highly trained interviewers are needed.
In addition, because recipes and preparation methods vary for many foods, default recipes and hence default nutrient values are used, and these may not accurately capture the level of nutrients consumed. Underreporting of foods and amounts eaten is also common, especially among those who are overweight or obese. Finally, at least 2 days of recalls and statistical modeling are required to obtain usual intake estimates.

Food Diaries or Records

Food diaries or records are, in general, less standardized than dietary recalls. Respondents do not have to be trained, but the diaries may or may not obtain comprehensive data, and the coding of those data is highly variable from study to study. The use of technology to collect real-time dietary

data has been a research topic of great interest, with technologies such as personal digital assistants, cell phones to take pictures, and voice recognition being explored.

If done correctly, a food diary or record can provide quantified and detailed intake information. It can be relatively accurate, and because it is done in real time it should not, in theory, rely on memory. The biggest weakness of a food record is that it is reactive and hence biased. Because respondents know they have to record, they may change what they eat because it is difficult to record, or they may undereat. The food record requires literacy, and it has a high respondent and investigator burden. There is a high sample selection bias because only certain people are willing to keep records. The longer people keep records, the worse the data quality becomes. Although recording should happen in real time, people often record the data at the end of the day. Underreporting is typical, and worse among those who are overweight or obese.

Food Frequency Questionnaires

In the often self-administered FFQs, people are asked a series of questions—usually hundreds—about how often they usually consumed a particular food in a given time period, what preparation methods were used, and what the typical portion size was. These components vary among FFQs, as do procedures to determine the food list and the nutrient composition assigned to each food.

One strength of the FFQ is that the respondent burden is relatively low because the questionnaire is filled out only once. The focus is generally usual intake and the total diet. An FFQ should not be biased by changes in eating behavior because intake in the past is queried. Another benefit of the FFQ is the low cost associated with administering the instrument and processing the data.

One weakness of the FFQ is that it lacks detail because it contains a finite list of foods and details are not generally collected.
It is cognitively complex for respondents to report what they ate over the past year, for example. It requires literacy. Different FFQs can produce different results in the same population, whereas the same FFQ can produce different results in different populations. There is severe measurement error when looking at absolute intakes. To reduce this bias, epidemiologists rank individuals and adjust the models for energy intake. In general, outcome findings are attenuated by the amount of error in the FFQs.

Methods for Estimating Intake: Biomarkers

Certain so-called “biomarkers of intake” may be used to assess dietary intake. A recovery biomarker is one in which there is a 1:1 relationship

between what is consumed and the biomarker value. Such biomarkers provide very accurate data on what individuals are consuming, but only a few can be used: doubly labeled water, urinary nitrogen, and possibly urinary potassium.

Concentration biomarkers reflect a direct biological response to what someone consumes. The response is correlated rather than 1:1, and it is affected by other characteristics of the individuals (e.g., whether they smoke or possibly their body weight). Therefore, a concentration biomarker cannot be used to assess the amount consumed, and it may reflect short- or long-term intake. In general, it is difficult to use such biomarkers to evaluate direct dietary intake for purposes of DRIs. There are also homeostatically controlled biomarkers, which have no direct relationship to intake.

Challenges and Sources of Error/Bias

First, underreporting occurs in all of the self-report dietary assessment methods described above. The percentage of energy underreported, based on a review of doubly labeled water studies, was up to 58 percent for food records, 38 percent for FFQs, and 26 percent for 24-hour recalls (Trabulsi and Schoeller, 2001). Underreporting can vary by gender, age, and BMI. In general, underreporting tends to increase as body weight increases. For example, results from NCI’s Observing Protein and Energy Nutrition (OPEN) study, conducted with about 500 men and women using doubly labeled water and urinary nitrogen, show that energy underreporting occurs for both the 24-hour recall and the FFQ, and is greater for the FFQ (Figure 3-10). The results also show that underreporting varies by BMI for both the FFQ and the 24-hour recall (not shown).

Second, data on dietary supplements may not be collected in many studies. We have to assume that measurement error is present in assessing self-reported dietary supplement intake. However, not accounting for supplement intake leads to substantial underestimation of total nutrient intake.
When supplement intake is included, the resulting intake distributions are highly skewed, which presents challenges for describing usual intake distributions.

Another source of error in all self-report dietary data relates to the nutrient database. Analytical methods for nutrient composition change and improve, and, just as importantly, the composition of finished food products is constantly changing. Therefore, the database used needs to be updated and to match the time period of the study.

Obviously, it is impossible to observe long-term or usual intakes directly. Rather, the approach is to acquire estimates based on statistical modeling using

FIGURE 3-10 Results from the Observing Protein and Energy Nutrition (OPEN) study: Energy intake underestimation by 24-hour recall and food frequency questionnaire compared with total energy expenditure. SOURCE: Subar et al. (2003).

short-term, self-reported data. Early in the evolution of dietary intake estimation, we used a single day of intake and called it usual intake based on recalls from national surveillance studies. Then we realized we needed at least the average of a few single-day measurements to improve estimates. Next, we became more sophisticated and used statistical modeling: first the NRC method, then the Iowa State University (ISU) method, and more recently the NCI method. Provided that the underlying assumptions are taken into account, these statistical models remove day-to-day variability from the 24-hour recall so that a better estimate of usual intake is obtained.

This is illustrated in Figure 3-11. The probability is plotted against the usual intake of energy; 2,200 calories is the cutpoint. If 1 day of intake is used, the distribution is long and skewed to the right. When statistical modeling is applied, removing some of the variability, a more normal distribution of intake is obtained, as would be expected in the population as a whole. This statistical treatment of the data is important, and methods continue to be developed to establish usual intake distributions.

The NCI method builds on the NRC/ISU methods to estimate usual nutrient intake distributions. It can also handle episodically consumed dietary constituents, such as vitamin A, and it can be applied to foods and dietary supplements. It also provides greater power to conduct subgroup analyses within the same model.
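The variance-removal idea shared by these methods can be illustrated with a minimal, purely hypothetical simulation. This sketch is not the NRC, ISU, or NCI method itself; it assumes normally distributed intakes and two recall days per person, estimates the within-person (day-to-day) variance from the paired days, and shrinks each person's mean toward the grand mean before computing the fraction below a 2,200-calorie cutpoint. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True (unobservable) usual intakes and two 24-hour recalls per person.
usual = rng.normal(2100, 300, n)          # between-person SD = 300 kcal
day1 = usual + rng.normal(0, 500, n)      # day-to-day (within-person) SD = 500 kcal
day2 = usual + rng.normal(0, 500, n)

cutpoint = 2200

# A single day of intake overstates the spread of usual intake,
# pushing the estimated fraction below the cutpoint toward 0.5.
one_day_below = np.mean(day1 < cutpoint)

# Estimate variance components from the paired days, then shrink each
# person's two-day mean toward the grand mean to remove the
# day-to-day component from the distribution.
person_mean = (day1 + day2) / 2
var_within = np.var(day1 - day2, ddof=1) / 2        # within-person variance
var_means = np.var(person_mean, ddof=1)             # between + within/2
var_between = max(var_means - var_within / 2, 0.0)  # between-person variance
shrink = np.sqrt(var_between / var_means)
adjusted = person_mean.mean() + shrink * (person_mean - person_mean.mean())

print(f"true fraction below cutpoint:     {np.mean(usual < cutpoint):.3f}")
print(f"one-day fraction below cutpoint:  {one_day_below:.3f}")
print(f"adjusted fraction below cutpoint: {np.mean(adjusted < cutpoint):.3f}")
```

On simulated data the adjusted distribution tracks the true usual-intake distribution far more closely than a single day's intake does; the published methods additionally handle skewed nutrients via transformations, covariates, and survey weights, all of which this sketch omits.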

FIGURE 3-11 Probability of consuming above or below a cutpoint (dashed line): One-day versus usual intake distributions.

Implications

Dietary exposure (or intake) assessment for DRI development ideally would be based on usual intake distributions estimated from multiple days of intake and statistical modeling. Sometimes there is interest in using intake data from observational studies. However, we have to be careful, given the amount of error that can occur in FFQs and other methods used in such studies.

The starting point for DRIs is the available clinical and metabolic data concerning requirements, health outcomes, and adverse events; DRIs are not derived (AIs excepted) from estimates of usual intake. Therefore, it is understandable that DRIs, even when developed using the best available scientific data, might differ from intakes estimated from dietary surveillance data. A clear understanding of the strengths and limitations of dietary intake estimates allows those responsible for DRI development to put the scientifically derived DRI values in the context of current estimated intakes and, in turn, to advise users of DRI values about differences between the values and estimated intakes and possible reasons for them; it also identifies avenues for further research.
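The attenuation of outcome findings by FFQ error, noted earlier, follows from the classical measurement-error model: if reported intake equals true long-term intake plus independent random error, the regression slope estimated on reported intake is multiplied by the reliability ratio var(T) / (var(T) + var(U)). A minimal sketch with hypothetical numbers (a true diet-outcome slope of 0.5 and error variance twice the signal variance, so the expected attenuation factor is 1/3):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical standardized long-term ("usual") intake and a health
# outcome with a true slope of 0.5 on intake.
true_intake = rng.normal(0.0, 1.0, n)
outcome = 0.5 * true_intake + rng.normal(0.0, 1.0, n)

# Classical measurement error: reported = true + independent noise.
# Error variance of 2 against signal variance of 1 gives
# reliability 1 / (1 + 2) = 1/3.
reported = true_intake + rng.normal(0.0, np.sqrt(2.0), n)

slope_true = np.polyfit(true_intake, outcome, 1)[0]   # close to 0.5
slope_reported = np.polyfit(reported, outcome, 1)[0]  # attenuated toward 0.5/3
print(f"slope on true intake:     {slope_true:.3f}")
print(f"slope on reported intake: {slope_reported:.3f}")
```

This is why analysts rank individuals and energy-adjust rather than rely on absolute FFQ intakes: the ranking is more robust to this kind of error than the absolute scale is.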

General Discussion

An audience member questioned whether progress can be made as long as we rely on people to report dietary intake information. Dr. Subar emphasized that the data are not all poor and that newer advances have shown considerable promise for ensuring good-quality estimates of intake. She pointed out that even though there is some level of underreporting, better ways to adjust the data are likely to be developed. The key point is that existing data need to be used appropriately, with an understanding of their limitations. Dr. Subar commented that biomarkers of intake would be very helpful.

An audience member commented that doubly labeled water appears to quantify underreporting. However, she asked about the validation of this technique and expressed concern about whether known dietary intake is actually underreported to the extent currently suggested by doubly labeled water studies. Dr. Subar indicated that the doubly labeled water methodology is well established as a measure of true energy expenditure in individuals, but she did not know whether intake matches expenditure in a steady state.

Another participant suggested that statistical modeling depends on the assumptions used. The assumption that a yearly intake reflects usual intake may be appropriate in some cases but not others, specifically in developing countries. Dr. Subar commented that the usual intake distribution is based on usual intake in the population. The participant suggested that in the United States intake does not vary much with the seasons, but in other countries seasons have considerable impact.

One question was raised about using the usual intake distribution when dealing with ULs. Dr. Subar was unfamiliar with any studies or deliberations intended to explore this particular issue. Another question was asked about the trustworthiness of the nutrient values on nutrition labels. Dr.
Subar responded that others with expertise in this area would be better suited to answer the question.

HIGHLIGHTS OF OTHER IMPORTANT ISSUES: PHYSIOLOGICAL, ENVIRONMENTAL, AND GENOMIC FACTORS

Presenter: Cutberto Garza

Physiological, environmental, and genomic issues relate to the DRI conceptual framework as well as to the applications of the DRIs. This presentation first outlines some general principles to provide a context, then focuses on examples of challenges that physiological, environmental, and genomic issues present.

General Principles

The governing principle in any expanded consideration of physiological, environmental, and genomic issues is the definition of nutritional health. It is helpful to think about nutritional health in terms of a progressive, overlapping continuum, moving from the bottom to the top of the trapezoid shown in Figure 3-12. The bottom of this continuum focuses on essential food components that, when lacking, give rise to unambiguous pathology related to a specific deficiency or, when in excess, to an adverse effect. The single-agent, single-outcome paradigm governs this part of the continuum. Moving up along this continuum, there is a greater focus on primary and secondary prevention of nutrition-related chronic diseases. The top of the continuum is increasingly attentive to enhanced performance through improved nutrition.

Not surprisingly, uncertainty increases as we progress through this continuum from bottom to top. These uncertainties are due to decreases in basic knowledge (shown to the left of the trapezoid), reflecting the need for more research as we move from basic pathology and specific deficiency to concerns such as enhanced performance. There is also growing complexity of underlying biological mechanisms as we move toward enhanced performance. All this requires some broadening in the use of our tools.

FIGURE 3-12 Nutritional health continuum. The trapezoid rises from avoiding classic deficiency disease, through primary and secondary prevention of diet-related chronic diseases, to enhanced performance; complexity of underlying mechanisms and responsiveness to behavior and environmental conditions increase, while basic knowledge decreases, from bottom to top.

There is

also rising sensitivity to a wide range of behaviors and environmental conditions as we move from bottom to top. The significance of physiological, genomic, and environmental factors will differ along this continuum in ways likely to be specific to individual nutrients and life stages (Figure 3-12).

Two principles will help determine when such expanded considerations are appropriate. The first is that the anticipated benefit of modifying a reference value on the basis of any factor, whether physiological, environmental, or genomic, must be qualitatively significant to either individual or population health and well-being, somewhat analogous to, but the mirror image of, hazard characterization. Second, the equivalent of an individual- or population-attributable benefit must be quantitatively significant. These seemingly straightforward statements beg the question of what triggers quantitative and qualitative significance. Criteria for determining qualitative significance are not independent of criteria for determining quantitative significance, and neither is likely to be determined on a purely objective basis. Assessments of both will be influenced by culturally or socially bound values and by the ability to use the information.

Physiological Factors

Physiological factors include gender, age, reproductive status (including lactation), and body size. Considerations of body size are generally limited to expressions of nutrient needs per kilogram of body weight. Body size also incorporates elements of body composition to the extent that these two variables are related in a given population.

Four challenges exist with respect to physiological factors, recognizing that nutrient-based dietary recommendations have historically excluded nonhealthy populations:

1. The prevalence of obesity and overweight
2. The aging of the North American population
3. The increasing understanding of long-term risks associated with intrauterine growth retardation (small-for-gestational-age infants)
4. High rates of prematurity, the health consequences of this condition, and increasing technological capabilities that enable survival at progressively lower gestational ages, which will bring special pressures to the DRI process

The IOM undoubtedly will be faced with including one or more of these conditions in the future DRI process; there may be a need to develop an ancillary effort to consider these groups beyond the brief paragraphs that have been included in the sections of the DRI reports labeled “special considerations.” In addition, metabolic and other common morbidities that

accompany overweight, obesity, aging, intrauterine growth retardation, and/or prematurity likely will influence recommended intakes, at least for some subgroups. The most salient example is the growing prevalence of Type II diabetes, which will become more difficult to ignore. As challenging as these projections may seem, they pale when compared with the implications of considering environmental and genomic issues.

Environmental Factors

The framework that we have been using generally ignores environmental influences, with the possible exception of energy requirements. However, on an international level it is not uncommon, for example, to at least consider higher rates of endemic infectious diseases in determining nutrient requirements. Such conditions are often environmentally driven. Perhaps it is time that we, too, consider somewhat analogous environmental issues within our North American context.

Two examples illustrate this point. The first is the food environment. The millions of North Americans categorized as overweight or obese did not plan to develop these conditions. For the most part, overweight or obesity happens. Although one has to think intentionally about being healthy, consumers do not have to be as intentional about becoming overweight. Are there inherent biological reasons why health could not also happen to people as unintentionally as overweight or obesity appears to occur? Consumers experience free market forces related to food to a much greater degree than we appear to tolerate in other areas of public health and safety. For example, given the perils of unsafe highways and cars, we do not rely solely on educating the public so that they can become better drivers: We engineer safer highways and cars.
Are there analogous roles the DRIs can or should play to help safeguard nutritional health, such as modifying the width of the Acceptable Macronutrient Distribution Range, or do the DRIs make sense at all without greater specificity in terms of the type of fat?

A second dimension of the food environment is our increasing ability to manipulate nutrient intake through fortification, genetic engineering, and supplements. The potential for adverse nutrient interactions merits continued close attention. Perhaps the most salient example of the importance of such considerations is higher than initially projected levels of folate intake and their potential adverse impact on individuals and groups with inadequate vitamin B12 intakes and/or impaired vitamin B12 uptake capabilities, or on the progression of early cancer.

The second example relates to environments that either enable or discourage physical activity. Although we think of physical activity primarily in terms of weight status, physical activity also influences the risk of other chronic conditions. Heightened consideration probably will focus on whether nutrient needs are modified by diverse levels of physical activity if chronic disease risk reduction is among the desired outcomes.

Genomics

In terms of issues that fall under the broad category of genomics, current considerations for DRI development are limited to genomic variability, which specifically takes the form of including body size (to the degree that size is genetically controlled) in estimating requirements. Broad considerations of interindividual variability may be included, and there may be attempts to address a few specific polymorphisms. Until recently, other than for folate, no adjustments were made for well-known polymorphisms. Generally, nutrient needs modified by groups of specific polymorphisms were viewed as condition-specific requirements. Among the most salient examples are vitamin D-dependent rickets, hemochromatosis, and phenylketonuria, conditions that either increase or decrease appropriate levels of nutrient intake.

What about the future? For the most part, the complex traits that account for diet-related chronic diseases appear to be influenced by multiple polymorphisms that individually have only modest adverse or beneficial effects on risk but collectively appear to have significant influence. A study reported recently in the New England Journal of Medicine (Rosenzweig, 2007) that used genomic scanning techniques to assess coronary disease risk supports this view. The value of such work in improving definitions of risk, enhancing mechanistic understanding, and generating potential interventions for future investigation is acknowledged. For the moment, however, the results of such studies appear to have limited immediate impact on specific preventive measures.
We also have to recognize that work such as that from Waterland and others (e.g., Waterland and Jirtle, 2003; Waterland et al., 2006) points to the complex epigenetic effects of some nutrients in determining phenotype. There is little doubt that greater understanding of environmental–genomic interactions and of the influence of genomic context will result in improved definitions of risk and mechanistic underpinnings.

Also, based on what we know now, it is likely that improved understanding of these relationships eventually will result in better individualized care. What is less clear is how this type of information will help in designing strategies that target populations, particularly as North American societies become more ethnically diverse.

Implications

Future approaches for DRI development are likely to be increasingly sophisticated in their inclusion of an array of physiological, environmental, and genomic characteristics. The interplay of these factors in determining the prevalence of various phenotypes will need to be recognized, and the interpretation of the special nutrient needs imposed by this interplay will require an expanded DRI process. This increased sophistication will impose important challenges in further addressing knowledge gaps, mechanistic complexity, and the present inadequate understanding of interactions among diverse environmental conditions and individual behavioral choices. Finally, an improved understanding of genomic influences on health will cause us to rethink the use of DRIs in designing strategies to promote individual and population health.

General Discussion

An audience member commented that genomic variability and the presence of polymorphisms will undoubtedly play an increased role in DRI development. However, after describing the example of a methylenetetrahydrofolate reductase polymorphism, he suggested that the changes involved may not be dramatic. Dr. Garza added that we often forget that these polymorphisms were positively selected; at some point in our evolution, they must have played some beneficial role. In some contexts they may increase risk, whereas in others they may be protective.

Another participant addressed the issue of environmental influences, noting that Dr. Garza had mentioned infectious diseases as pertinent to nutrient reference values for persons in developing countries. Given that inflammation is known to play a role in the pathogenesis of chronic diseases and may be relevant to the aging North American population, the participant questioned whether inflammation should be added to the list as either a physiological or an environmental factor to be considered. Dr.
Garza responded that aging is germane, and that the physiological adjustments and metabolic abnormalities that accompany aging are relevant to the derivation of future DRIs.