avoided the need for specifying a distribution altogether. In general, the different approaches are unlikely to yield dramatically different results when the data are approximately normally distributed with constant variance, which is the case in most of the MeHg epidemiological studies.

SOME SPECIFIC CONSIDERATIONS FOR MeHg

Aside from the general issues discussed above, several specific issues further complicate the application of benchmark-dose methods for MeHg. Foremost among these issues is the existence of three studies of comparable quality that lead to seemingly conflicting results in terms of the association between MeHg and adverse developmental or neurological outcomes. Previous chapters have discussed in depth some of the possible explanations for this conflict (e.g., unmeasured confounders, co-exposures, and variations in population sensitivity). Another possibility is that the differences are due to random chance. Indeed, study results have been presented and summarized largely in terms of p values based on statistical tests of the association between exposure and outcome. Only recently have several papers focused on dose-response modeling and benchmark-dose calculations. When the focus is on statistical testing rather than modeling, apparent contradictions are common: one study yields a statistically significant association at p < 0.05, and another does not. To assess study concordance more fully, it is useful to consider the statistical power¹ that each study has to detect effects of the magnitude observed.
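The arithmetic behind such apparent contradictions is worth making explicit. If two independent studies of the same true association each have statistical power p, then by independence the probability that exactly one reaches significance while the other does not is 2p(1 − p). A minimal sketch (the power values are hypothetical, chosen only for illustration):

```python
# Probability that two independent studies of the same true effect
# reach conflicting conclusions (one statistically significant, one
# not) when each has the stated statistical power at the chosen
# alpha level.
def prob_conflicting(power: float) -> float:
    return 2.0 * power * (1.0 - power)

for power in (0.5, 0.7, 0.9):
    print(f"power = {power:.1f}: "
          f"P(one significant, one not) = {prob_conflicting(power):.2f}")
```

Even two reasonably well-powered studies (power of 0.7 each) would be expected to disagree about significance more than 40% of the time, so a discordant pair of p values is weak evidence of a genuine conflict.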

For simplicity here, suppose that all confounders have already been accounted for, so that we can consider the power that a study has to detect a true non-zero slope based on a simple linear regression (Yi = a0 + a1Xi + εi, where Yi, Xi, a0, a1, and εi are as defined above). It is straightforward to compute the power to detect specific values of the dose-response parameter a1, but comparing such calculations across studies is complicated, because the computed power depends on the distributions of exposure levels and outcomes within each study (see Zar 1998).
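To make the dependence on the exposure distribution concrete, note that the standard error of the estimated slope in this regression is approximately σ/√Sxx, where Sxx = Σ(Xi − X̄)² and σ is the residual standard deviation, so power to detect a given a1 depends directly on the spread of measured exposures. A minimal sketch using a large-sample normal approximation (the function and the exposure values are illustrative, not drawn from the MeHg studies discussed):

```python
import math
from statistics import NormalDist

def slope_power(a1, sigma, exposures, alpha=0.05):
    """Approximate power of the two-sided test of H0: a1 = 0 in the
    simple linear regression Yi = a0 + a1*Xi + eps_i, treating the
    estimated slope as normal (reasonable for moderate-to-large n)."""
    nd = NormalDist()
    xbar = sum(exposures) / len(exposures)
    sxx = sum((x - xbar) ** 2 for x in exposures)  # spread of exposures
    delta = a1 * math.sqrt(sxx) / sigma            # standardized true slope
    z = nd.inv_cdf(1.0 - alpha / 2.0)              # two-sided critical value
    return nd.cdf(delta - z) + nd.cdf(-delta - z)

# Same slope, same residual SD, same sample size -- but the study with
# the wider exposure range has substantially higher power, which is one
# reason power comparisons across studies are not straightforward.
narrow = slope_power(0.5, 1.0, [0, 1, 2, 3, 4])
wide = slope_power(0.5, 1.0, [0, 2, 4, 6, 8])
```

As a sanity check on the approximation, setting a1 = 0 makes the function return exactly the nominal alpha, the probability of a false-positive rejection.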

¹Statistical power refers to the probability of correctly rejecting the null hypothesis of no association when, in fact, a true association exists (see Zar 1998).



Copyright © National Academy of Sciences. All rights reserved.