helps to rule out the possibility that the positive results are due to random error, bias, or confounders.
The committee’s process of reaching conclusions about deployment to a war zone and its potential for adverse health effects was collective and interactive. Once a study met the committee’s criteria for inclusion in this review, several considerations guided the assessment of the strength of association. These considerations were patterned after those introduced by Hill (1971) and include presence of a temporal relationship, strength of the estimated association, presence of a dose-response relationship, consistency of the association, and biologic plausibility.
If an observed association is real, exposure must precede the onset of disease by at least the duration of health effect induction. The committee considered whether a health effect occurred within a period after deployment that was consistent with current understanding of the natural history of the health effect. The committee interpreted the lack of an appropriate time sequence as evidence against association but recognized that insufficient knowledge about the natural history and pathogenesis of many of the health effects under review limited the utility of this consideration. Unless a temporal relationship between exposure and outcome is established, other evidence supporting an association carries little weight.
The strength of an association is usually expressed as the magnitude of the measure of effect, for example, relative risk or odds ratio. Generally, the higher the relative risk, the greater the likelihood that the exposure-effect association is causal and the lower the likelihood that it is due to undetected error, bias, or confounding (discussed above). Measures of statistical significance, such as p-values, are not indicators of the strength of an association.
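The measures of effect referred to above can be illustrated with a small worked example. The following is a minimal sketch computing the relative risk and odds ratio from a hypothetical 2x2 table; all counts are invented for illustration and do not come from any study the committee reviewed:

```python
# Hypothetical 2x2 table (all counts invented for illustration):
#                 cases   non-cases
# exposed           30        970
# unexposed         20        980

a, b = 30, 970   # exposed:   cases, non-cases
c, d = 20, 980   # unexposed: cases, non-cases

# Relative risk: ratio of the incidence proportions in the two groups.
relative_risk = (a / (a + b)) / (c / (c + d))

# Odds ratio: ratio of the odds of disease in the two groups.
odds_ratio = (a / b) / (c / d)

print(f"RR = {relative_risk:.2f}")  # 1.50
print(f"OR = {odds_ratio:.2f}")  # 1.52
```

When the outcome is rare, as here, the odds ratio closely approximates the relative risk; the two diverge as the outcome becomes more common.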
Small increases in relative risks that are consistent among studies, however, might be evidence of an association, whereas some forms of extreme bias or confounding can produce a high relative risk. The statistical power of a study, that is, its ability to detect an effect of a given magnitude, was also important, particularly in interpreting negative results. This consideration underlies the committee’s inclusion criteria regarding statistical power.
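The role of statistical power in interpreting a negative result can be made concrete with a sketch of an approximate power calculation for a two-sided two-proportion z-test, using a normal approximation. The baseline risk, relative risk, and group sizes below are illustrative assumptions, not values drawn from any study the committee reviewed:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_proportions(p0, rr, n_per_group):
    """Approximate power of a two-sided two-proportion z-test (alpha = 0.05)
    to detect a relative risk `rr` over baseline risk `p0`, equal group sizes."""
    p1 = p0 * rr
    p_bar = (p0 + p1) / 2.0
    z_alpha = 1.959964  # two-sided 5% critical value of the standard normal
    se0 = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_group)          # SE under H0
    se1 = math.sqrt((p0 * (1 - p0) + p1 * (1 - p1)) / n_per_group)      # SE under H1
    z = (abs(p1 - p0) - z_alpha * se0) / se1
    return normal_cdf(z)

# Illustrative: 2% baseline risk, relative risk of 1.5, 2,000 subjects per group
print(f"power = {power_two_proportions(0.02, 1.5, 2000):.2f}")  # roughly 0.53
```

Under these assumed values the study detects a true relative risk of 1.5 only about half the time, so a null finding from such a study is weak evidence against an association.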
The existence of a dose-response relationship—that is, an increased strength of association with increasing intensity or duration of exposure or other appropriate relation—strengthens an inference that an association is real. However, the lack of an apparent dose-response relationship does not rule out an association. If the relative degree of exposure among several studies can be determined, indirect evidence of a dose-response relationship may exist. For example, if studies of presumably low-exposure cohorts show only mild increases in risk whereas studies of presumably high-exposure cohorts show larger increases in risk, the pattern would be consistent with a dose-response relationship.
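The indirect comparison across cohorts described above amounts to checking whether summary relative risks rise monotonically with presumed exposure. A minimal sketch, with all cohort counts invented for illustration:

```python
# Hypothetical cohorts ordered by presumed exposure intensity
# (all counts invented for illustration): (cases, non-cases) per cohort,
# with a shared unexposed comparison group.
cohorts = {
    "low exposure":    (22, 978),
    "medium exposure": (28, 972),
    "high exposure":   (40, 960),
}
unexposed_risk = 20 / 1000  # comparison-group incidence proportion

# Relative risk for each cohort against the common comparison group.
rrs = [
    (cases / (cases + noncases)) / unexposed_risk
    for cases, noncases in cohorts.values()
]

# A strictly increasing sequence of RRs is consistent with dose-response.
monotone = all(x < y for x, y in zip(rrs, rrs[1:]))
print([round(rr, 2) for rr in rrs], "monotone:", monotone)
```

A monotone pattern such as this is only indirect evidence: it supports, but does not establish, a dose-response relationship, since the cohorts’ exposure rankings are presumed rather than measured.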