before evaluation of study subjects. The committee required an exposure-free interval specifically for effects that might be reversible (such as headache, light-headedness, poor coordination, rash, or cough) but not for irreversible effects (such as cancer).
The committee gave less weight to ecologic or toxicologic studies. Toxicologic studies had a small role in the committee’s assessment of association between the putative agents and health outcomes. Like previous committees, this one used evidence from toxicologic studies to assess biologic plausibility in support of epidemiologic data rather than as part of the weight of evidence used to determine the likelihood that exposure to a specific agent causes a long-term outcome. That is because toxicologic studies can inform understanding of disease processes (for example, cancer) but are less informative about specific diseases (for example, esophageal cancer).
Studies that the committee might exclude or consider only as support (that is, studies that carry less weight than primary studies) are studies of self-reported exposure, multiple exposures, or exposure to specific agents that cannot be assessed; studies whose outcomes are considered “subclinical” (that is, of altered functioning consistent with later development of a diagnosis but without clear predictive validity); studies with a lack of specificity of outcomes (for example, those with a broad range of International Classification of Diseases (ICD) codes that refer to all diseases of the respiratory or nervous system); and studies without an exposure-free interval for reversible effects.
The committee’s process of reaching conclusions about the various agents and their potential for adverse health outcomes was collective and interactive. Once a study was included in this review because it met the committee’s criteria, there were several considerations in assessing the strength of associations. They were patterned after those introduced by Hill (1971) and include strength of the evidence of an association, presence of a dose-response relationship, presence of a temporal relationship, consistency of the association, specificity of the association, and biologic plausibility.
The strength of an association is usually expressed as the magnitude of the measure of effect, for example, a relative risk or odds ratio. Generally, the higher the relative risk, the greater the likelihood that the exposure-disease association is causal and the lower the likelihood that it is due to undetected error, bias, or confounding (discussed below). Measures of statistical significance, such as p values, are not indicators of the strength of an association. Small increases in relative risks that are consistent among studies, however, might be evidence of an association, and some forms of extreme bias or confounding can produce a high relative risk. The statistical power of a study was also important: a study had to be able to detect effects of a given magnitude, a consideration that is especially important in interpreting negative results.
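The relative risk and odds ratio described above can be illustrated with a short calculation. The following sketch uses a hypothetical 2×2 cohort table (the counts are invented for illustration and are not drawn from any study the committee reviewed):

```python
# Hypothetical 2x2 table for an exposure-disease association
# (illustrative counts only):
#                 disease    no disease
#   exposed          30          970
#   unexposed        15          985
exposed_cases, exposed_noncases = 30, 970
unexposed_cases, unexposed_noncases = 15, 985

# Relative risk: the ratio of disease risk in the exposed group
# to disease risk in the unexposed group.
risk_exposed = exposed_cases / (exposed_cases + exposed_noncases)
risk_unexposed = unexposed_cases / (unexposed_cases + unexposed_noncases)
relative_risk = risk_exposed / risk_unexposed

# Odds ratio: the ratio of the odds of disease in the two groups,
# equivalent to the cross-product of the 2x2 table.
odds_ratio = (exposed_cases * unexposed_noncases) / (
    unexposed_cases * exposed_noncases
)

print(f"relative risk = {relative_risk:.2f}")  # 2.00
print(f"odds ratio    = {odds_ratio:.2f}")     # 2.03
```

With these counts the exposed group has twice the disease risk of the unexposed group (relative risk of 2.0); because the disease is rare here, the odds ratio (about 2.03) closely approximates the relative risk, which is why case-control studies of rare outcomes can use it as a stand-in.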
Thus, studies were evaluated for the rigor of their design and analyses. Greater weight was given to studies that were conducted in a manner that reduced sources of error, bias, and confounding. More weight was given to studies in which there was independent assessment of exposure, either