Biological plausibility reflects knowledge of the biological mechanism by which an agent can lead to a health outcome. This knowledge comes from mechanism-of-action and other studies in pharmacology, toxicology, microbiology, and physiology, among other fields, typically conducted in animals. Biological plausibility is often difficult to establish and may not be known at the time an association is first documented. The committee considered factors such as evidence in animals and humans that exposure to the agent is associated with diseases known to have biological mechanisms similar to that of the disease in question; evidence that certain outcomes are commonly associated with occupational or environmental exposures; and knowledge of routes of exposure, storage in the body, and excretion suggesting that the disease is more likely to occur in some organs than in others (IOM, 1994b).
It is also important to consider whether alternative explanations might account for the finding of an association. The types of studies described earlier in this chapter are often used to demonstrate associations between exposure to particular agents and health outcomes. The validity of an association, however, can be challenged by chance, bias, and confounding introduced in assembling the study populations, which are more or less representative samples of the entire relevant populations. Because these sources of error may represent alternative explanations for an observed association, they must be ruled out to the extent possible. They are important both for interpreting the strengths and limitations of any given study and for understanding the criteria the committee used to evaluate the strength of the evidence for or against associations.
Chance is a type of error that can lead to an apparent association between an exposure to an agent and a health effect when none is actually present. An apparent effect of an agent on a health outcome may be the result of random variation due to sampling when assembling the study populations, rather than of the agent under study. Standard methods using confidence intervals or tests of statistical significance allow one to assess the role of chance variation due to sampling. A statistically significant finding is one for which the probability of observing the apparent association, when none really exists, is small (usually less than 5 percent). A confidence interval (for a relative risk, odds ratio, or other measure of association) is centered at the estimate of the measure of interest, and its width depends on the amount of variability in the sample. Although it is possible to calculate a confidence interval for any coverage probability, a 95 percent confidence interval is commonly used. If 95 percent confidence intervals were constructed for many repetitions of the study (i.e., many different samples drawn from the population of interest under the same circumstances), 95 percent of those intervals would contain the true value.
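The coverage property described above can be checked by simulation. The sketch below, which uses hypothetical cohort sizes and risks not drawn from the report, repeatedly simulates a two-group cohort study with a known true relative risk, constructs a 95 percent confidence interval for the relative risk on the log scale (the standard Katz method), and counts how often the interval contains the true value; the empirical coverage comes out close to 95 percent.

```python
# Monte Carlo illustration of 95% confidence-interval coverage for a
# relative risk. All study parameters below are hypothetical, chosen
# only to make the simulation concrete.
import math
import random

random.seed(42)

n_exposed, n_unexposed = 500, 500      # cohort sizes (hypothetical)
p_exposed, p_unexposed = 0.20, 0.10    # true risks -> true RR = 2.0
true_rr = p_exposed / p_unexposed

def binomial(n, p):
    """Number of cases among n subjects, each with risk p."""
    return sum(random.random() < p for _ in range(n))

covered = 0
reps = 2000
for _ in range(reps):
    a = binomial(n_exposed, p_exposed)      # cases among the exposed
    c = binomial(n_unexposed, p_unexposed)  # cases among the unexposed
    if a == 0 or c == 0:
        continue  # interval undefined; vanishingly rare at these risks
    rr_hat = (a / n_exposed) / (c / n_unexposed)
    # Standard error of log(RR), Katz log method
    se = math.sqrt(1 / a - 1 / n_exposed + 1 / c - 1 / n_unexposed)
    lo = math.exp(math.log(rr_hat) - 1.96 * se)
    hi = math.exp(math.log(rr_hat) + 1.96 * se)
    covered += lo <= true_rr <= hi

coverage = covered / reps
print(f"Empirical coverage: {coverage:.3f}")  # close to 0.95
```

The interval is built on the log scale because the sampling distribution of the log relative risk is approximately normal in moderate-to-large samples, which is why the exponentiated endpoints give an asymmetric interval around the estimated relative risk.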