Assessing the Effect of Chance

Statistical analysis provides a means to assess the degree to which an observed measure of association (such as an RR, OR, SMR, SIR, or PMR) derived from a study reflects a true association. Using statistical analysis, one can assess the probability (sometimes called the p value) that an association as large as or larger than the one actually observed could have been observed even if no true association exists, that is, could have arisen by chance. The magnitude of the p value is used as an aid in interpreting the results of a study. Typically, a p value of less than 0.05 is taken to indicate that such a result would be “unlikely” if no true association existed and consequently provides evidence of a real association. In contrast, a relative risk close to 1 indicates that there is little appreciable difference in risk (or rates) and thus little evidence of an association between the exposure and the outcome. In statistical terminology, a result is said to be “statistically significant” if the p value is smaller than 0.05; lower (more stringent) thresholds may be used when multiple comparisons are being made. It is important to note that this preset threshold is arbitrary and that the p value itself is influenced not only by the size of the association but also by the size of the study sample. For example, if the sample is very large, even very small associations may be found to be “statistically significant.” Conversely, a large association observed in a study with very few subjects might not be “statistically significant,” primarily because of the small sample size. Thus, in interpreting the results of statistical tests, it is critical to take into account not only the magnitude of the observed effect but also the size of the study sample. As a result, the committee decided not to rely on p values when evaluating the role of chance and did not identify studies as being “statistically significant.” Instead, the committee focused on confidence intervals (CIs) as a more appropriate measure for assessing the association of interest.
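
To make the sample-size point concrete, the following short Python sketch (purely illustrative; the counts are hypothetical and do not come from any study reviewed by the committee) computes a two-sided p value for the difference between two proportions with the standard normal approximation. The same relative risk of 1.5 is “nonsignificant” in a study of 200 subjects but highly “significant” in a study of 20,000:

```python
import math

def two_proportion_p_value(cases_a, n_a, cases_b, n_b):
    """Two-sided p value for the difference between two proportions,
    using the pooled two-sample z-test (normal approximation)."""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    pooled = (cases_a + cases_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Standard normal survival function expressed via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The same relative risk (0.15/0.10 = 1.5) at two sample sizes:
print(two_proportion_p_value(15, 100, 10, 100))          # p ~ 0.29, "not significant"
print(two_proportion_p_value(1500, 10000, 1000, 10000))  # p < 0.001, "significant"
```
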

A CI is the range of values most likely to include the true value of the association in question; it is based on the observed value of the association, the estimated variability of that value if the study were to be repeated many times, and a specified “level of confidence.” The confidence attaches to the procedure used to construct the interval rather than to any single result. Typically, 95% CIs are presented. Thus, one interprets a 95% CI to mean that if the study were replicated 100 times (that is, if 100 samples were chosen from the same population, the association were measured in each, and a CI were constructed each time), 95 of the 100 CIs would be expected to contain the true value of the association. The width of the CI is influenced by the variability in the study data and by the sample size: greater variability widens the interval, whereas a larger sample size narrows it. CIs are the most appropriate way to present the results of epidemiologic studies because they convey both the magnitude of the association and the variability of the findings.
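
As a concrete sketch of this point, the Python fragment below constructs a conventional 95% CI for a relative risk on the log scale, exp(ln RR ± 1.96 × SE(ln RR)); the counts are again hypothetical. With the same observed RR of 1.5, the small study yields a wide interval that includes 1, while the large study yields a narrow interval that excludes 1:

```python
import math

def rr_with_95_ci(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Relative risk with a 95% CI from the usual log-scale (Wald) formula:
    SE(ln RR) = sqrt(1/a - 1/n1 + 1/b - 1/n2)."""
    rr = (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)
    se_log = math.sqrt(1 / cases_exposed - 1 / n_exposed
                       + 1 / cases_unexposed - 1 / n_unexposed)
    lower = math.exp(math.log(rr) - 1.96 * se_log)
    upper = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lower, upper

# Same observed RR = 1.5; only the sample size differs.
print(rr_with_95_ci(15, 100, 10, 100))          # RR 1.5, CI ~ (0.71, 3.18) -- includes 1
print(rr_with_95_ci(1500, 10000, 1000, 10000))  # RR 1.5, CI ~ (1.39, 1.62) -- excludes 1
```
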

Assessing the Effect of Bias

Various types of bias inherent in epidemiologic studies may compromise the validity of their results. Bias may arise from the choice of study subjects or from the way information on exposure or outcome is assessed. It is important that issues of bias be carefully examined in any review of evidence. In evaluating published studies, the committee considered the likelihood of bias and the possible magnitude and direction of its effect on the results.
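
As a purely illustrative sketch of how the direction and magnitude of one common bias can be gauged (all parameter values below are hypothetical), the following Python fragment computes the expected observed RR when a binary exposure is misclassified nondifferentially, that is, with the same sensitivity and specificity regardless of outcome. Such misclassification attenuates the observed RR toward the null value of 1:

```python
def observed_rr(true_rr, baseline_risk, p_exposed, sensitivity, specificity):
    """Expected observed RR when a binary exposure is measured with the
    given sensitivity/specificity, identically for all subjects
    (nondifferential misclassification)."""
    risk_exposed = baseline_risk * true_rr
    # Expected population fractions in each *classified* exposure group.
    tp = p_exposed * sensitivity              # truly exposed, classified exposed
    fp = (1 - p_exposed) * (1 - specificity)  # truly unexposed, classified exposed
    fn = p_exposed * (1 - sensitivity)        # truly exposed, classified unexposed
    tn = (1 - p_exposed) * specificity        # truly unexposed, classified unexposed
    risk_classified_exposed = (tp * risk_exposed + fp * baseline_risk) / (tp + fp)
    risk_classified_unexposed = (fn * risk_exposed + tn * baseline_risk) / (fn + tn)
    return risk_classified_exposed / risk_classified_unexposed

# A true RR of 2.0 is observed as roughly 1.6 -- biased toward the null.
print(observed_rr(true_rr=2.0, baseline_risk=0.05, p_exposed=0.3,
                  sensitivity=0.8, specificity=0.9))
```
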

Bias is a systematic error in the estimation of association between an exposure and an outcome that can result in deviation of the observed value from the true value. Bias can


