of interest need to be analyzed only for the cases and the selected controls rather than for the entire cohort, which saves time and money.

Experimental Studies

Experimental studies in humans are the most reliable means of establishing causal associations between exposure to an agent of interest and human health outcomes. Key features of experimental studies are their prospective design, their use of a control group, and their random allocation of exposure to the agent under study. The randomized controlled trial is considered the most informative type of epidemiologic research design for the study of medications, surgical practices, biologic products, vaccines, and preventive interventions. The main drawbacks of randomized controlled trials are their expense, the time needed for completion, and the common practice of systematically excluding many groups of individuals, which limits the conclusions to a relatively small and homogeneous subgroup. Randomized trials are virtually non-existent as a means of determining the health risks of insecticides and solvents.

Measures of Association

The relationships that are examined in each type of epidemiologic study are quantified with statistical measures of association. In cohort studies, the measure of association between exposure to the agent of interest and outcome is the relative risk (RR), computed by dividing the risk or rate of developing the disease or condition over the followup period in the exposed group by the risk or rate in the unexposed group. A relative risk greater than 1 suggests that exposed subjects are more likely to develop the outcome than unexposed subjects; that is, it suggests a positive association between exposure to the putative agent and the disease. Conversely, a relative risk of less than 1 indicates that the agent might protect against the occurrence of the disease. A relative risk close to 1 indicates that there is little appreciable difference in risk or rate between the groups and thus little evidence of an association between the agent and the outcome.
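As a brief sketch, the relative-risk calculation described above can be illustrated with made-up cohort counts (the numbers here are purely hypothetical, not drawn from any study discussed in this report):

```python
# Hypothetical cohort data (illustrative only).
exposed_cases, exposed_total = 30, 1000      # outcomes among exposed subjects
unexposed_cases, unexposed_total = 15, 1000  # outcomes among unexposed subjects

# Risk of developing the outcome over the followup period in each group.
risk_exposed = exposed_cases / exposed_total        # 0.030
risk_unexposed = unexposed_cases / unexposed_total  # 0.015

# Relative risk: ratio of the two risks.
relative_risk = risk_exposed / risk_unexposed
print(relative_risk)  # 2.0 -> exposed subjects twice as likely to develop the outcome
```

Here an RR of 2.0 would suggest a positive association; values near 1 would suggest no appreciable difference between the groups.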

In occupational cohort mortality studies, risk estimates are often standardized against general population mortality rates (by age, sex, race, time period, and cause) because it can be difficult to identify a suitable control group of unexposed workers. The observed number of deaths among workers (from a specific cause, such as lung cancer) is compared with the number of deaths expected in a reference population, such as the general US population, accounting for age, sex, and calendar year. The ratio of observed to expected deaths is the standardized mortality ratio (SMR). The SMR is usually a good estimator of relative risk; an SMR greater than 1 generally suggests an increased risk of dying in the exposed group. Identical in calculation, but less common, is the standardized incidence ratio (SIR). Incidence is a measure of new cases of a disease; mortality is the number of reported deaths. SIRs are calculated less often than SMRs because disease incidence, as an end point, is often more difficult than death for investigators to identify and follow up; death certificates, which are used to ascertain mortality, are easier to obtain than the registry data needed to establish the incidence of a disease.
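The SMR calculation can be sketched with hypothetical age-stratified data: reference mortality rates are applied to the cohort's person-years in each stratum to obtain the expected number of deaths, which is then compared with the observed count. All figures below are invented for illustration:

```python
# Hypothetical age strata: (person-years in the worker cohort,
# reference-population mortality rate per person-year). Illustrative only.
strata = [
    (12000, 0.0010),  # e.g., ages 40-49
    (8000,  0.0030),  # e.g., ages 50-59
    (5000,  0.0080),  # e.g., ages 60-69
]
observed_deaths = 90  # deaths from the cause of interest observed in the cohort

# Expected deaths: apply each stratum's reference rate to the cohort's
# person-years in that stratum, then sum across strata (12 + 24 + 40 = 76).
expected_deaths = sum(person_years * rate for person_years, rate in strata)

smr = observed_deaths / expected_deaths
print(round(smr, 2))  # 1.18 -> modestly elevated mortality relative to the reference
```

A standardized incidence ratio (SIR) would be computed identically, with incident cases and reference incidence rates in place of deaths and mortality rates.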

A proportionate mortality ratio (PMR) study compares the proportions of deaths from a specific cause in a specified time period between exposed and nonexposed subjects, making it possible to determine whether there is an excess or deficit of deaths from that cause.
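A minimal sketch of the PMR, again with made-up counts: the proportion of deaths attributable to the cause of interest among exposed subjects is divided by the corresponding proportion in a comparison population.

```python
# Hypothetical death counts (illustrative only).
exposed_cause_deaths, exposed_all_deaths = 40, 400        # exposed group
reference_cause_deaths, reference_all_deaths = 50, 1000   # comparison group

# Proportion of all deaths due to the specific cause in each group.
prop_exposed = exposed_cause_deaths / exposed_all_deaths      # 0.10
prop_reference = reference_cause_deaths / reference_all_deaths  # 0.05

pmr = prop_exposed / prop_reference
print(pmr)  # 2.0 -> an excess of deaths from this cause among the exposed
```

A PMR above 1 suggests an excess of deaths from the cause among the exposed; a value below 1 suggests a deficit.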

Copyright © National Academy of Sciences. All rights reserved.