…association that is observed in epidemiologic data, although the probability may be extremely small.

Having judged that an association in a population under study cannot be shown to be due to error or bias, an investigator computes a measure of association that takes into account any relevant differences between the exposed and the unexposed groups. It is also usual to quantify the uncertainty in a measured association by calculating an interval of possible values for the true measure of association. This confidence interval describes the range of values most likely to include the true measure of association if the statistical model is correct. It is always possible that the true association lies outside the confidence interval, either because the model is incomplete or otherwise in error or because a rare event has occurred (with rare defined by the probability level, commonly 5%).
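As a rough illustration of how such an interval can be computed, the Python sketch below applies the standard large-sample (Wald) approximation on the log scale to a rate ratio; the function name and the counts are hypothetical, and a real analysis would also adjust for relevant differences between the groups.

```python
import math

def rate_ratio_ci(cases_exposed, py_exposed, cases_unexposed, py_unexposed,
                  z=1.96):
    """Rate ratio and an approximate 95% confidence interval,
    computed on the log scale (Wald interval)."""
    rr = (cases_exposed / py_exposed) / (cases_unexposed / py_unexposed)
    # Approximate standard error of ln(RR) for Poisson counts: sqrt(1/a + 1/b)
    se_log = math.sqrt(1 / cases_exposed + 1 / cases_unexposed)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Hypothetical counts: 30 cases in 10,000 person-years among the exposed,
# 20 cases in 12,000 person-years among the unexposed.
rr, lo, hi = rate_ratio_ci(30, 10_000, 20, 12_000)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With these invented counts the interval barely excludes 1.0, which illustrates the point in the text: the interval describes where the true measure most likely lies only if the statistical model is correct.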

Another step in assessing whether radiation exposure may be the cause of some disease is to compare the results of a number of studies that have been conducted on populations that have been exposed to radiation. If a general pattern of a positive association between radiation exposure and a disease can be demonstrated in several populations and if these associations are judged not to be due to confounding, bias, chance, or error, a conclusion of a causal association is strengthened. However, if studies in several populations provide inconsistent results and no reason for the inconsistency is apparent, the data must be interpreted with caution. No general conclusion can be made that the exposure is a cause of the disease.

An important exercise is assessing the relation between radiation dose and the risk of disease. There is no question that radiation exposure at relatively high doses has caused disease and death (NRC 1990; UNSCEAR 2000b). At relatively low doses, however, there is still uncertainty as to whether there is an association between radiation exposure and disease, and if there is an association, whether it is causal.

Following is a discussion of the basic elements of how epidemiologists collect, analyze, and interpret data. The essential feature of data collection, analysis, and interpretation in any science is comparability. The subpopulations under study must be comparable, the methods used to measure exposure to radiation and to measure disease must be comparable, the analytic techniques must ensure comparability, and the interpretation of the results of several studies must be based on comparable data.

COLLECTION OF EPIDEMIOLOGIC DATA

Types of Epidemiologic Studies

Research studies are often classified as experimental or observational, depending on the manner in which the levels of the explanatory factors are determined. When the levels of at least one explanatory factor are under the control of the investigator (for example, when treatment is assigned by a random process), the study is said to be experimental. An example is a clinical trial designed to assess the utility of some treatment (e.g., radiation therapy). When the levels of all explanatory factors are determined by observation only, the study is observational. The majority of studies relevant to the evaluation of radiation risks in human populations are observational. For example, in the study of atomic bomb survivors, neither the conditions of exposure nor the levels of exposure to radiation were determined by design.

Two basic strategies are used to select participants in an observational epidemiologic study that assesses the association between exposure to radiation and disease: select exposed persons and look at subsequent occurrence of disease, or select diseased persons and look at their history of exposures. A study comparing disease rates among exposed and unexposed persons, in which exposure is not determined by design, is termed a “cohort” or a “follow-up” study. A study comparing exposure among persons with a disease of interest and persons without the disease of interest is termed a “case-control” or “case-referent” study.
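To make the two designs concrete, the sketch below (with hypothetical 2 × 2 counts) computes the measure each design naturally yields: a cohort study compares disease risk between exposed and unexposed persons, whereas a case-control study compares the odds of exposure between cases and controls.

```python
def risk_ratio(a, b, c, d):
    """Cohort study: risk of disease among exposed / among unexposed.
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """Case-control study: odds of exposure among cases / among controls.
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical counts for each design.
print(risk_ratio(40, 960, 20, 980))   # cohort: follow exposed vs. unexposed
print(odds_ratio(40, 60, 160, 740))   # case-control: compare exposure histories
```

When the disease is rare in the source population, the exposure odds ratio from a case-control study approximates the risk ratio that a cohort study would estimate.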

Randomized Intervention Trials

Intervention trials are always prospective: subjects with some disease are enrolled into the study, and assignment to some form of treatment is made according to a process that is not related to the basic characteristics of the individual patient (Fisher and others 1985). In essence, this assignment is made randomly so that the two groups being studied are comparable except for the treatment being evaluated. Random is not the same as haphazard; a randomizing device must be used, such as a table of random numbers, a coin toss, or a randomizing computer program. Random assignment does not guarantee comparability in any single trial, but the randomization process is a powerful means of minimizing systematic differences between the two groups ("confounding bias") in factors that may be related to the outcome of interest, such as a specific disease. Further, blinded assessment of the health outcome will tend to minimize bias in assessing the utility of alternative methods of treatment. Another important aspect of randomization is that it permits the assessment of uncertainty in the data, generally as p-values or confidence intervals. Intervention trials related to radiation exposure are conducted with the expectation that the radiation will assist in curing some disease; however, they may have the unintended side effect of increasing the risk of some other disease.
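To illustrate the distinction between random and haphazard, the sketch below uses a seeded pseudorandom generator as the randomizing device; the subject identifiers and the fixed seed are invented for this example, and real trials often use more elaborate schemes (e.g., stratified or permuted-block randomization).

```python
import random

def randomize(subject_ids, seed=20240101):
    """Assign enrolled subjects to 'treatment' or 'control' using a
    seeded pseudorandom generator, keeping the two groups equal in size."""
    rng = random.Random(seed)   # a randomizing device, not a haphazard choice
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# Hypothetical enrollment list.
print(randomize([f"subject-{i:03d}" for i in range(1, 9)]))
```

Because the assignment depends only on the generator and not on any characteristic of the patients, differences between the groups other than treatment are due to chance, which is what justifies the p-values and confidence intervals mentioned above.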

Although a randomized study is generally regarded as the ideal design to assess the possible causal relationship between radiation and some disease in a human population, there are clearly ethical and practical limitations in its conduct. There must be the expectation that in the population under study, radiation will lead to an improvement in health


