Causality and Evidence
The concept of causality is of cardinal importance in health research, clinical practice, and public health policy. It also lies at the heart of this committee's charge: to make causal inferences about the relation between vaccines routinely administered to children in the United States and several specific adverse health outcomes. Despite its importance, however, causality is not a concept that is easy to define or understand (Kramer and Lane, 1992). Consider, for example, the relation between vaccine x and Guillain-Barré syndrome (GBS). Does the statement "Vaccine x causes GBS" mean that (1) all persons immunized with vaccine x will develop GBS, (2) all cases of GBS are caused by exposure to vaccine x, or (3) there is at least one person whose GBS was caused or will be caused by vaccine x?
The first interpretation corresponds to the notion of a sufficient cause; vaccine x is a sufficient cause of GBS if all vaccine x recipients develop the disease. Vaccine x is a necessary cause of GBS if the disease occurs only among vaccine x recipients (second interpretation above). Although the idea that a "proper" cause must be both necessary and sufficient underlies Koch's postulates of causality (see Glossary in Appendix C), it is now generally recognized that for most exposure-outcome relations, exposure (i.e., the putative cause) is neither necessary nor sufficient to cause the
outcome (third interpretation above). In other words, most health outcomes of interest have multifactorial etiologies.
A good example is coronary heart disease (CHD). It has been amply demonstrated that smoking, high blood pressure, lack of exercise, and high serum cholesterol levels are all causally related to the development of CHD. Nonetheless, many people with one or more of these risk factors do not develop CHD, and some cases of CHD occur in people without any of the risk factors. Most of the adverse events considered by the committee have multifactorial etiologies.
Types of Causal Questions
The causal relation between a vaccine and a given adverse event can be considered in terms of three different questions (Kramer and Lane, 1992):
Can It? (potential causality): Can the vaccine cause the adverse event, at least in certain people under certain circumstances?
Did It? ("retrodictive" causality): Given an individual who has received the vaccine and developed the adverse event, was the event caused by the vaccine?
Will It? (predictive causality): Will the next person who receives the vaccine experience the adverse event because of the vaccine? Or equivalently: How frequently will vaccine recipients experience the adverse event as a result of the vaccine?
Each of these causality questions has a somewhat different meaning, and for each, there are different methods of assessment. In the section below, each question will be discussed in turn, with reference to how it relates to the committee's charge and how the committee attempted to answer it.
The committee has been charged with answering the Can It? causality question for the relations between vaccines routinely administered to children and several specific adverse events. The question is conventionally approached through controlled epidemiologic studies. (The term epidemiologic studies is used throughout this report in its broad sense to denote studies of disease and other health-related phenomena in groups of human subjects. The term thus includes many clinical studies but excludes animal and in vitro studies on the one hand and individual case reports on the other. See below the section Sources of Evidence for Causality for a more detailed description of epidemiologic studies.) Can It? is generally answered in the affirmative if the relative risk (the ratio of the rate of occurrence of the adverse event in vaccinated persons to the rate in otherwise comparable
unvaccinated persons) is greater than 1, provided that systematic error (bias) and random error (sampling variation) can be shown to be improbable explanations for the findings. In other words, if a statistically significant relative risk has been obtained in an epidemiologic study (or a meta-analysis of several epidemiologic studies) and is unlikely to be due to systematic bias, Can It? causality can be accepted.
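The arithmetic behind this criterion can be illustrated with a brief sketch. All of the counts below are invented for illustration and are not data from this report; the confidence interval uses the standard large-sample approximation for the log relative risk.

```python
import math

# Hypothetical counts, invented for illustration (not data from this report):
# 12 adverse events among 10,000 vaccinated; 5 among 10,000 unvaccinated.
a, n1 = 12, 10_000   # events and total, vaccinated group
b, n0 = 5, 10_000    # events and total, unvaccinated group

rr = (a / n1) / (b / n0)                        # relative risk
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)  # large-sample SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)  # lower 95% confidence limit
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)  # upper 95% confidence limit

print(f"RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

In this invented example the relative risk is 2.4, but the confidence interval still includes 1.0 because the event counts are small, so chance alone could not be excluded; even with a significant result, bias would still have to be ruled out before Can It? causality could be accepted.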
Much of the epidemiologic literature on causality has focused on Can It?, and a widely used set of criteria has evolved for Can It? causality assessment (Hill, 1965; Stolley, 1990; Susser, 1973; U.S. Department of Health, Education, and Welfare, 1964). These criteria are as follows:
Strength of association: A relative risk (or odds ratio) of 1.0 indicates no association between the vaccine and the adverse event. Relative risks between 1.0 and 2.0 are generally regarded as indicating a weak association, whereas higher values indicate a moderate or strong association. In general, the higher the relative risk, the less likely the vaccine-adverse event association is to be entirely explained by one or more sources of analytic bias.
Analytic bias: Analytic bias is a systematic error in the estimate of association between the vaccine and the adverse event. It can be categorized under four types: selection bias, information bias, confounding bias, and reverse causality bias. Selection bias refers to the way that the sample of subjects for a study has been selected (from a source population) and retained. If the subjects in whom the vaccine-adverse event association has been analyzed differ from the source population in ways linked to both exposure to the vaccine and development of the adverse event, the resulting estimate of association will be biased. Information bias can result in a bias toward the null hypothesis (no association between the vaccine and the adverse event), particularly when ascertainment of either vaccine exposure or event occurrence has been sloppy; or it may create a bias away from the null hypothesis through such mechanisms as unblinding, recall bias, or unequal surveillance in vaccinated versus nonvaccinated subjects. Confounding bias occurs when the vaccine-adverse event association is biased as a result of a third factor that is both capable of causing the adverse event and associated with exposure to the vaccine. Finally, reverse causality bias can occur unless exposure to the vaccine is known to precede the adverse event.
Biologic gradient (dose-response effect): In general, Can It? causality is strengthened by evidence that the risk of occurrence of an outcome increases with higher doses or frequencies of exposure. In the case of vaccines, however, dose and frequency tend to be fixed. Moreover, since some of the adverse events under consideration by the committee could represent hypersensitivity or another type of idiosyncratic reaction, the absence of a dose-response effect might not constitute strong evidence against a causal relation.
Statistical significance: Might chance—that is, sampling variation—be responsible for the observed vaccine-adverse event association? The magnitude of the P (probability) value (or the width of the confidence interval) associated with an effect measure such as the relative risk or risk difference is generally used to estimate the role of chance in producing the observed association. This type of quantitative estimation is firmly founded in statistical theory on the basis of repeated sampling. No similar quantitative approach is usually possible, however, for assessing nonrandom errors (bias) in estimating the strength of the association.
Consistency: Can It? causality is strengthened if the vaccine-adverse event association has been detected in more than one study, particularly if the studies employed different designs and were undertaken in different populations.
Biologic plausibility and coherence: The vaccine-adverse event association should be plausible and coherent with current knowledge about the biology of the vaccine and the adverse event. Such information includes experience with the naturally occurring infection against which the vaccine is given, particularly if the vaccine is a live attenuated virus. Animal experiments and in vitro studies can also provide biologic plausibility, either by demonstrating adverse events in other animals that are similar to the ones in humans or by indicating pathophysiologic mechanisms by which the adverse event might be caused by receipt of the vaccine.
Although Can It? causality is usually addressed from epidemiologic studies, an affirmative answer can occasionally be obtained from individual case reports. Thus, if one or more cases have clearly been shown to be caused by a vaccine (i.e., Did It? can be answered strongly in the affirmative), then Can It? is also answered, even in the absence of epidemiologic data. In several circumstances, for example, the committee based its judgment favoring acceptance of a causal relation solely on one or more convincing case reports.
In this regard, however, it must also be added that the absence of convincing case reports cannot be relied upon to answer Can It? in the negative. If a given vaccine has an extremely long history of use and no cases of occurrence of a particular adverse event have been reported following its administration, doubt is inevitably cast on a possible causal relation. Given an extremely rare adverse event and the notorious problems of underreporting in passive surveillance systems, however, the absence of such reports is insufficient to reject a causal relation. The committee acknowledges that that which has not been reported might indeed have occurred.
Instead, the committee relied on epidemiologic studies to reject a causal relation. On the basis of the combined evidence from one or more controlled epidemiologic studies of high methodologic quality and sufficient statistical power (sample size), failure to detect an association between a vaccine and a particular adverse event was judged as favoring rejection of a causal relation.
Even though the committee was not specifically charged with assessing the causal role of vaccines in individual cases, such assessments can be useful in evaluating Can It? causality. For many of the vaccine-adverse event associations under consideration, no epidemiologic studies have been reported, and individual case reports provide the only available evidence. As discussed above, if that evidence strongly suggests that the vaccine did cause the adverse event in one or more cases, then it is logical to conclude that it can cause the event.
In fact, many of the associations that the committee was charged with examining were first suggested because one or more cases of adverse events were found to occur following receipt of the vaccine. Some of these originated from case reports in the published medical literature; others originated from reports by physicians, nurses, parents, or vaccine recipients who observed the adverse event following exposure to the vaccine. The arousal of one's suspicions that a vaccine might be the cause of an adverse event that occurs within hours, days, or weeks following receipt of the vaccine is natural and understandable. But the mere fact that B follows A does not mean that A caused B; inferring causation solely on the basis of a proper temporal sequence is the logical fallacy of post hoc ergo propter hoc (literally, "after this, therefore because of this").
Many factors go into evaluating the causal relation between vaccine exposure and adverse events from individual case reports. Much of the literature in this area has come from postmarketing surveillance programs that monitor adverse drug reactions, such as those programs maintained by the U.S. Food and Drug Administration and comparable agencies in other countries (Venulet, 1982). Such passive, "spontaneous reporting" programs have been shown to have problems with both false-negative and false-positive results; that is, many of the reported cases are probably not caused by exposure to the drug or vaccine, whereas many drug- or vaccine-caused events go unreported (Faich, 1986; Péré, 1991; Tubert et al., 1992).
The information from case reports that is useful in assessing causality can be considered under the following seven headings (Kramer, 1981):
Previous general experience with the vaccine: How long has it been on the market? How many individuals have received it? How often have vaccine recipients experienced similar events? How often does the
event occur in the absence of vaccine exposure? Does a similar event occur more frequently in animals exposed to the vaccine than in appropriate controls?
Alternative etiologic candidates: Can a preexisting or new illness explain the sudden appearance of the adverse event? Does the adverse event tend to occur spontaneously (i.e., in the absence of known cause)? Were drugs, other therapies, or diagnostic tests and procedures that can cause the adverse event administered?
Susceptibility of the vaccine recipient: Has he or she received the vaccine in the past? If so, how has he or she reacted? Does his or her genetic background or previous medical history affect the risk of developing the adverse event as a consequence of vaccination?
Timing of events: Is the timing of onset of the adverse event as expected if the vaccine is the cause? How does that timing differ from the timing that would occur given the alternative etiologic candidate(s)? How does the timing, given vaccine causation, depend on the suspected mechanism (e.g., immunoglobulin E versus T-cell-mediated)?
Characteristics of the adverse event: Are there any available laboratory tests that either support or undermine the hypothesis of vaccine causation? For live attenuated virus vaccines, has the vaccine virus (or a revertant) been isolated from the target organ(s) or otherwise identified? Was there a local reaction at the site at which the vaccine was administered? How long did the adverse event last?
Dechallenge: Did the adverse event diminish as would be expected if the vaccine caused the event? Is the adverse event of a type that tends to resolve rapidly regardless of cause (e.g., a febrile seizure)? Is it irreversible (e.g., death or a permanent neurologic deficit)? Did specific treatment of the adverse event cloud interpretation of the observed evolution of the adverse event?
Rechallenge: Was the vaccine readministered? If so, did the adverse event recur?
Three methods could be used to assess Did It? causality from reports of adverse events following receipt of vaccines. The most common is global introspection (Lane, 1984). The assessor attempts to take the relevant aforementioned factors into account and to weigh them appropriately in arriving at an overall decision, which is usually expressed as "yes" or "no." Although causality in individual cases is occasionally obvious, it may be difficult or impossible to consider and properly weigh all the relevant facts simultaneously, let alone to possess those facts (Kramer, 1986).
A second method for assessing Did It? causality is based on the construction of algorithms (branched logic trees) (Venulet, 1982). Such algorithms have been shown not only to improve the reproducibility and validity
of causality assessments but also to make those assessments more accountable (Hutchinson and Lane, 1989). In other words, it is easier to see how the assessment methods were used to reach the conclusions. Most algorithms are presented in the form of a flowchart or a questionnaire, which asks a series of questions and assigns a score on the basis of the assessor's answers to those questions. The score is then used to assign a categorical probability rating such as definite, probable, possible, or unlikely.
The third approach is Bayesian analysis (Lane et al., 1987). It is based on Bayes' theorem and calculates the posterior probability of vaccine causation (the probability that the event was caused by the vaccine) from estimates of the prior probability (the probability, before observing the particular facts of the individual case, that the vaccine caused the adverse event) and a series of likelihood ratios, one for each pertinent element of the observed case. Each likelihood ratio is calculated by dividing the probability of observing what actually occurred, under the hypothesis that the vaccine was the cause, by the probability of observing the same occurrence given nonvaccine causation. The Bayesian approach not only provides a direct estimate of the Did It? probability for a given case but is also accountable, in that the component estimates that go into calculating the posterior probability are documented. The prior probability relates to the first two headings of case information listed above and is often based on epidemiologic data, when available, whereas individual case information is used to construct the likelihood ratios for the third through seventh headings. Full Bayesian analyses are often complicated and time-consuming. Moreover, because the data necessary to estimate the component prior probabilities and likelihood ratios may be unavailable, quantitative expression of the assessor's uncertainty is often highly subjective, even when based on expert opinion.
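The arithmetic of such an analysis can be sketched in a few lines. All of the numbers below (the prior probability and the likelihood ratios) are invented for illustration and are not estimates for any actual vaccine-adverse event pair; a real assessment would base the prior on epidemiologic data and the likelihood ratios on the details of the case.

```python
# Informal Bayesian sketch of the Did It? question (all numbers invented).
prior_p = 0.05                        # hypothetical prior probability of vaccine causation
likelihood_ratios = [4.0, 2.5, 0.8]   # hypothetical LRs for, e.g., timing, clinical
                                      # features, and medical history

prior_odds = prior_p / (1 - prior_p)
posterior_odds = prior_odds
for lr in likelihood_ratios:
    # Each LR = P(observation | vaccine caused event) / P(observation | other cause)
    posterior_odds *= lr

posterior_p = posterior_odds / (1 + posterior_odds)
print(f"posterior probability of vaccine causation = {posterior_p:.2f}")  # 0.30
```

Note that a likelihood ratio below 1.0 (here, the third element) pulls the assessment toward nonvaccine causation; the method makes each such judgment explicit, which is the source of its accountability.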
In evaluating the case reports available to the committee, the committee adopted an informal Bayesian approach. The main elements of the case reports used in the committee's assessments included the individual's medical history, the timing of onset of the adverse event following vaccine administration, specific characteristics of the adverse event, and follow-up information concerning its evolution. Each relevant piece of case information was assessed for its strength of evidence for vaccine versus nonvaccine causation. When such information (particularly concerning timing) was unavailable, the committee usually found it difficult or impossible to infer causality for that case.
The individual's medical history was taken into account in considering the role of alternative etiologic candidates (which affects the prior probability of vaccine causation). For example, a history of abnormal neurologic development or seizures prior to receipt of a vaccine reduces the probability that the encephalopathy or residual seizure disorder that developed after vaccination was caused by the vaccine.
The committee attempted to establish objective criteria for the expected timing of onset for each type of adverse event under consideration. For example, data on experimental acute demyelinating encephalomyelitis and postinfectious GBS were used to establish a time window of 5 days to 6 weeks for the likely occurrence of a vaccine-caused case of GBS, with those cases occurring 7 to 21 days postvaccination judged as being especially likely to be caused by the vaccine. The mere occurrence of a case of GBS 2 weeks after receipt of a vaccine becomes interpretable, however, only when compared with the background number of cases (i.e., in the absence of vaccine exposure) that would be expected to occur in individuals of that age and sex; reliable age- and sex-specific background incidence rates for GBS are lacking. Because of the rather diffuse time window and the lack of reliable descriptive epidemiologic information, therefore, appropriate timing of onset, in and of itself, is insufficient to infer causality for an individual case. A useful contrast is provided by anaphylaxis, which is caused by exposure to a foreign antigen or drug. Given the occurrence of a clinically and pathologically typical case of anaphylaxis within minutes of receipt of a vaccine, it is very difficult to blame anything else.
The characteristics of the adverse event can also be helpful. Thus, the committee tried to ensure that cases of GBS or anaphylaxis met established clinical and laboratory criteria for those conditions. But mere confirmation that a case is "true GBS," although necessary, is insufficient to infer vaccine causation, because such cases do not differ from background cases that occur after a viral infection or spontaneously. On the other hand, clinical and pathologic findings consistent with the diagnosis of anaphylaxis are helpful in distinguishing sudden collapse or death caused by anaphylaxis from sudden collapse or death caused by myocardial infarction, stroke, or some other sudden catastrophic event.
Dechallenge, that is, discontinuing the suspected vaccine or reducing its dose, rarely contributes useful information. Unlike drugs, vaccines are administered at a single point in time, and their immunologic effects tend to persist well after the vaccine antigen(s) has been eliminated. Thus, the evolution of the adverse event is often not helpful in assessing vaccine causation.
Rechallenge is unusual, because physicians are unlikely to readminister a vaccine previously associated with an adverse event. When rechallenge does occur, however, the recurrence or nonrecurrence of the adverse event will often have a major impact on the causality assessment.
The Will It? causality question refers to how frequently a vaccine causes a specific adverse event and can relate to either individuals or populations.
For individuals, the question refers to the probability that a given vaccine recipient will experience the adverse event because of the vaccine. For populations, Will It? refers to the proportion of vaccinees who will experience the adverse event as a result of the vaccine. For either individuals or populations, the answer to Will It? is best estimated by the magnitude of the risk difference (attributable risk): the incidence of the adverse event among vaccine recipients minus the incidence of the adverse event among otherwise similar nonrecipients. This entity is often confused with the etiologic fraction, probably because the latter is also referred to as the population attributable risk.
The risk difference depends on both the background incidence of the adverse event (i.e., among nonrecipients of the vaccine) and the relative risk of its occurrence in vaccine recipients versus nonrecipients. Thus, even when the relative risk is high, the risk difference will be low if the event is extremely rare.
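As a worked example with invented numbers, even a tenfold relative risk yields only a small risk difference when the background incidence is very low:

```python
background = 1 / 100_000    # hypothetical incidence among nonrecipients (1 per 100,000)
rr = 10.0                   # hypothetical relative risk among recipients

risk_vaccinated = background * rr
risk_difference = risk_vaccinated - background   # attributable risk
print(f"risk difference = {risk_difference:.6f}")  # 0.000090: 9 extra cases per 100,000
```

Despite the dramatic-sounding relative risk of 10, only about 9 additional cases per 100,000 vaccinees would be attributable to the vaccine in this hypothetical scenario.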
Will It? causality assessments are essential for risk-benefit considerations, because the risk difference expresses the probability of the risk of an adverse event caused by the vaccine. But Will It? depends on Can It?; if the evidence is insufficient to conclude whether a vaccine can cause a given adverse event, then it is also insufficient to conclude whether it will. Moreover, when an affirmative answer to the Can It? question is based only on case reports rather than epidemiologic studies, no quantitative estimate of Will It? is possible.
Even though the Will It? question was not part of the committee's specific mandate, estimates of the risk difference (attributable risk) are provided, whenever possible, for those associations for which the committee judged the evidence to favor acceptance of a (Can It?) causal relation and for which epidemiologic data provide information on the incidence of the adverse event among nonvaccinees and the relative risk of its occurrence among vaccinees.
SOURCES OF EVIDENCE FOR CAUSALITY
The sources of evidence for causality examined by the committee include demonstrated biologic plausibility, reports of individual cases or series of cases, and epidemiologic studies. In an epidemiologic study, the investigators measure one or more health-related attributes (exposures, outcomes, or both) in a defined sample of human subjects and make inferences about the values of those attributes or the associations among them (or about both the values and associations) in the source population from which the study sample originates. Epidemiologic studies can be either uncontrolled (descriptive) or controlled (analytic), observational (survey) or experimental (clinical trial). These sources of evidence are discussed in greater
detail below in the same order in which they will be considered within each of the vaccine- and adverse event-specific chapters.
All of the vaccine-adverse event associations assessed in this report have some biologic plausibility, at least on theoretical grounds. That is, a knowledgeable person could postulate a feasible mechanism by which the vaccine could cause the adverse event. Actual demonstration of biologic plausibility, however, was based on the known effects of the natural disease against which the vaccine is given and the results of animal experiments and in vitro studies. Only demonstrated biologic plausibility was considered by the committee in reaching its causality judgments.
Case Reports, Case Series, and Uncontrolled Observational Studies
The committee obtained reports of individual cases of adverse events following receipt of vaccine through the published medical literature as well as from passive, spontaneous surveillance systems established by the vaccine manufacturers, the U.S. Food and Drug Administration, and the Centers for Disease Control and Prevention. These include the Monitoring System for Adverse Events Following Immunization and the Spontaneous Reporting System, as well as the more recent Vaccine Adverse Event Reporting System (VAERS). Appendix B identifies the material from these systems obtained and reviewed by the committee. Chapter 10 includes a discussion of the limitations of passive surveillance systems such as these, as well as an analysis of the data contained within VAERS regarding reports of deaths following vaccination.
Uncontrolled observational studies are usually based on a cohort design, in which an identified group of vaccinees is followed for some period of time to detect the occurrence of one or more adverse events. These studies often incorporate more active surveillance than is the case in the passive, spontaneous reporting systems mentioned above, although a clear distinction from case series emanating from defined population bases is often difficult. Because no nonexposed control group is included in such studies, however, the rates of occurrence of the adverse events under consideration can usually be interpreted only descriptively, and the evidence derived therefrom is rarely helpful in either accepting or rejecting a causal relation. Also included under uncontrolled observational studies are reports of vaccine exposure in a representative group of individuals experiencing the adverse event. Such studies can also overlap with case series, although the authors of case series often attempt to make causal inferences (or hypotheses) concerning exposure to vaccines and/or other factors and, hence,
usually provide considerably more detail about alternative etiologic candidates, the timing of the onset of the adverse event following vaccine administration, and clinical and pathologic descriptions of the adverse event.
Uncontrolled epidemiologic studies do not yield direct estimates of the effect of vaccine exposure on the risk of developing the adverse event. Sometimes, however, the existence of reliable data on the risk in unexposed subjects can form the basis of an external (to the study) control group and, hence, an indirect estimate of the vaccine effect.
Controlled Observational Studies
Controlled observational studies permit a direct estimate of the effect of vaccine exposure on the occurrence of the adverse event. Most are based on either a cohort or a case-control design. In controlled cohort studies, a defined group of individuals exposed to a given vaccine are followed longitudinally for the occurrence of one or more adverse events of interest, and the rate of such occurrence is compared with the rate in an otherwise similar group of nonexposed individuals by using either the ratio of rates (relative risk) or their difference (risk difference). In many populations, however, exposure to vaccines is virtually universal; exposure can then be defined within a rather narrow time window; that is, the rate of occurrence of an adverse event within 2 weeks of vaccine administration can be compared with the rate of occurrence of an adverse event several weeks or months thereafter. In case-control studies, rates of prior exposure to the suspected vaccine between individuals with (the cases) and without (the controls) the adverse event are compared. No direct calculation of relative risk or risk difference can be made from a case-control study, but the exposure odds ratio (the odds of exposure among the cases divided by the odds of exposure among the controls) can be shown to be a very good estimate of the true relative risk when the adverse event is rare. In fact, the case-control design is often the only feasible epidemiologic research design for rare events (e.g., GBS, transverse myelitis, optic neuritis, and Stevens-Johnson syndrome). As with cohort studies, the time window of exposure (prior to the occurrence of the adverse event) should be defined narrowly to reflect the biologic latent period corresponding to the pathogenesis of the suspected adverse event.
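The rare-event approximation can be checked numerically. The cohort counts below are invented solely to show how closely the odds ratio tracks the relative risk when the adverse event is rare:

```python
# Hypothetical cohort in which the adverse event is rare (all counts invented).
a, n1 = 30, 100_000   # events and total, vaccinated
b, n0 = 10, 100_000   # events and total, unvaccinated

rr = (a / n1) / (b / n0)                         # relative risk (requires cohort follow-up)
odds_ratio = (a / (n1 - a)) / (b / (n0 - b))     # what a case-control design estimates
print(f"RR = {rr:.4f}, OR = {odds_ratio:.4f}")   # nearly identical for a rare event
```

With only 30 and 10 events per 100,000, the odds ratio (about 3.001) is indistinguishable in practice from the relative risk of 3.0; the approximation deteriorates only as the event becomes common.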
Other types of controlled epidemiologic studies can also provide useful information. In ecologic studies, for example, the rates of a given adverse event are compared among otherwise similar regions or countries with different policies for administering a suspected vaccine. Such studies assess the vaccine-adverse event association at the population level, and therefore provide only indirect evidence of the association among individuals.
Controlled Clinical Trials
The epidemiologic study designs discussed up to this point are all observational. Allocation of exposure (receipt or nonreceipt of a given vaccine) was decided either by the vaccine recipients, by their parents, or by their physicians—not by the study investigators. The investigators merely attempted to observe the effect of vaccine exposure; they did not control who did or did not receive the vaccine. This absence of control over who gets exposed is what distinguishes observational studies from experimental studies, which are also called clinical trials. In a controlled clinical trial of a vaccine, outcomes are compared in subjects who are allocated by the investigator to receive or not receive the vaccine. The controlled clinical trial design provides the strongest scientific evidence bearing on the causal relation between a vaccine and an adverse event, particularly when exposure versus nonexposure to a vaccine is assigned in a random fashion. The study design is then referred to as a randomized clinical trial. As with observational cohort studies, the effect of vaccine exposure on the occurrence of the adverse event is usually expressed as the relative risk or risk difference. Unfortunately, many of the adverse events under consideration by the committee are so rare that even large, multicenter randomized trials would lack the statistical power to detect differences in the incidence of a rare adverse event.
Combining the Evidence
When two or more epidemiologic studies that bear on a given vaccine-adverse event association were located by the committee (particularly when they shared a similar design), the committee used meta-analysis to pool the results from those studies and thereby gain both increased statistical power and enhanced generalizability (Dickersin and Berlin, 1992). Even a meta-analysis of epidemiologic studies, however, does not help in combining evidence from different types of sources. Because no generally accepted rules exist for combining such evidence, the committee adopted its own operational criteria.
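One standard pooling technique, sketched here with invented study results, is fixed-effect inverse-variance weighting on the log relative risk scale; the committee's actual meta-analytic procedures may have differed, and this sketch is illustrative only.

```python
import math

# Hypothetical per-study results: (relative risk, standard error of log RR).
studies = [(1.8, 0.40), (1.2, 0.25), (1.5, 0.30)]

weights = [1 / se**2 for _, se in studies]       # inverse-variance weights
pooled_log_rr = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log_rr)
lo = math.exp(pooled_log_rr - 1.96 * pooled_se)  # 95% confidence limits for the pooled RR
hi = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled RR = {pooled_rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The pooled confidence interval is narrower than that of any single study, which is precisely the gain in statistical power that motivates combining studies.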
Although randomized clinical trials are generally accepted as providing the most scientifically valid assessment of causal relations, most have been too small to contribute any useful evidence bearing on the vaccine-adverse event associations under consideration by the committee. Thus, case reports, case series, uncontrolled observational studies, and controlled observational epidemiologic studies were often the main basis for the committee's judgment. As mentioned above, only epidemiologic studies were used to conclude that the evidence favored rejection of a causal relation. In the absence of epidemiologic studies favoring acceptance of a causal relation, individual case reports and case series were relied upon, provided that the nature and timing of the adverse event following vaccine administration and the absence of likely alternative etiologic candidates were such that a reasonable certainty of causality could be inferred (as described above) from one or more case reports. The presence or absence of demonstrated biologic plausibility was also considered in weighing the overall balance of evidence for and against a causal relation. In the absence of convincing case reports or epidemiologic studies, however, the mere demonstration of biologic plausibility was felt to constitute insufficient evidence to accept or reject a causal relation.
Acceptance and rejection of a causal relation between any exposure and outcome are inherently asymmetric. Very strong evidence in favor of such a causal relation can be said to establish a causal relation, although 100 percent "proof," in the mathematical sense, is never possible. It is almost never possible, however, to be as sure about rejecting such a causal relation, because even the largest population-based epidemiologic studies have insufficient statistical power to detect extremely rare causes of an outcome (e.g., an excess risk of 1 per 1 million population). Hence, the categories in which the committee has summarized the evidence for causality (see below) reflect this essential asymmetry.
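The power problem described above can be made concrete with a simple Poisson approximation: if the excess risk attributable to a vaccine is 1 per 1 million, even a cohort of 1 million vaccinees expects only about one attributable case, and there is a substantial chance of observing none at all. The numbers below are purely illustrative.

```python
import math

def prob_zero_events(n_subjects, risk_per_subject):
    """Poisson approximation to the probability of observing
    no attributable events in a cohort of n_subjects."""
    expected = n_subjects * risk_per_subject
    return math.exp(-expected)

# Excess risk of 1 per 1 million, cohort of 1 million vaccinees:
# the expected number of attributable cases is exactly 1.
p = prob_zero_events(1_000_000, 1e-6)
print(p)  # about 0.37: roughly a 1-in-3 chance of seeing no case at all
```

A study that observes zero cases under these conditions therefore cannot distinguish "no causal effect" from "an effect too rare to detect," which is why the committee's categories treat rejection more cautiously than acceptance.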
Despite the committee's attempts at objectivity, the interpretation of scientific evidence always retains at least some subjective elements. Use of such "objective" standards as P values, confidence intervals, and relative risks may convey a false sense that such judgments are entirely objective. However, judgments about potential sources of bias, although based on sound scientific principles, cannot usually be quantitated. This is true even for the scientific "gold standard" in evaluating causal relations, the randomized clinical trial.
For each vaccine-adverse event association under consideration, the committee started from a neutral position, presuming neither the presence nor the absence of a causal relation between the vaccines and the adverse events under consideration. Each category of evidence was then assessed and weighted (as described above) to arrive at an overall judgment as to whether the balance of evidence favored acceptance or rejection of a causal relation between the vaccine and the adverse event. To enhance scientific accountability, the committee's judgment of causality for each vaccine-adverse event association considered is accompanied by an explanation of the evidentiary basis for that judgment.
SUMMARIZING THE EVIDENCE FOR CAUSALITY
The committee attempted to build on the methods and procedures used by the Committee to Review the Adverse Consequences of Pertussis and
Rubella Vaccines (Institute of Medicine, 1991). The pertussis and rubella vaccine committee summarized the evidence bearing on those vaccines using the following five categories: (1) no evidence bearing on a causal relation, (2) evidence insufficient to indicate a causal relation, (3) evidence does not indicate a causal relation, (4) evidence is consistent with a causal relation, and (5) evidence indicates a causal relation. They then assigned each vaccine-adverse event association under their consideration to one of these five categories.
Because some confusion has arisen over the meaning of the category descriptions used by the pertussis and rubella vaccine committee, despite extensive explanation in both footnotes and text, the Vaccine Safety Committee adopted some minor modifications in wording intended to aid interpretation of the present report. To facilitate reading by those familiar with the report of the previous committee, the present committee maintained both the number of categories (five) and the order of those categories but modified the wording in an attempt to clarify their meaning.
The names and descriptions of the categories used in this report are as follows:
1. No evidence bearing on a causal relation.
Putative associations between vaccine and adverse events for which the committee was unable to locate any case reports or epidemiologic studies were placed in this category. Demonstrated biologic plausibility alone was considered insufficient to remove a given vaccine-adverse event association from this category.
2. The evidence is inadequate to accept or reject a causal relation.
One or more (in some instances there were many) case reports or epidemiologic studies were located by the committee, but the evidence for a causal relation neither outweighed nor was outweighed by the evidence against a causal relation. The presence or absence of demonstrated biologic plausibility was considered insufficient to shift this balance in either direction.
3. The evidence favors rejection of a causal relation.
Only evidence from epidemiologic studies was considered as a basis for possible rejection of a causal relation. Such evidence was judged as favoring rejection only when a rigorously performed epidemiologic study (or a meta-analysis of several such studies) of adequate size (i.e., statistical power) did not detect a significant association between the vaccine and the adverse event. The absence of demonstrated biologic plausibility was considered supportive of a decision to reject a causal relation but insufficient on its own to shift the balance of evidence from other sources.
4. The evidence favors acceptance of a causal relation.
The balance of evidence from one or more case reports or epidemiologic
studies provides evidence for a causal relation that outweighs the evidence against such a relation. Demonstrated biologic plausibility was considered supportive of a decision to accept a causal relation but insufficient on its own to shift the balance of evidence from other sources.
5. The evidence establishes a causal relation.
Epidemiologic studies and/or case reports provide unequivocal evidence for a causal relation, and biologic plausibility has been demonstrated.
REFERENCES

Dickersin K, Berlin JA. Meta-analysis: state-of-the-science. Epidemiologic Reviews 1992;14:154-176.
Faich GA. Adverse drug reaction monitoring. New England Journal of Medicine 1986;314:1589-1592.
Hill AB. The environment and disease: association or causation? Proceedings of the Royal Society of Medicine 1965;58:295-300.
Hutchinson TA, Lane DA. Assessing methods for causality assessment of suspected adverse drug reactions. Journal of Clinical Epidemiology 1989;42:5-16.
Institute of Medicine. Adverse Effects of Pertussis and Rubella Vaccines. Washington, DC: National Academy Press; 1991.
Kramer MS. Difficulties in assessing the adverse effects of drugs. British Journal of Clinical Pharmacology 1981;11:105S-110S.
Kramer MS. Assessing causality of adverse drug reactions: global introspection and its limitations. Drug Information Journal 1986;20:433-437.
Kramer MS, Lane DA. Causal propositions in clinical research and practice. Journal of Clinical Epidemiology 1992;45:639-649.
Lane D. A probabilist's view of causality assessment. Drug Information Journal 1984;18:323-330.
Lane DA, Kramer MS, Hutchinson TA, Jones JK, Naranjo C. The causality assessment of adverse drug reactions using a Bayesian approach. Journal of Pharmaceutical Medicine 1987;2:265-268.
Péré JC. Estimation du numérateur en notification spontanée. In: Bégaud B, ed. Analyse d'Incidence en Pharmacovigilance: Application à la Notification Spontanée. Bordeaux, France: ARME-Pharmacovigilance Editions; 1991.
Stolley PD. How to interpret studies of adverse drug reactions. Clinical Pharmacology and Therapeutics 1990;48:337-339.
Susser M. Causal Thinking in the Health Sciences. New York: Oxford University Press; 1973.
Tubert P, Bégaud B, Péré JC, Haramburu F, Lellouch J. Power and weakness of spontaneous reporting: a probabilistic approach. Journal of Clinical Epidemiology 1992;45:283-286.
U.S. Department of Health, Education, and Welfare. Smoking and Health: Report of the Advisory Committee to the Surgeon General. PHS Publication No. 1103. Washington, DC: U.S. Public Health Service, U.S. Department of Health, Education, and Welfare; 1964.
Venulet J, ed. Assessing Causes of Adverse Drug Reactions with Special Reference to Standardized Methods. London: Academic Press; 1982.