Charged with assessing the epidemiologic, clinical, and biological evidence regarding the causal relationship between specific vaccines and specific adverse events, the committee drew upon previous reports by committees of the Institute of Medicine (IOM, 1991, 1994, 2001a,b, 2002a,b, 2003a,b, 2004a,b), other vaccine safety researchers (Halsey, 2002; Loke et al., 2008; WHO, 2001), general epidemiologic principles (Hill, 1965), and other systematic reviews in clinical medicine and public health (Liberati et al., 2009; Owens et al., 2010; Schunemann et al., 2011; Stroup et al., 2000; USPSTF, 2008). The committee adopted, with one exception,1 the wording for the categories of causal conclusions used by the Institute of Medicine (IOM) committees in the past. The categories used previously were considered appropriate and the benefits of consistency were deemed compelling enough to extend the categories to this report.
Two streams of evidence from the peer-reviewed literature support the committee’s causality conclusions: (1) epidemiologic evidence derived from studies of populations (most often based on observational designs but randomized trials when available), and (2) clinical and biological (mechanistic) evidence derived primarily from studies in animals and individual humans or small groups. Some studies provide evidence relevant to both epidemiologic and mechanistic questions. Drawing from both lines of evidence to support causal inference is well established in the literature. When confronted with epidemiologic and mechanistic evidence suggesting—however
1 As described in a subsequent section, previous IOM committees described the strongest evidence as establishing a causal relationship; this committee uses the term convincingly supports.
strongly or however weakly—that a vaccine is associated with an adverse event, one asks, “Does this make sense given what is known and generally accepted about the biological response to the natural infection, to the vaccine, and what is known about the pathophysiology of the adverse health outcome?”
As described in Chapter 1, the committee was tasked to assess the relationship between a specific adverse health outcome and a specific vaccine. A professional medical librarian conducted three waves of comprehensive literature searches of the published, peer-reviewed biomedical literature using MEDLINE (1950–present); EMBASE (1980–present); BIOSIS (1969–2005); Web of Science, consisting of the Science Citation Index (1900–present) and the Social Science Citation Index (1956–present); and search terms specific to each vaccine–adverse event relationship under study. Appendix C contains the search strategies used. The first wave of searches included the earliest date of the database to the date of the first search. Follow-up searches were conducted in August 2010 and late December 2010 to ensure that articles published after the initial search were not missed. On occasion, specialized searches were conducted to supplement the general searches. Also, review of the reference list of an article sometimes revealed studies not captured by the general search. These studies were retrieved.
Titles and abstracts, where available, were reviewed to screen out articles that did not address one of the potential vaccine adverse events to be reviewed or that were not primary research articles. See Figure 2-1. For example, the committee did not assess review articles. The committee restricted its review to those vaccines used in the United States, even if the study was conducted outside of the United States, with a few exceptions that will be discussed in the vaccine-specific chapters that follow. Articles were retrieved and reviewed again for relevance to the committee charge. Articles written in languages other than English were translated using Google Translate or a professional translation service. The committee did not include in its reviews data presented only in abstract form or in otherwise unpublished formats, with one exception described in Chapter 9, “Human Papillomavirus Vaccine.” An individual report from the Vaccine Adverse Event Reporting System was reviewed only if it had been described in a peer-reviewed research study and the committee wanted additional information. Decisions from the Vaccine Injury Compensation Program were not reviewed, because they are not published in the peer-reviewed medical literature. The committee did not review the conclusions contained in earlier IOM reports. The committee reviewed the data and made conclusions independently.
FIGURE 2-1 Epidemiologic and mechanistic evidence reviewed by the committee.
The committee’s bibliographic retrieval was posted on the project website with a request for public comment regarding missing articles.2 The committee received one submission, which was reviewed. The bibliography was separated into two sections. Section I contained those articles on which the committee focused its initial review. Section II contained those citations for articles that did not meet the committee’s criteria (i.e., original research, vaccine used in the United States, adverse event within the committee’s scope, animal or in vitro studies of relevance).
The committee made three assessments for each relationship reviewed. The first assessment applies to the weight of evidence from the epidemiologic literature; the second applies to the weight of evidence from the biological and clinical (mechanistic) literature. The third assessment is the committee’s conclusion about causality. In assessing the weights of evidence, each individual article (or findings within an article if more than one outcome or vaccine was studied) was evaluated for its strengths and weaknesses. The committee then synthesized the body of evidence of each type (epidemiologic or mechanistic) and assigned a “weight of evidence” for each. These weights of evidence are meant to summarize the assessment of the quality and quantity of evidence. The committee then reviewed the two weight-of-evidence assessments in order to make a conclusion about the causal relationship. The committee’s approach to each of these three assessments will be discussed in the following sections.
Experimental studies (trials) are generally considered more rigorous than observational studies; controlled studies are generally considered more rigorous than uncontrolled studies. A brief description of major study designs and methodological considerations can be found in Appendix A. Surveillance studies were reviewed, but the absence of a control group limited their contribution to the weight of epidemiologic evidence; studies that included individual case descriptions were reviewed for their contribution to the evaluation of mechanistic evidence (discussed in subsequent sections). Small clinical studies that were not controlled for vaccine administration were generally reviewed for contributions to the mechanistic weight of evidence.
Evaluation of Individual Studies
Each epidemiologic study was evaluated for its methodological limitations (e.g., flawed measurement of either vaccine administration or adverse event, failure to adequately control confounding variables, incomplete or inadequate follow-up, failure to develop and apply appropriate eligibility criteria) and for the precision of the reported results (e.g., the width of the 95% confidence interval around an effect estimate, which also reflects the statistical power to detect a significantly increased risk of an adverse event). Studies that were deemed to be very seriously flawed did not contribute to the weight of evidence; they are identified in the text for completeness but are not discussed in depth.
It is important to note that a specific study could be well designed and well conducted but also have very serious limitations for the purposes of this committee’s analysis. A specific study could have fewer limitations for some vaccines or some outcomes than for others. Small clinical studies can be well conducted but the number of subjects may be too small to detect most adverse events. Although most efficacy studies include a safety component, the results are often nonspecific (e.g., “no serious adverse events were detected”). Even some larger safety studies failed to detect an adverse event. Studies in which no cases of a specific adverse event were identified are uninformative for this review, because if the vaccinated cohort does not include enough cases to approximate background rates, the study is underpowered to inform an assessment. The upper limit of the 95% confidence interval will always overlap with the background rate unless the vaccine is protective. Some might use that information as a means to approximate an upper limit on risk, but the committee did not see that as its charge (see Chapter 13). Studies such as these were considered to have very serious limitations for the purpose of the committee’s assessment.
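The point about zero-event studies can be illustrated with a short calculation. The sketch below is not part of the committee’s methods; it computes the exact one-sided 95% upper confidence bound on an event rate when no events are observed among n vaccinated subjects (the Clopper-Pearson bound, well approximated by the familiar “rule of three,” 3/n). For realistic study sizes, that bound easily exceeds the background rate of a rare adverse event, which is why such studies cannot rule out an increased risk.

```python
import math

def zero_event_upper_bound(n, alpha=0.05):
    """Exact one-sided upper confidence bound on the event rate when
    0 events are observed among n subjects; solves (1 - p)^n = alpha."""
    return 1.0 - alpha ** (1.0 / n)

# The exact bound is closely approximated by the "rule of three" (3/n).
for n in (1_000, 10_000, 100_000):
    exact = zero_event_upper_bound(n)
    approx = 3.0 / n
    print(f"n={n:>7}: exact={exact:.2e}  rule-of-three={approx:.2e}")
```

For example, observing zero cases among 10,000 vaccinees only bounds the risk below roughly 3 in 10,000, which is far above the background rate of most rare adverse events of concern.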
The committee was rigorous in assessing the strengths and weaknesses of each epidemiologic study. For many of the conditions and adverse events considered by the committee, the expected incidence and prevalence rates in the general unvaccinated population as well as in unvaccinated but potentially susceptible subgroups may be very low. Assembling a valid standard for comparison (e.g., an unvaccinated cohort of similar demographic composition and followed over a similar time period of risk, or a control group free of the adverse event but otherwise sufficiently similar to cases diagnosed with the adverse event) and objectively verifying the timing and type of vaccination and the details surrounding the onset and diagnosis of the adverse event are complex if not prohibitively expensive research endeavors. Although randomized clinical trials aiming to study vaccine efficacy may provide the most valid, controlled circumstances in which to also study vaccine safety, such trials inevitably enroll too few study participants to be able to detect anything but extreme increases in the risks of relatively rare adverse events of potential concern. Some of the studies reviewed, as will be documented in the chapters that follow, are likely the most methodologically sound that can be done given the nature of the exposure and the outcomes, even if they have residual limitations due to the challenges that often attend such research. The summary paragraphs for the epidemiologic studies and, in some circumstances, the causality conclusions convey the committee’s interpretation of the evidence more fully than can be captured in the formal and consistent wording of the conclusions used in this report.
Evaluation of the Body of Studies
The committee reviewed methodological approaches of other systematic review efforts, but it was unable to identify one approach that incorporated all of the committee’s needs and could be adopted for immediate use. Cochrane reviews, for example, focus on randomized controlled trials, a design that is uncommon in vaccine safety research. Other efforts focused on evidence for or against a clinical practice or intervention (Guyatt et al., 2008; USPSTF, 2008).
Consequently, the committee adopted key components of these other approaches to develop a summary classification scheme that incorporates both the quality and quantity of the individual studies and the consistency of the group of studies in terms of direction of effect (i.e., does the vaccine increase risk, decrease risk, or have no effect on risk). A key concept in these classifications is confidence, which refers to the confidence the committee has that the true effect lies close to that of the estimate of the average overall effect for the body of evidence (i.e., collection of reports) reviewed (Schunemann et al., 2011), and integrates committee evaluation of validity, precision, and consistency. Validity refers to the absence of confounding, selection bias, and information or measurement bias (i.e., internal validity), and the generalizability (external validity) of the findings (Rothman et al., 2008b). Precision refers to the width of the confidence interval (e.g., a 95% confidence interval) around an effect estimate, which reflects the sample size of the study as well as the variability of the outcome measurement (Rothman et al., 2008a). The wider the 95% confidence interval, the less statistical power to detect a difference as significant.
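The role of precision can be made concrete with a small calculation. The sketch below is illustrative only (the function name and inputs are not the committee’s tooling); it computes an approximate 95% confidence interval for a relative risk using the standard log-normal approximation, showing that the same point estimate carries a much wider interval when the study is small.

```python
import math

def rr_confint(a, n1, b, n2, z=1.96):
    """Approximate 95% CI for the relative risk (a/n1) / (b/n2),
    using the log-normal approximation for the standard error."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Same point estimate (RR = 1.0), very different precision:
print(rr_confint(5, 1_000, 5, 1_000))      # small study: wide interval
print(rr_confint(50, 10_000, 50, 10_000))  # larger study: narrower interval
```

In the small study the interval spans roughly 0.3 to 3.4, consistent with anything from a two-thirds reduction to a tripling of risk; the larger study with the identical point estimate yields a far narrower interval.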
The four weight-of-evidence assessments for the epidemiologic literature are as follows:
- High: Two or more studies with negligible methodological limitations that are consistent in terms of the direction of the effect and taken together provide high confidence.
- Moderate: One study with negligible methodological limitations, or a collection of studies generally consistent in terms of the direction of the effect, provides moderate confidence.
- Limited: One study or a collection of studies lacking precision or consistency provides limited, or low, confidence.
- Insufficient: No epidemiologic studies of sufficient quality found.
Assessments of high and moderate include a direction of effect, indicating increased risk of the adverse event, decreased risk of the adverse event, or no change (“null”) in the risk of the adverse event. Assessments of limited or insufficient include no direction of effect.
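The four-level scheme can be paraphrased as a decision rule. The sketch below is an illustrative rendering only; the names `EpiStudy` and `epidemiologic_weight` are invented here, and the committee’s actual assessments also weighed precision and overall confidence rather than applying a mechanical rule.

```python
from dataclasses import dataclass

@dataclass
class EpiStudy:
    negligible_limitations: bool  # no serious methodological flaws
    direction: str                # "increased", "decreased", or "null"

def epidemiologic_weight(studies):
    """Return (weight, direction); direction is None for the
    limited and insufficient categories, which carry no direction."""
    if not studies:
        return ("insufficient", None)
    strong = [s for s in studies if s.negligible_limitations]
    consistent = len({s.direction for s in studies}) == 1
    if len(strong) >= 2 and consistent:
        return ("high", strong[0].direction)
    if len(strong) == 1:
        return ("moderate", strong[0].direction)
    if consistent:
        return ("moderate", studies[0].direction)
    return ("limited", None)
```

For example, two well-conducted studies that both find an increased risk would map to a weight of high in the direction of increased risk, while a collection of flawed or inconsistent studies would map to limited with no direction.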
The committee does not consider a single study—regardless of how well it is designed, the size of the estimated effect, or the narrowness of the confidence interval—sufficient to merit a weight of “high” or, in the absence of strong or intermediate mechanistic evidence, sufficient to support a causality conclusion other than “inadequate to accept or reject a causal relationship.” This requirement might seem overly rigorous to some readers. However, the Agency for Healthcare Research and Quality advises the Evidence-based Practice Centers that it has funded to produce evidence reports on important issues in health care to view an evidence base of a single study with caution (Owens et al., 2010). It does so due to the inability to judge consistency of results, an important contributor to the strength of evidence, because one cannot “be certain that a single trial, no matter how large or well designed, presents the definitive picture of any particular clinical benefit or harm for a given treatment” (Owens et al., 2010). It is acknowledged by the committee and others (Owens et al., 2010) that policy makers must often make decisions based on only one study. However, the committee is not recommending policy but rather evaluating the evidence using a transparent and justifiable framework.
The committee assessed the mechanisms of vaccine adverse events by identifying and evaluating clinical and biological evidence. First, the committee looked for evidence in the peer-reviewed literature that a vaccine was or may be a cause of an adverse event in one or more persons (from case reports or clinical studies) in a reasonable time period after the vaccination. Then the committee looked for other information from the clinical and biological (human, animal, or in vitro studies) literature that would provide evidence of a pathophysiological process or mechanism that is reasonably likely to cause the adverse event or to occur in response to specific immunization. Chapter 3 contains a discussion of the major mechanisms the
committee invokes as possible explanations of how a given adverse event can occur after vaccination.
The committee identified many case reports in the literature describing adverse events following vaccination. For the purposes of this report, case report refers to a description of an individual patient; one publication could describe multiple case reports. The cases considered by the committee in weighing evidence of mechanisms were not derived from the large epidemiology studies considered above; there was no “double counting.” The committee evaluated each case report using a well-established set of criteria (“attribution elements”) for case evaluation (Miller et al., 2000). At a minimum, for a case to factor into the weight-of-evidence assessment, it had to include specific mention of the vaccine administered, evidence of clinician-diagnosed health outcome,3 and a specified and reasonable time interval (i.e., temporality or latency) between vaccination and symptoms.4 Case descriptions that did not have the three basic elements described above were not considered in the mechanistic weight-of-evidence determinations. As discussed in the next section, however, these three criteria were necessary but not sufficient to affect the weight of mechanistic evidence. After identifying cases with the three basic elements, the committee looked for evidence in the case descriptions and in other clinical or biological literature of a possible operative mechanism(s) that would support a judgment that the vaccination was related to the adverse event. See Chapter 3 for a description of possible mechanisms identified by the committee.
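The three-element screen can be expressed as a simple filter. This is an illustrative sketch only (the function and field names are invented here); in practice each element required committee judgment—for example, what constitutes a reasonable latency varies by vaccine and by adverse event.

```python
def meets_basic_elements(case):
    """A case report enters the mechanistic weighing only if all three
    attribution elements are documented; meeting them is necessary
    but not sufficient to affect the weight of evidence."""
    required = ("vaccine_specified", "clinician_diagnosis", "reasonable_latency")
    return all(case.get(element, False) for element in required)

# A report documenting the vaccine and diagnosis but with an
# implausible interval (e.g., hours after a live virus vaccine)
# would be screened out:
report = {"vaccine_specified": True,
          "clinician_diagnosis": True,
          "reasonable_latency": False}
print(meets_basic_elements(report))  # False
```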
Rechallenge cases, in which an adverse event occurred after more than one administration of a particular vaccine in the same individual, could influence the weight of evidence. Each rechallenge, however, must meet the same attributes of reasonable latency, documentation of vaccination receipt, and clinician diagnosis of the health outcome. It is possible that one or more of the “challenges” in an individual case report is related to a coincidental exposure; thus, the committee looked for other information, as described below, that would support a role for the vaccine in each challenge. The value for the committee of rechallenge cases is much greater for monophasic conditions (events that typically happen only once,
3 On occasion, the case report author describes clinical test results or observations but does not proffer a diagnosis. In these cases, the committee assigned the case report to the health outcome it deemed appropriate. Some authors of older case reports use a diagnosis appropriate for the time, but by today’s understanding of clinical disease and pathophysiology, the committee offers a different diagnosis, and the case report is described within that committee-directed assessment.
4 What constitutes reasonable latency will vary across vaccines and across adverse events. For example, most adverse reactions from live virus vaccines would not be expected to occur within hours of vaccination; rather, time must elapse for viral replication.
e.g., vasculitis) than for relapsing-remitting conditions, such as multiple sclerosis or rheumatoid arthritis.
Another factor that affected the weight of evidence was information in the clinical workup that eliminated well-accepted alternative explanations for the condition, thus increasing the possibility that the vaccine could be associated with the adverse event. For example, Guillain-Barré syndrome (GBS) is known to be associated with specific infections (e.g., Campylobacter). Case reports of GBS following vaccination weighed more heavily in the committee’s assessment if the authors reported that tests for those common infections were negative, thus eliminating some likely causes for the GBS other than vaccination. Another particularly strong piece of evidence in the case description that affected the weight of evidence is isolation of vaccine strain virus from the patient.
The committee follows the convention of previous IOM committees in considering the effects of the natural infection as one type, albeit minor, of clinical or biological evidence in support of mechanisms.5 Other evidence, described above, provided much stronger evidence in support of the mechanistic assessment.
Evidence from animal studies is also informative if the model of the disease is well established as applicable to humans or if the basic immunology of the vaccine reaction is well understood. In vitro studies can also be informative, but such data must be viewed with skepticism regarding their applicability to the human experience. Specific examples of relevant clinical or biological information are discussed in Chapter 3 generally and in the vaccine-specific Chapters 4 through 11.
Evaluation of the Body of Clinical and Biological (Mechanistic) Evidence
The committee reviewed the approach of previous IOM committees addressing vaccine safety (IOM, 1991, 1994, 2001a,b, 2002a,b, 2003a,b, 2004a,b) in evaluating the body of evidence of biological mechanisms. The committee also searched for other appropriate frameworks for evaluating biological evidence as support for causation analyses. The committee developed four categories for the weight-of-evidence assessment. Each category includes consideration of the clinical information from case reports and consideration of clinical and experimental evidence from other sources.
5 The committee relied on standard textbooks of infectious disease or internal medicine for this evaluation; the committee did not review original research to come to this determination. This is consistent with previous IOM committees tasked with reviewing evidence of causality for vaccine safety. Evidence consisting only of parallels with the natural infections is never sufficient to merit a conclusion other than the evidence is inadequate to accept or reject a causal relationship.
The following are the categories for the mechanistic weight-of-evidence assessments:
- Strong: One or more cases in the literature, for which the committee concludes the vaccine was a contributing cause of the adverse event, based on an overall assessment of attribution in the available cases and clinical, diagnostic, or experimental evidence consistent with relevant biological response to vaccine.
- Intermediate: At least two cases, taken together, for which the committee concludes the vaccine may be a contributing cause of the adverse event, based on an overall assessment of attribution in the available cases and clinical, diagnostic, or experimental evidence consistent with relevant biological response to vaccine. On occasion, the committee determined that at least two cases, taken together, while suggestive, are nonetheless insufficient for the committee to conclude the vaccine may be a contributing cause of the adverse event, based on an overall assessment of attribution in the available cases and clinical, diagnostic, or experimental evidence consistent with relevant biological response to vaccine. This evidence has been identified in the text as “low-intermediate.”
- Weak: Insufficient evidence from cases in the literature for the committee to conclude the vaccine may be a contributing cause of the adverse event, based on an overall assessment of attribution in the available cases and clinical, diagnostic, or experimental evidence consistent with relevant biological response to vaccine.
- Lacking evidence of a biologic mechanism: No clinical, diagnostic, or experimental evidence consistent with relevant biological response to vaccine,6 regardless of the presence of individual cases in the literature.
The committee adopted the categories of causation developed by previous IOM committees. Implicit in these categories is that “the absence of evidence is not evidence of absence.” That is, the committee began its assessment from the position of neutrality; until all evidence was reviewed, it presumed neither causation nor lack of causation. The committee then
6 The committee considered the clinical manifestations of the natural infection against which the vaccine is directed to be sufficient for a weight of evidence of weak, rather than lacking. As will be discussed in a subsequent section, a mechanism weight of evidence of weak alone is never sufficient to support a causality conclusion other than the evidence is inadequate to accept or reject a causal relationship.
moved from that position only when the combination of epidemiologic evidence and mechanistic evidence suggested a more definitive assessment regarding causation, either that vaccines might or might not pose an increased risk for an adverse event.
The following are the categories of causation used by the committee:
- Evidence convincingly supports7 a causal relationship—This applies to relationships in which the causal link is convincing, as with the oral polio vaccine and vaccine-associated paralytic polio.
- Evidence favors acceptance of a causal relationship—Evidence is strong and generally suggestive, although not firm enough to be described as convincing or established.
- Evidence is inadequate to accept or reject a causal relationship— The evidence is not reasonably convincing either in support of or against causality; evidence that is sparse, conflicting, of weak quality, or merely suggestive—whether toward or away from causality—falls into this category.8 Where there is no evidence meeting the standards described above, the committee also uses this causal conclusion.
- Evidence favors rejection of a causal relationship—The evidence is strong and generally convincing, and suggests there is no causal relationship.
The category of “establishes or convincingly supports no causal relationship” is not used because it is virtually impossible to prove the absence of a relationship with the same certainty that is possible in establishing the presence of one. Even in the presence of a convincing protective effect of a vaccine in epidemiologic studies, the possibility cannot be ruled out that the vaccine causes the reaction in a subset of individuals. Thus, the framework for this and previous IOM reports on vaccine safety is asymmetrical. The committee began not by assuming the causal relationship does not exist, but by requiring evidence to shift away from the neutral position that the evidence is “inadequate to accept or reject” a causal relationship.
The committee then established a general framework by which the two streams of evidence (epidemiologic and mechanistic) influence the final causality conclusion. It is important to note that mechanistic evidence can only support causation. Epidemiologic evidence, by contrast, can support (“favors acceptance of”) a causal association or can support the absence of (“favors rejection of”) a causal association in the general population and in various subgroups that can be identified and investigated, unless or
7 Previous IOM committees used the term establishes instead of convincingly supports.
until supportive mechanistic evidence is discovered or a rare, susceptible subgroup can be identified and investigated. This framework needed to accommodate the reality that for any given causality conclusion one or both of the types of evidence could be lacking, the two types of evidence could conflict, or neither type of evidence might definitively influence the causality conclusion.
The framework also had to accommodate known limitations of both types of evidence. Epidemiologic analyses are usually unable to detect an increased or decreased risk that is small, unless the study population is very large or the difference between the groups (e.g., vaccinated vs. unvaccinated) at risk is very high (e.g., smoking increases the risk of lung cancer by at least 10-fold). Epidemiologic analyses also cannot identify with certainty which individual in a population at risk will develop a given condition. These studies also can fail to detect risks that affect a small subset of the population. Mechanistic evidence, particularly that emerging from case reports, occasionally can provide compelling evidence of an association between exposure to a vaccine and an adverse reaction in the individual being studied, but it provides no meaningful information about the degree of risk to the population or even to other individuals who have the same predisposing characteristics. The occurrence rate of the adverse event or condition in the general population cannot be estimated from case reports,9 nor can one be certain that the risk is homogeneous across potentially vulnerable subgroups within the general population (e.g., the developing fetus and infants under 24 months, immunologically compromised individuals, or individuals with a rare genetic predisposition).
The framework does not accommodate any information regarding the benefit of the vaccine to either population or individual health. The focus of this particular committee is only on the question of what particular vaccines can cause particular adverse effects.
In general, the framework shown in Figure 2-2 illustrates how causality conclusions can be based primarily on epidemiologic evidence, primarily on mechanistic evidence, or on a combination of the two, and that on occasion expert judgment, such as that provided by the complement of expertise represented on the committee, is needed to weigh uncertain or competing evidence.
Evidence Convincingly Supports a Causal Relationship
The framework allows for a causality conclusion of “convincingly supports” based on an epidemiologic weight-of-evidence assessment of high in
FIGURE 2-2 Strength of evidence that determined the causality conclusions.
the direction of increased risk (which requires at least two well-conducted epidemiologic studies).
The framework also allows strong mechanistic evidence, which requires at least one case report in which compelling evidence exists that the vaccine indeed did cause the adverse event, to carry sufficient weight for the committee to conclude the evidence convincingly supports a causal relationship. The committee considered laboratory-confirmed, vaccine-strain virus isolation compelling evidence to attribute the disease to the vaccine-strain virus and not other etiologies. The committee recognizes that vaccine-strain virus can transiently appear in otherwise sterile spaces after vaccination; however, the committee determined the accurate detection of vaccine-strain virus in symptomatic individuals to be strong evidence that the vaccine caused the symptoms. This conclusion can be reached even if the epidemiologic evidence is rated “high” in the direction of no increased risk or even decreased risk. The simplest explanation in this circumstance is that the adverse effect is real but also very rare. Another way of stating this is that if the vaccine did cause the adverse effect in one person, then it can cause the adverse effect in someone else (IOM, 1994). It might seem that the committee “overvalued” case reports in allowing one case to provide convincing evidence of causation; however, it is a rare case report that is so convincing. For most of the specific causality conclusions in this category, more than one compelling case report existed.
The isolated report of one convincing case provides no information about the risk of the adverse effect in the total population of vaccinated individuals compared with unvaccinated individuals. If the one convincing case has an underlying condition that may increase susceptibility to the adverse effect, it might have no relevance to the otherwise not-susceptible population.
As will be described in subsequent chapters of the report, the committee concluded the evidence convincingly supports 14 specific vaccine–adverse event relationships. In all but one of these relationships, the conclusion was based on strong mechanistic evidence with the epidemiologic evidence rated as either limited confidence or insufficient. When moderate or strong epidemiologic evidence is not available to support the committee’s conclusions favoring causality, it is difficult, if not impossible, to quantify the risk of the adverse event in either the entire population or the susceptible subgroup. See Chapter 13 for a discussion of this issue.
Evidence Favors Acceptance of a Causal Relationship
A conclusion of “favors acceptance of a causal relationship” must be supported by either epidemiologic evidence of “moderate” certainty of an increased risk or by mechanistic evidence of intermediate weight. The
framework requires more than one epidemiologic study or more than one case report (with supporting but not conclusive mechanistic information) in support of this causality conclusion. Mechanistic evidence of low-intermediate weight is not sufficient, without concurring epidemiologic evidence, to support a conclusion favoring acceptance of a causal relationship. As will be described in subsequent chapters of the report, the committee concluded the evidence favors acceptance of four specific vaccine–adverse event relationships.
Evidence Favors Rejection of a Causal Relationship
The framework allows the committee to “favor rejection” of a causal relationship only in the face of epidemiologic evidence rated as high or moderate in the direction of no effect (the null) or of decreased risk and the absence of strong or intermediate mechanistic evidence in support of a causal relationship. As described above, the committee requires more than one epidemiologic study to merit a conclusion that the evidence favors rejection of a causal relationship.
As will be described in subsequent chapters of the report, the committee concluded the evidence favors rejection of five specific vaccine–adverse event relationships.
Evidence Is Inadequate to Accept or Reject a Causal Relationship
The committee identified two main pathways by which it concludes that the evidence is “inadequate to accept or reject” a causal relationship. The most common pathway to this conclusion occurs when the epidemiologic evidence is of limited certainty or insufficient and the mechanistic evidence is weak or lacking. Another pathway occurs when the epidemiologic evidence is of moderate certainty of no effect but the mechanistic evidence is intermediate in support of an association. The committee analyzed these sets of apparently contradictory evidence and ultimately depended upon its expert judgment in deciding whether a conclusion favoring acceptance based on the intermediate mechanistic data was warranted or whether the conclusion remained “inadequate to accept or reject” a causal relationship. The committee required more than one epidemiologic study to reach any conclusion other than that the evidence is inadequate to accept or reject a causal relationship.
As will be described in subsequent chapters of the report, the committee concluded the evidence was inadequate to accept or reject the vast majority of specific vaccine–adverse event relationships. See Chapter 13 for a discussion of this conclusion.
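Taken together, the four causality categories described above amount to a decision rule over the two streams of evidence. The following sketch is purely illustrative and is not part of the committee's report: the rating names follow the text above, but the committee's actual conclusions rested on expert deliberation that no lookup table can capture, and the moderate-null/intermediate-mechanistic case in particular was resolved by judgment rather than by rule.

```python
def causality_conclusion(epi, epi_direction, mech):
    """Illustrative sketch (not the committee's actual procedure) of the
    causality framework described in the text.

    epi: epidemiologic certainty -- "high", "moderate", "limited", "insufficient"
    epi_direction: direction of the epidemiologic finding -- "increased",
        "null", or "decreased" risk
    mech: weight of mechanistic evidence -- "strong", "intermediate",
        "low-intermediate", "weak", or "lacking"
    """
    # Strong mechanistic evidence (even one sufficiently compelling case
    # report) carries enough weight on its own, regardless of the
    # epidemiologic rating.
    if mech == "strong":
        return "convincingly supports"
    # High epidemiologic certainty of increased risk also suffices
    # (assumption: this is the one relationship in the text not based on
    # strong mechanistic evidence).
    if epi == "high" and epi_direction == "increased":
        return "convincingly supports"
    # Favors acceptance: moderate epidemiologic certainty of increased risk,
    # or intermediate-weight mechanistic evidence. Note: when intermediate
    # mechanistic evidence conflicts with moderate-certainty null
    # epidemiologic evidence, the committee resolved the conflict by expert
    # judgment; this sketch simply favors acceptance in that case.
    if (epi == "moderate" and epi_direction == "increased") or mech == "intermediate":
        return "favors acceptance"
    # Favors rejection: high or moderate epidemiologic certainty of no effect
    # or decreased risk, absent strong or intermediate mechanistic evidence.
    if epi in ("high", "moderate") and epi_direction in ("null", "decreased"):
        return "favors rejection"
    # Everything else is inadequate to accept or reject.
    return "inadequate to accept or reject"
```

Applied to the report's tallies, most vaccine-adverse event pairs fall through to the final branch, which is consistent with the committee's finding that the evidence was inadequate for the vast majority of relationships.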
As described in Chapter 3, the committee recognized that the risk of an adverse effect of a vaccine can be influenced by host factors, some known and others not yet understood. Where the committee thought the evidence—whether from epidemiologic analyses or from the clinical studies—regarding risks to subpopulations was informative, evidence-based, and biologically sound, it made separate conclusions. For example, the risk of invasive disease following varicella vaccine, a live virus vaccine, is likely much higher in immunocompromised persons than in persons who are immunocompetent. Other subpopulation analyses in the report include age and sex for some specific adverse events.
In its consideration of several adverse events, the committee concluded that the mechanism of injury was likely unrelated to the specific antigenic or other components of the vaccine; the exposure of concern is not the injected vaccine itself but rather the act of injecting it. These adverse events include syncope, complex regional pain syndrome, and deltoid bursitis, which are covered in Chapter 12.
References
Guyatt, G. H., A. D. Oxman, G. E. Vist, R. Kunz, Y. Falck-Ytter, P. Alonso-Coello, H. J. Schunemann, and the GRADE Working Group. 2008. GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. British Medical Journal 336(7650):924-926.
Halsey, N. A. 2002. The science of evaluation of adverse events associated with vaccination. Seminars in Pediatric Infectious Diseases 13(3):205-214.
Hill, A. B. 1965. The environment and disease: Association or causation? Proceedings of the Royal Society of Medicine 58(5):295-300.
IOM (Institute of Medicine). 1991. Adverse effects of pertussis and rubella vaccines: A report of the committee to review the adverse consequences of pertussis and rubella vaccines. Washington, DC: National Academy Press.
IOM. 1994. Adverse events associated with childhood vaccines: Evidence bearing on causality. Washington, DC: National Academy Press.
IOM. 2001a. Immunization safety review: Measles-mumps-rubella vaccine and autism. Washington, DC: National Academy Press.
IOM. 2001b. Immunization safety review: Thimerosal-containing vaccines and neurodevelopmental disorders. Washington, DC: National Academy Press.
IOM. 2002a. Immunization safety review: Hepatitis B vaccine and demyelinating neurological disorders. Washington, DC: The National Academies Press.
IOM. 2002b. Immunization safety review: Multiple immunizations and immune dysfunction. Washington, DC: National Academy Press.
IOM. 2003a. Immunization safety review: SV40 contamination of polio vaccine and cancer. Washington, DC: The National Academies Press.
IOM. 2003b. Immunization safety review: Vaccinations and sudden unexpected death in infancy. Washington, DC: The National Academies Press.
IOM. 2004a. Immunization safety review: Influenza vaccines and neurological complications. Washington, DC: The National Academies Press.
IOM. 2004b. Immunization safety review: Vaccines and autism. Washington, DC: The National Academies Press.
Liberati, A., D. G. Altman, J. Tetzlaff, C. Mulrow, P. C. Gotzsche, J. P. A. Ioannidis, M. Clarke, P. J. Devereaux, J. Kleijnen, and D. Moher. 2009. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Medicine 6(7).
Loke, Y., D. Price, and A. Herxheimer. 2008. Chapter 14: Adverse effects. In Cochrane handbook for systematic reviews of interventions, edited by J. P. T. Higgins and S. Green. The Cochrane Collaboration.
Miller, F. W., E. V. Hess, D. J. Clauw, P. A. Hertzman, T. Pincus, R. M. Silver, M. D. Mayes, J. Varga, T. A. Medsger, Jr., and L. A. Love. 2000. Approaches for identifying and defining environmentally associated rheumatic disorders. Arthritis & Rheumatism 43(2):243-249.
Owens, D. K., K. N. Lohr, D. Atkins, J. R. Treadwell, J. T. Reston, E. B. Bass, S. Chang, and M. Helfand. 2010. AHRQ series paper 5: Grading the strength of a body of evidence when comparing medical interventions—Agency for Healthcare Research and Quality and the effective health-care program. Journal of Clinical Epidemiology 63(5):513-523.
Rothman, K. J., S. Greenland, and T. L. Lash. 2008a. Precision and statistics in epidemiologic studies. In Modern epidemiology, 3rd ed., Philadelphia: Lippincott Williams & Wilkins. Pp. 148-167.
Rothman, K. J., S. Greenland, and T. L. Lash. 2008b. Validity in epidemiologic studies. In Modern epidemiology, 3rd ed., Philadelphia: Lippincott Williams & Wilkins. Pp. 128-147.
Schunemann, H. J., S. Hill, G. H. Guyatt, E. A. Akl, and F. Ahmed. 2011. The GRADE approach and Bradford Hill’s criteria for causation. Journal of Epidemiology and Community Health 65(5):392-395.
Stroup, D. F., J. A. Berlin, S. C. Morton, I. Olkin, G. D. Williamson, D. Rennie, D. Moher, B. J. Becker, T. A. Sipe, S. B. Thacker, and the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) Group. 2000. Meta-analysis of observational studies in epidemiology: A proposal for reporting. Journal of the American Medical Association 283(15):2008-2012.
USPSTF (U.S. Preventive Services Task Force). 2008. U.S. Preventive Services Task Force procedure manual. AHRQ Publication No. 08-05118-EF.
WHO (World Health Organization). 2001. Causality assessment of adverse events following immunization. Weekly Epidemiological Record 76(12):85-89.