Factors in Emergence
Six factors in the emergence of infectious diseases were elucidated in a 1992 Institute of Medicine (IOM) report, Emerging Infections: Microbial Threats to Health in the United States. A decade later, our understanding of the factors in emergence has been substantially influenced by a broader acceptance of the global nature of microbial threats. As a result, this report expands the original list, identifying thirteen factors in emergence (see Box 3-1). These thirteen factors are reviewed in turn in this chapter. The chapter ends with a case example—influenza—illustrating the interaction among the factors in the emergence of an infectious disease.
Future scientific discoveries and an increased understanding of the complexity of the emergence of infectious diseases will no doubt add to the list of factors identified in this report. In this light, the committee developed a model for conceptualizing how the factors in emergence converge to impact on the human–microbe interaction and result in infectious disease (see Figure 3-1). This model organizes the various factors into four broad domains: (1) genetic and biological factors; (2) physical environmental factors; (3) ecological factors; and (4) social, political, and economic factors. As we examine the individual factors, envisioning each as belonging to one or more of these four domains may simplify the understanding of the complex dynamics of emergence.
MICROBIAL ADAPTATION AND CHANGE
Microbes live on us and within us and inhabit virtually every available ecological niche of the external environment, and they will expand into new niches that arise as we continue to alter the environment and extend our contact with the microbial world. Most of the microbes that live on or inside humans or exist in the environment do not cause disease in humans (see Box 3-2). These microbes may appear to be unimportant. However, they are often crucial to the human ecosystem. Moreover, microbes that have heretofore not affected humans directly may still represent a potent threat. Microbes that are pathogenic to the animals and plants on which we depend for survival, for example, are an indirect threat to human health. Other microbes live in apparent harmony with animals but can be pathogenic for humans, as evidenced by the number of emerging zoonotic diseases that are transmitted to humans from animals. Microbes are also adept at adaptation and change under selective pressures for survival and replication, including the pressure exerted by the human use of antimicrobials. Microbial adaptation and change continually challenge our efforts at disease control and prevention. For example, the influenza virus is renowned for its ability to continually evolve, so that new strains emerge each year, giving rise to annual epidemics and necessitating the ongoing development of new influenza vaccine strains.

BOX 3-1 Factors in Emergence
When the “germ theory” of disease was born in the late nineteenth century, Robert Koch and his contemporaries were convinced that diseases were caused by invariant, monomorphic microbial species. Early microbiologists dismissed the variants seen in petri dishes as mere contaminants—foreign entities that had floated into the culture medium from the atmosphere. Now, of course, the inherently variable nature of microbial species is well known. Microbes have enormous evolutionary potential and are continually undergoing genetic changes that allow them to bypass the human immune system, infect human cells, and spread disease. They may also traverse an alternative pathway, one of symbiotic accommodation to their hosts (see Box 3-2).
Numerous microbes have developed mechanisms to exchange or incorporate new genetic material into their genomes; even unrelated species can exchange virtually any stretch of DNA or RNA. Genomic sequencing of pathogens made possible by technological advances shows that horizontal movement, or lateral transfer, of DNA is common and may be responsible for the emergence of many new microbial species. Lateral transfer can involve the exchange of virulence genes (genes that confer pathogenicity) and/or other genes required for adapting to a particular host or environment. Indeed, the exchange of virulence genes is so pervasive among bacterial pathogens that species-specific chromosome regions containing virulence genes have inspired their own name—“pathogenicity islands” (Ochman and Moran, 2001; Groisman and Ochman, 1996; Hacker et al., 1997; Hacker and Kaper, 2000). Some pathogenicity islands encompass very large genetic regions, as many as 100 kilobases long. The transfer of just a single pathogenicity island in E. coli is sufficient to convert a benign strain into a pathogenic one (McDaniel and Kaper, 1997).
Pathogens have devised other means of adapting rapidly to new circumstances in their environment. RNA viruses, and retroviruses in particular, can mutate at very high rates, allowing them to adapt rapidly to changes in their external environment, including the presence of therapeutic drugs. Because microbes reproduce so quickly—as often as every 10 minutes—even very rare mutations build up rapidly in viral and bacterial populations. Many pathogenic bacteria have short runs of identical bases (“repeats”) in their DNA; very minor changes in these repeats occur commonly and result in changes in gene expression. Moreover, many bacteria and viruses can sense changes in the external environment and, depending on what they sense, can virtually instantly change the regulation of certain sets of genes, thus allowing them to adapt to the new environment.
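The arithmetic behind this point is worth making concrete. The sketch below uses the 10-minute doubling time mentioned in the text together with assumed values for mutation rate, genome size, and population size; these figures are illustrative approximations, not numbers taken from this report.

```python
# Illustrative back-of-the-envelope calculation: how "rare" mutations
# accumulate in fast-growing bacterial populations.
# All parameter values below are assumptions for illustration only.

doubling_time_min = 10                              # one division every 10 minutes (as in the text)
generations_per_day = 24 * 60 // doubling_time_min  # 144 generations in a single day

mutation_rate = 1e-9       # assumed mutations per base pair per division
genome_size = 5_000_000    # assumed genome of ~5 million base pairs

# A single overnight culture easily reaches ~1e9 cells, which required
# roughly 1e9 cell divisions to produce.
divisions = 10**9

# Expected number of cells carrying a mutation at one *specific* site:
mutants_at_given_site = divisions * mutation_rate       # ~1 per site

# Expected mutations per division across the whole genome:
mutations_per_division = mutation_rate * genome_size    # ~0.005

print(f"{generations_per_day} generations per day")
print(f"~{mutants_at_given_site:.1f} mutant(s) at any given site")
print(f"~{mutations_per_division:.3f} mutations per division")
```

Under these assumptions, essentially every possible point mutation is expected to be present somewhere in a single flask of bacteria after one day of growth, which is why "very rare" mutations pose such a persistent problem for disease control.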
The more we learn about microbial genetics, structure, and function, the more we marvel at the sophistication of the survival strategies of microbes. Their mechanisms of survival are many and varied, and specific pathogens are generally tailored to flourish in particular niches. Many viruses and bacteria use our own cellular receptors to attach to and enter human cells; others utilize various human proteins for their own essential needs. Microbes use several means to defend themselves from being disabled or destroyed by the human immune system, including the rapid evolution of new antigenic variants, the masking of crucial surface antigens, inhibition of the immune system, and escape from the immune system by “hiding” inside human cells. Some microbes coat their surfaces with mimics of human tissue to prevent recognition by their human host as “nonself.” As a result, the human immune response is not activated, and the microbe is ignored and left to survive and reproduce at will. Some microbes have evolved mechanisms to downregulate the human innate immune system, which would otherwise serve as the human body’s first line of defense. Others stimulate an immune response that is injurious to the human host; for example, a sustained anti-self response may be triggered by viral or bacterial antigens that are molecular mimics of human antigens leading to chronic inflammation. Other strategies for survival include the ability to cause latent infections that can reactivate years later at a time when the host’s immune responses are blunted. Clearly, pathogens are extraordinarily adept (and successful) in carrying out their game of survival of the fittest.
The development of preventive vaccines and antimicrobial therapies is among the greatest achievements of modern medicine. Unfortunately, the tremendous evolutionary potential of microbes makes them adept at developing resistance to even the most potent therapies and at complicating attempts to create effective vaccines. In some cases, the antimicrobial drug target on the microbe mutates in such a way that binding of the antiviral or antibiotic no longer inhibits the virus or bacterium. For example, one of the major obstacles to the development of an effective vaccine against HIV is the very rapid antigenic change that the viral surface proteins undergo regularly. In fact, their mutation rate is so high that almost every retroviral particle is genetically different from every other particle by at least one nucleotide substitution. In other cases, bacteria have evolved enzymes that modify or destroy the antibiotic before it can reach its target inside the bacterium, or they “pump it back out” before it can do any damage to the microbe. Many genes for resistance can be transferred readily among different bacterial species; resistance can easily spread through multiple populations of species that occupy the same host environment.
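The claim that nearly every retroviral particle differs from every other by at least one substitution can be sanity-checked with a short calculation. The error rate, genome length, and daily virion production used below are commonly cited approximations, employed here purely as illustrative assumptions rather than figures from this report.

```python
import math

# Sanity check: does a high per-nucleotide error rate imply that nearly
# every new retroviral particle carries at least one substitution?
# All parameter values are assumed approximations for illustration.

error_rate = 3e-5        # assumed reverse-transcriptase errors per nucleotide
genome_length = 9_700    # approximate HIV-1 genome length in nucleotides

# Expected substitutions introduced per replication cycle:
expected_mutations = error_rate * genome_length   # ~0.3

# Modeling mutations as a Poisson process, the probability that a newly
# produced virion carries at least one substitution is:
p_at_least_one = 1 - math.exp(-expected_mutations)

# With an assumed ~1e10 new virions produced per day in an untreated
# infection, the daily supply of mutant virions is enormous:
virions_per_day = 1e10
mutant_virions = virions_per_day * p_at_least_one

print(f"expected substitutions per genome: {expected_mutations:.2f}")
print(f"P(at least one substitution): {p_at_least_one:.2f}")
print(f"mutant virions per day: {mutant_virions:.2e}")
```

Even under these rough assumptions, billions of mutant genomes arise every day, which is consistent with the text's point that viral populations rapidly explore the space of possible drug-resistance and immune-escape variants.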
Acquisition of genes for resistance is advantageous for the microbe only when it is under attack by therapeutics. The adapted microbe may be slightly less fit in the absence of antimicrobial therapy, and thus the organism may slowly revert to a sensitive state when therapy is withdrawn. Thus, the frequency of antimicrobial use is key; less use results in less resistance, while more use leads to more resistance. Unfortunately, antibiotics are frequently used when they are not truly needed (see the discussion of inappropriate use of antimicrobials in Chapter 4).

BOX 3-2 The Microbiome

Medical science is imbued with the Manichaean view of the microbe–human host relationship: “we good; they evil.” Indeed, the ascription of microbes to pathology has pervaded the teaching of biomedical science for over a century and consequently has left us with certain blind spots in our biological perspective on the pathogen–human host relationship. Obviously, microbes do have a knack for making us ill, killing us, and even recycling our remains to the geosphere. Nevertheless, in the long run, microbes have a shared interest in our survival. After all, if a pathogen does too much damage to its host, it will kill off not only the host but itself as well. “Domesticating” the host is a much better long-term strategy, and thus natural selection tends to favor less virulent pathogens that do not cause quite so much harm. Most successful parasites likely travel a middle path with regard to the amount of damage they do to their host; they need to be aggressive enough to enter the body surfaces and toxic enough to counter their host’s defenses, but once established they also serve themselves (and their hosts) well by moderating their virulence.

A better understanding of the host–pathogen relationship might be achieved by thinking of the host as a superorganism—or “microbiome”—with the host’s genome and those of all of the host’s indigenous microbes yoked into a chimera of sorts (Lederberg, 2000; Hooper and Gordon, 2001). The microbiome refers to the small biotic community that defines each of us as individuals, as well as the collective set of genomes that inhabit our skin, gut lumen, mucosal surfaces, and other body spaces. For the most part, the microbiome is a poorly catalogued ensemble, of which the majority of entries have yet to be cultivated and characterized, let alone understood with regard to their pathogenicity. Indeed, from a microbiome perspective, the mitochondria—which provide the oxidative metabolism machinery for every eukaryotic cell, from yeast to protozoa to multicellular organisms—can be regarded as the most successful of all human microbes. Mitochondria derive from an ancient lineage within the proteobacteria (Gray et al., 1999) and illustrate just how far the genomic collaboration between a host and a member of its indigenous microbial community can evolve.

Until recently, infectious disease research has given sparse attention to how microbes have evolved adaptations for sustaining themselves as chronic inhabitants or “domesticators” of their human hosts. Large, complex multicellular organisms such as ourselves simply evolve too slowly, for the most part, to develop resistance that keeps pace with the rapid evolution of microbes. A year in microbial history matches all of primate, perhaps mammalian, evolution. Not only do microbes evolve much more quickly than humans, but their enormous evolutionary potential is further enhanced by their sheer numbers as well as their many ingenious mechanisms of gene exchange (e.g., conjugation and plasmid interchange).

Microbes can go beyond inhabiting our body space to completely set up genetic shop. Retroviruses, for example, are unable to replicate until they have become integrated into the host DNA; thereafter, their replication involves simply the fairly standard transcription of host chromosomal DNA into RNA copies. Indeed, it appears that some of the so-called HERVs (human endogenous retroviruses), with which the human genome is so heavily populated, have evolved so far as to participate in the physiology of our placenta and in our gustatory behavior. We have no idea what pathways HERVs have used to reach their target, nor can we predict the long-term consequences of their further evolution. But experience has shown we have every reason to expect that our most notorious retrovirus, HIV, will find a way to lodge itself in the germ line as well. The human genome encodes some 223 proteins with significant homology to bacterial proteins, suggesting that they were acquired from bacterial sources via horizontal transfer (Lander et al., 2001). These apparent insertions from microbial sources serve as further evidence of a historic host–microbe collaboration among the various components of the microbiome.

Our focus on “conquering” infectious disease may deflect from more ambitious, yet perhaps more pragmatic, aims; little consideration has been given to the notion that perhaps we could learn to live with a pathogen instead of being so insistent on getting rid of it. Natural history abounds with infections that have, over the course of evolutionary history, achieved a mutually tolerable state of equilibrium with their host. Genetic variation of the influenza A virus, for example, has remained stable in its wild aquatic bird reservoir, and infected birds often show no sign of disease. Although the recognition of AIDS in 1981 has inspired the most intense biomedical research program in history, the incidence of disease is only increasing. Would this trend reverse if, instead of focusing exclusively on ways to conquer HIV, we were to give equal weight to developing therapeutic measures that nurtured the immune system that HIV erodes?

Indeed, consider that many of the microbes that reside in our gut—such as Lactobacillus spp.—actually serve a protective, not a pathogenic, role. In fact, their protective advantage is currently being exploited in so-called “probiotic” therapy—the administration of live, benign microbes that benefit the host and aid in the treatment of disease (Hooper and Gordon, 2001; IOM, 2002b). Although scientists have known about the health benefits of lactic acid bacteria in particular for more than a century, the broader concept of probiotic therapy is a recent one (IOM, 2002b; Fuller, 1989). In addition to Lactobacillus, other probiotic preparations have contained Bifidobacterium, Streptococcus spp., and E. coli. Thus far, probiotic therapy has proven most beneficial in treating active ulcerative colitis, as well as complications following surgical intervention for that condition (Gionchetti et al., 2000; Rembacken et al., 1999). Probiotic lactobacillus may even prove useful in strengthening immune responses in persons infected with HIV. Normal bacterial flora are altered in HIV infection, as evidenced by the frequency of bacteremia associated with altered gastrointestinal function, diarrhea, and malabsorption; and failure-to-thrive, which is linked to altered gastrointestinal function, is relatively common in congenital HIV infection. Recent studies have shown that L. plantarum 299v, a specially developed probiotic lactobacillus, has a generally beneficial effect on the immune response in HIV-infected children (Cunningham-Rundles and Nesin, 2000).

The concept of probiotic therapeutics extends even beyond simply introducing a living microbe. Recent studies have demonstrated that genetically engineered gut commensal bacteria can be used as drug delivery platforms to treat infectious disease (Steidler et al., 2000; Beninati et al., 2000; Shaw et al., 2000). Other possible uses of probiotic therapy include using microbial products that target specific disease processes, such as weakened epithelial barriers or reduced activity of the mucosal immune system (Hooper and Gordon, 2001); using microbes that bear relevant cross-reacting epitopes instead of vaccines; and using them as optional food additives (Lederberg, 2000).

The rewards of a microbiomal perspective on infectious disease could be great. Not only would we achieve new insights with regard to how we and the microbes around and within us adapt to each other, and thus how pathogens emerge, but we would likely develop new approaches to preventing and treating infectious diseases.
HUMAN SUSCEPTIBILITY TO INFECTION
Many properties of the human body—from its genetic makeup to its innate biological defenses—affect whether a microbe will cause disease. The body has evolved an abundance of physical, cellular, and molecular barriers that protect it from microbial infection, beginning with the skin. Even minor breaks in the skin increase susceptibility to infection. The normal bacterial flora of the gut and inner mucosal surfaces serve a protective role; not only do they occupy receptors to which pathogenic bacteria would otherwise attach themselves, but they produce antimicrobial substances that inhibit the growth of their pathogenic competitors. When these normal bacterial flora are reduced, as happens when a broad-spectrum antibiotic is used to treat an infection or when acidity in the stomach is reduced through various medications, the body is more susceptible to pathogens. Another protective defense mechanism is seen with the enzyme lactoferrin, which is plentiful in breast milk and on mucosal surfaces. Lactoferrin serves a protective role by sequestering iron, thereby making the mineral unavailable to invading pathogens that need it to reproduce. Susceptibility to infection can result when these normal defense mechanisms are altered or when host immunity is otherwise compromised as a result of impaired immune function; genetic polymorphisms; and other factors, such as aging and poor nutrition.
Impaired Host Immunity
The innate or nonspecific immune response is the body’s initial inflammatory reaction to any kind of injury or microbial invasion. Innate immune defenses are believed to have first evolved in insects and other lower organisms that lack the ability to produce antibodies and thus depend entirely on this primitive but effective system for their protection against infection. In humans, more than a dozen different so-called TLRs (Toll-like receptors)1 have been found on cells that make up the mucosa and skin (including macrophages and dendritic cells), which is where pathogens first encounter their human host. When foreign molecules, such as bacterial DNA
or flagella, bind to TLRs, they trigger a complex set of responses that leads to the production of inflammatory cytokines and local antimicrobial peptides. When the inflammation is inadequate to deal with an injury or microbial invader, the so-called adaptive or acquired specific immune response kicks in. This mechanism encompasses both cell-mediated and humoral responses. The former involves the production of antigen-specific T cells, which, depending on their surface protein makeup, serve a variety of functions, such as influencing the activities of other immune cells; the latter involves the production of antigen-specific B cells, which produce humoral antibodies.
New knowledge about the innate and specific immune responses is being used to develop potential therapies for infectious disease control. For example, the key to a good innate immune system defense is a balanced, regulated production of inflammatory cytokines. Otherwise, microbial infection can provoke such a massive release of inflammatory cytokines as to seriously damage and even kill their host. Researchers are exploring ways to interrupt the TLR pathways in order to either downregulate overly active inflammatory responses or upregulate weak responses. These could be useful strategies in the treatment of infectious diseases for which no otherwise effective specific therapies exist.
J.B.S. Haldane (1949) was among the first to suggest that pathogens serve as potent natural selective forces that have helped shape the evolution of human defenses against infection (Hill, 1998; Weatherall, 1996a; Lederberg, 1999). In particular, Haldane predicted that people who live in historically malaria-laden areas may have evolved genetic polymorphisms—in particular, heterozygous hemoglobinopathies—that increase their ability to survive infection with malaria.
Hemoglobinopathies are a group of diseases caused by or associated with the presence of abnormal hemoglobin in the blood; they are among the most common single-gene disorders in humans. The hemoglobin gene has several allelic variants, including hemoglobin S, which, if homozygous, causes sickle cell disease. Hemoglobin S heterozygotes have the sickle cell trait and are virtually asymptomatic; moreover, they exhibit 80 to 95 percent protection against P. falciparum infection (Weatherall, 1996b). Hemoglobin S homozygosity exacts a cost in adverse health effects (many persons of African descent have sickle cell disease), but the protective power of this particular allelic variant clearly keeps it under strong selective pressure. Indeed, in areas free of malaria, sickle cell trait and sickle cell disease are very rare and generally found only in lineages that have migrated from malaria-laden areas. Another structural hemoglobin variant, hemoglobin E, occurs at high frequencies throughout the Indian subcontinent, Burma, and Southeast Asia; in some areas, carriers make up as much as 50 percent of the population. Like hemoglobin S, hemoglobin E protects against P. falciparum (Flint et al., 1993; Weatherall, 1996b). More common than the structural variants hemoglobin S and E are a group of anemias known as thalassemias, which result from defective production of either the alpha or the beta chain of the hemoglobin polypeptide. Again, heterozygotes are usually asymptomatic, and protection against malaria appears to have been the major selective force responsible for the more than 120 different beta thalassemia mutations, as well as the many different alpha thalassemia mutations (Weatherall, 1996b). The biological mechanism underlying the protective power of the heterozygous hemoglobinopathies is still unclear.
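The persistence of such harmful alleles in malaria-endemic regions can be made quantitative with the standard one-locus model of heterozygote advantage, the mechanism behind a balanced polymorphism. The fitness costs below are illustrative assumptions chosen for the sketch, not estimates from this report.

```python
# Balanced polymorphism sketch: why a harmful allele such as hemoglobin S
# persists where malaria is endemic. Fitness values are illustrative
# assumptions, not empirical estimates.

s = 0.10   # assumed fitness cost of AA homozygotes (susceptible to malaria)
t = 0.80   # assumed fitness cost of SS homozygotes (sickle cell disease)
# Heterozygotes (AS) are taken as the fittest genotype, with fitness 1.

# Standard one-locus heterozygote-advantage result: selection maintains
# the deleterious allele at an equilibrium frequency of s / (s + t).
q_eq = s / (s + t)

# At Hardy-Weinberg proportions, the expected carrier (trait) frequency:
carrier_freq = 2 * (1 - q_eq) * q_eq

print(f"equilibrium S-allele frequency: {q_eq:.3f}")
print(f"expected carrier (trait) frequency: {carrier_freq:.3f}")
```

With these assumed costs, the allele equilibrates near 11 percent and roughly a fifth of the population carries the trait, which is broadly in line with the high carrier frequencies observed in historically malaria-laden areas.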
The presence of malaria in a population does more than modify hemoglobin. Several other malaria-related balanced polymorphisms, many of which involve the red blood cell structure and metabolism, have likewise evolved in response to the tremendous selective force exerted by the disease. Glucose-6-phosphate dehydrogenase deficiency (an X-linked disorder), for example, serves a protective role in heterozygous female carriers and hemizygous males (Ruwende et al., 1995). Heterozygous carriers of a mutation in band 3 of the red blood cell membrane, which in its homozygous state causes the potentially lethal Melanesian ovalocytosis, may also have a protective advantage. Finally, different blood group antigens may have evolved in response to past exposure to malaria (Miller, 1994). Racial differences in the distribution of certain red blood cell receptors for malaria parasites have been observed, possibly as a result of evolutionary genetic selection (IOM, 1991; Barragan et al., 2000; Hamblin et al., 2002). The Duffy antigen (a name taken from the hemophilia patient in whom it was first identified) is a parasite receptor on red blood cells that is recognized by certain forms of malaria, including P. vivax and P. knowlesi. Many persons of African descent lack the Duffy antigen and therefore cannot be infected by either of these malaria parasites.
Other balanced polymorphisms have apparently evolved in response to malaria. For example, the tumor necrosis factor alpha gene and the HLA-DR class II genes both have polymorphic systems that have been linked to malaria (Hill et al., 1991). The remarkable human genetic diversity that has evolved in response to malaria, and that scientists have only just begun to uncover, suggests that other less common or less studied infections have probably generated extraordinary diversity as well. Fortunately, knowledge gleaned from the human genome project and its technological offshoots is leading to a dramatic explosion in new understandings of polymorphisms in a variety of genes that alter the response to infection. One of the most recently reported links between infection and natural selection is a deletion
in the host-cell chemokine receptor CCR5, which reduces the risk of acquiring HIV infection after exposure (Sullivan et al., 2001). As another example, certain major histocompatibility complex class I molecules have been shown to reduce the risk of dying from HIV infection (Kaslow et al., 1996; Gao et al., 2001). Likewise, several different mutations or polymorphic systems influence the susceptibility to or likelihood of death from meningococcal infection (Read et al., 2000; Nadel et al., 1996; Westendorp et al., 1997). Numerous other examples exist of genetic associations with diseases, including cancers and chronic diseases, and the list is growing rapidly (Hill, 2001; Topcu et al., 2002; Chen et al., 2002a; Calhoun et al., 2002; Helminen et al., 2001; Pain et al., 2001).
Host susceptibility to infection is aggravated by malnutrition. A strong and consistent relationship has been found between childhood malnutrition and increased risk of death from diarrhea, acute respiratory infection, and possibly malaria (Rice et al., 2000). Conversely, infectious processes, especially those associated with diarrhea, drive malnutrition in young children (Mata, 1992; Mata et al., 1977), so that diarrheal illness is both a cause and an effect of malnutrition (Guerrant et al., 1992; Wierzba et al., 2001; Lima et al., 1992). Clinically, malnutrition is characterized by inadequate intake of protein, energy, and micronutrients and by frequent infections or disease (WHO, 2002d). Malnutrition has been associated with 50 percent of all deaths among children worldwide (Rice et al., 2000). In 2000, an estimated 150 million of the world’s children under age 5 were malnourished on the basis of low weight for age (WHO, 2002d). More than two-thirds (70 percent) of these children were in Asia, especially southern Asia. The number of malnourished children living in Africa—26 percent of the world’s malnourished children—has risen as a result of population growth in the region, as well as natural disasters, wars, civil disturbances, and population displacement (WHO, 2000b).
Malnutrition diminishes host resistance to infection through a number of mechanisms. Virtually all bodily processes and physical barriers that keep infectious agents from invading the host are affected. These include the skin, mucous membranes, gastric acidity, absorptive capacity, intestinal flora, cell-mediated immunity, phagocyte function, and cytokine production (Chandra, 1997; Levander, 1997). Although multiple-nutrient deficiencies are much more common than single-nutrient deficiencies, lack of even one vitamin or mineral (e.g., zinc; selenium; iron; copper; vitamins A, C, E, B-6, and folic acid) can impair the immune response. For example, vitamin A deficiency significantly increases the risk of severe illness and death from common childhood infections, such as diarrheal disease and
measles, by diminishing the host’s resistance to infection. For children deficient in vitamin A, periodic administration of high-dose vitamin A has reduced mortality by 23 percent overall and by up to 50 percent for those who suffer from acute measles (WHO, 2002d). The relative risk of measles mortality in children younger than 2 years of age has been shown to be significantly reduced when the children’s diets are supplemented with vitamin A for only 2 days (Barclay et al., 1987; West, 2000). Consequently, WHO recommends treating children who have measles, prolonged diarrhea, wasting malnutrition, or other acute infections with vitamin A (IOM, 2002c; WHO, 1997). Furthermore, studies have suggested an association between maternal vitamin A deficiency and an increased risk of mother-to-child (vertical) HIV transmission (Semba et al., 1994; Greenberg et al., 1997). It is not yet clear, however, what role vitamin A supplementation has in the management of HIV infection.
CLIMATE AND WEATHER
Many elements of the physical environment influence the host directly; determine the survival of agents that exist outside the host; and mediate the transmission of agents between hosts, including the movement from animal to human hosts. Viewed in this light, the physical environment takes on considerable importance in determining the epidemiology of infectious diseases (Wilson, 2001). The interactions among vectors, animal reservoirs, microbes, and humans present many opportunities for changes in the physical environment to influence transmission dynamics. Many of the factors that affect the abundance, survival, activity, or feeding behavior of vectors also impact on the reproduction, survival, and abundance of animal reservoirs. For example, elevated rainfall often creates new breeding habitats for mosquitoes, leading to an increase in mosquito population density. Increased levels of precipitation can also lead to decreased marsh salinity, which in turn may increase the survival rates of certain toxic aquatic bacteria. Likewise, these same factors can affect human behavior or exposure to infection by impacting outdoor activities, housing, the quality and quantity of food, and agricultural or other uses of the environment.
Among the numerous elements of the physical environment that influence the emergence of infectious diseases, climate and weather2 have received a great deal of attention in recent years. Many infectious diseases either are strongly influenced by short-term weather conditions or display a seasonality suggesting that they are influenced by longer-term climatic
changes (Patz et al., 2000). Climate can directly impact disease transmission through its effects on the replication and movement (and perhaps evolution) of disease microbes and vectors; climate can also operate indirectly through its impacts on ecology or human behavior (NRC, 2001). To be transported over relatively large distances from one host to another, many microbes must be borne passively through moving air or water. Some pathogenic microbes, such as those causing coccidioidomycosis (see Box 3-3), are picked up from the soil and carried by dry, dusty winds (Schneider et al., 1997); some opportunistic human pathogens can apparently survive transoceanic transport in dust clouds (Griffin et al., 2001); and others, such as Cryptosporidium, the agent of cryptosporidiosis, may be washed by heavy rains into reservoirs of drinking water (Alterholt et al., 1998) (see Box 3-4). The 1993 hantavirus outbreak in the southwestern United States, associated with an El Niño event, is an example of how climatic factors have contributed to the emergence of infectious disease (see the later discussion of hantavirus). Likewise, higher water temperatures in the Pacific Northwest resulting from a 1997 El Niño event provided unusual conditions favorable to the growth of Vibrio parahaemolyticus, which led to a shellfish-associated outbreak of disease (CDC, 1998a).
The fact that local and regional climatic factors clearly influence disease emergence has led scientists to suggest that projected global climate changes will have an impact on infectious disease emergence. Climatologists project upward trends in global temperatures and estimate that by 2100, temperatures will have increased by 1.4–5.8°C (Intergovernmental Panel on Climate Change, 2001a). Climate change has already been detected, and impacts from such change are sure to follow. Arthropod-borne diseases, such as malaria, yellow fever, and dengue, are expected to be affected more readily than other types of diseases by climate change since arthropod transmission patterns are highly sensitive to changes in ambient temperature. Waterborne diseases, such as cryptosporidiosis, may also be affected (Patz et al., 2001). However, to fully assess the effects of climate change, both confounding factors (e.g., drug resistance, crop yields, population migration) and the adaptive capacity of a population must be considered. Many argue that at present, other factors—including human population density and the capacity of the public health system to prevent and control infectious disease outbreaks—affect disease risk more than does global climatic change. Indirectly, if global climatic change were to result in reduced food availability, thereby producing undernourished human populations more vulnerable to disease, its impact on infectious disease could be dramatic (Intergovernmental Panel on Climate Change, 2001b). Likewise, if social disruption, economic decline, and displaced populations were to emerge as a result of reduced food availability due to global climate change,
BOX 3-3 Fungal Threats
In 1994, a major earthquake shook Ventura County, California, generating massive landslides in the Santa Susana Mountains just north of Simi Valley. These landslides resulted in the formation of large dust clouds that were dispersed into nearby valleys by northeast winds. Within the dust clouds were tiny arthrospores of the fungus Coccidioides immitis. As the dust clouds settled over Ventura County, residents of the area inhaled the contaminated air; the result was 203 cases of coccidioidomycosis (also known as Valley fever and San Joaquin Valley fever), including three fatalities associated with this event (Schneider et al., 1997).
The dimorphic fungus Coccidioides immitis, which causes coccidioidomycosis, grows in topsoil layers in the southwestern United States, Mexico, and parts of Central and South America. It is transmitted through the air following the disturbance of contaminated soil, as in the case of dust storms, earthquakes, and excavations. It is not transmitted from person to person. Although 60 percent of infected individuals are asymptomatic, the remainder can develop a range of infirmities, from influenza-like illness, to pneumonia, to severe pulmonary and extrapulmonary disease in immunocompromised individuals (CDC, 1994a). Simple environmental measures, such as planting grass and paving roads, can lower the risk of airborne dispersion of C. immitis. As of 2002, however, no practical method for eliminating the organism had been developed. National surveillance for coccidioidomycosis began through the National Electronic Telecommunications System for Surveillance (NETSS) in 1995; the disease is reportable in California, New Mexico, and Arizona.
In addition to dissemination by dust clouds, fungal infections can be a health threat for travelers to regions of the world where these infections are endemic (Panackal, 2002). Travelers have developed fungal infections as a result of a wide range of recreational and work activities. For example, an outbreak of coccidioidomycosis occurred in a church group from Washington State upon returning from Tecate, Mexico, where these members of the congregation had assisted with construction projects at an orphanage. Following the outbreak, Coccidioides immitis was isolated from soil samples taken from Tecate (Cairns et al., 2000). College students visiting Acapulco, Mexico, were diagnosed with histoplasmosis, an infection caused by the soil-inhabiting fungus Histoplasma capsulatum, after staying at a beach resort hotel (CDC, 2001d). Similarly, Italian spelunkers returning from Mato Grosso, Peru, displayed signs and symptoms consistent with histoplasmosis (Nasta et al., 1997). Other potential mycotic disease threats include blastomycosis, which is endemic to parts of the south-central, southeastern, and midwestern United States, as well as Central and South America and parts of Africa; cryptococcosis, the fungal agent of which has been isolated from soil worldwide, usually in association with bird droppings; aspergillosis; candidiasis; and sporotrichosis (CDC, 2001e).
BOX 3-4 An Outbreak of Cryptosporidiosis
Cryptosporidiosis, a waterborne intestinal infection caused by Cryptosporidium spp., produces potentially life-threatening disease in those who are immunocompromised and mild to chronic diarrhea in others (Fayer and Ungar, 1986). In 1993, an estimated 403,000 cryptosporidiosis infections occurred among residents of and visitors to Milwaukee, Wisconsin (MacKenzie et al., 1994). Cryptosporidium oocysts in untreated water from Lake Michigan had apparently been inadequately removed by the coagulation and filtration process in a portion of the Milwaukee water treatment plant. The source of the oocysts leading to the outbreak remains speculative. Possible sources include cattle along two rivers that flow into the Milwaukee harbor, slaughterhouses, and human sewage. Various vertebrates (e.g., cows and wild deer) are naturally infected by Cryptosporidium spp. (Navin and Juranek, 1984; Simpson, 1992; Tzipori et al., 1981). Perhaps considerable rainfall, combined with a high concentration of animal runoff near the municipal water supply, triggered this transmission event. Genotypic and experimental infection data may suggest a human rather than bovine source (Peng et al., 1997). In the 2 years following this contamination of the water supply, it was estimated that 54 deaths (85 percent among people with AIDS) may have resulted from the 1993 outbreak (Hoxie et al., 1997). In addition to contaminated drinking water, outbreaks of cryptosporidiosis in the United States and abroad have been linked to chlorinated and unchlorinated recreational water facilities, such as public swimming pools, water parks, lakes, and rivers (Carpenter et al., 1999).
the emergence and spread of infectious disease would likely be substantially impacted. A recent National Research Council (NRC) report, Under the Weather: Climate, Ecosystems, and Infectious Disease, addresses the impact of climate and weather change in further detail (NRC, 2001).
The abundance and distribution of plants and animals can, conversely, impact on components of the physical environment. Forest growth, for example, usually reduces evapotranspiration; cropping often increases local relative humidity; and the development of large urban areas generally leads to an accumulation of atmospheric particulates and warmer air temperatures. Even very minor ecological changes, such as implementing a new farming technique, can confront pathogens with new environments and significantly alter the transmission patterns of infectious diseases. Of course, the pathogens must have sufficient genetic variation to adapt to such ecological changes and new environments. But most pathogenic evolutionary changes that result in a potentially new disease still require an ecological
cofactor for the disease to actually take root (Stephens et al., 1998). In other words, regardless of its genetic prowess, the pathogen still must be able to reach its animal (or human) host or vector.
Given today’s rapid pace of economic development and enormous scale of ecological changes, understanding how environmental factors are impacting on the emergence of infectious diseases has assumed an added urgency. To the pressing issues of environmental conservation, natural resource utilization, population growth, and economic development can be added the need to understand the interplay of these processes with the emergence of infectious diseases. Such environmental and ecological factors are playing an increasingly important role in disease emergence. In general, changes in the environment tend to have the greatest influence on the transmission of microbial agents that are waterborne, airborne, foodborne, or vector-borne, or that have an animal reservoir.
Pathogens transmitted by mosquitoes and their arthropod allies sicken millions of people each year, cause inestimable morbidity in humans and animals around the globe, and remain major barriers to social and economic development in much of the tropical world. Of the ten diseases targeted by WHO for special control programs, seven have arthropod vectors (WHO, 2003a). Many of these diseases—for example, dengue, yellow fever, and malaria—which had been controlled to a substantial degree, are now resurgent in many formerly endemic areas. Malaria continues to afflict much of the tropical world and causes an estimated 1.5 million to 2 million deaths per year. More than 2.5 billion people are at risk for dengue virus infection; 100 million cases of dengue are estimated to occur annually, and the incidence of dengue hemorrhagic fever is increasing rapidly throughout the tropics. Yellow fever virus has recently caused major epidemics in Africa and South America (Gubler, 2001; Monath, 2001), and sylvatic reservoirs in these areas provide an ongoing threat for its reintroduction into Aedes aegypti–infested metropolitan areas throughout the world. Ae. aegypti is also the principal vector of the dengue viruses. Vector-borne diseases continue to emerge in new areas and/or to resurge throughout the world, even in areas where they were previously controlled. Many newly emerged pathogens and diseases, including Sin Nombre and Andes viruses (and a plethora of other newly discovered hantaviruses), Guanarito virus, Lyme disease, and ehrlichiosis, have rodent hosts and/or arthropod vectors (Mills et al., 1999; Gubler, 1998; Gratz, 1999). Others, such as the Seoul, dengue, Japanese encephalitis, West Nile, and Rift Valley fever (RVF) viruses (see Box 3-5), have demonstrated their ability to emerge in new or
BOX 3-5 Rift Valley Fever
Rift Valley fever (RVF) provides an excellent example of how ecological conditions determine pathogen transmission. In Saudi Arabia, 453 individuals with suspected hemorrhagic fever required hospitalization from August to October 2000 (WHO, 2000c). The case-fatality rate was 19 percent; the median age of case-patients was 47 years, with an age range of 1 to 95 years. In Yemen, 1,087 similarly suspected case-patients were identified from August to November 2000; 121 of them died (CDC, 2000c). The mean age of suspected cases was 32.2 years, with an age range of 1 month to 95 years. Symptoms included low-grade fever, abdominal pain, vomiting, diarrhea, and jaundice with liver and kidney dysfunction, often progressing to death. Three out of four case-patients reported being exposed to sick animals, handling an abortus, or slaughtering animals in the week before the onset of illness. Using diagnostics including antigen and antibody detection, polymerase chain reaction, virus isolation, and immunohistochemistry, the CDC confirmed the diagnosis of RVF. Satellite images and aerial surveys revealed numerous areas throughout the coastal plain and adjacent mountains that would be conducive to transmission of the RVF virus. Entomologic studies revealed large numbers of two species of mosquitoes—Culex tritaeniorhynchus and Aedes caspius—in the flood irrigation farming areas where most of the human cases were reported. The mechanism of virus trafficking is not clear; however, it is thought that animal relocation from Africa may have resulted in introduction of the virus into Saudi Arabia. It is now believed that the RVF virus may be able to establish itself almost anywhere in the world, given the availability of permissive vectors and animal reservoirs.
RVF virus was first recognized and isolated as the agent of a zoonotic disease in Kenya in 1930. The disease is now widespread throughout much of the African continent. RVF virus is transmitted mainly by floodwater Aedes spp., which feed primarily on animals. Mosquitoes may be infected transovarially, but domestic ungulates (cattle, sheep, goats, etc.) amplify transmission and become sufficiently viremic to infect other “bridge” mosquitoes (e.g., Culex spp.), which can then infect humans (Wilson, 1994). Virus transmission is linked to periodic heavy rainfalls that fill shallow depressions called dambos, where mosquitoes, such as Ae. mcintoshi, lay their eggs. This vector has been associated with vertical transmission of the virus to progeny, thereby providing a mechanism for the virus to survive adverse climatic conditions. When the dambos are flooded, infected mosquitoes can emerge to initiate the transmission cycle. The local abundance of ungulates, their movement while searching for forage, and their proximity to humans are important in the epidemiology of the disease. Thus, complex ecological factors impact on where, when, and with what intensity RVF emerges (Linthicum et al., 1999). Outbreaks outside of sub-Saharan Africa occurred in Egypt in 1977–1978 and again in 1993; the identification of RVF in Saudi Arabia and Yemen in 2000 (Ahmad, 2000) was the first confirmation of its occurrence beyond the African continent.
previously endemic regions, thereby causing significant morbidity and mortality.
Arthropod-borne parasitic diseases, such as malaria, filariasis, onchocerciasis, trypanosomiasis, and leishmaniasis, remain major human threats. An estimated 120 million people suffer from lymphatic filariasis, and approximately 18 million people are afflicted with onchocerciasis, of whom about 340,000 are blind and an equal number visually impaired. More than 10 million people are afflicted with leishmaniasis, and 50,000 to 100,000 individuals die of visceral leishmaniasis each year in India alone. In Latin America, an estimated 20 million people have Chagas disease. In Africa, approximately 45 million people are at risk for African trypanosomiasis, which has virtually precluded domestic livestock production in a geographic region of Africa larger than the United States and now is tragically resurgent in humans in areas in East Africa.
Ecological and environmental conditions are key determinants of the transmission and persistence of vector-borne pathogens. Ecological conditions can increase the risk of infection by altering human exposure to vectors or changing their distribution, abundance, longevity, activity, and habitat associations, and thereby increasing or decreasing the overall potential for the vector population to transmit the pathogen to humans. Mosquito abundance and transmission of pathogens are typically associated with rainy seasons, since juvenile mosquitoes develop in aquatic habitats. Dengue transmission, for example, typically occurs during the rainy season, although the prior month’s temperature has been shown to affect transmission during the rainy season (Focks et al., 1995). Populations of floodwater- and container-breeding mosquitoes (e.g., Ae. aegypti) are dramatically affected by environmental conditions; their abundance is directly linked to rainfall (or snowmelt for temperate-zone mosquitoes), which induces the eggs to hatch. In contrast, transmission of Saint Louis encephalitis virus may be greatest in relatively dry periods after the rainy season (Shaman et al., 2002). The preferred breeding site of the principal vector, Culex quinquefasciatus, is stagnating pools of water with concentrated nutrient materials, and thus mosquito abundance increases during drier conditions, which favor the formation of such breeding sites. The emergence and reemergence of vector-borne pathogens are linked to changes in temperature (which determines how long it takes the parasite to develop), wind speed, and relative humidity (all of which affect vector feeding frequency); the amount and diversity of vegetation; and the presence of alternative hosts (which can alter the rate of blood feeding on humans). In particular, as previously discussed, global warming could theoretically result in dramatic alterations in the incidence and distribution of vector-borne diseases.
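The vector-population determinants enumerated above (vector density, biting frequency, daily survival, and the temperature-dependent time the pathogen needs to develop inside the vector) are conventionally combined in the vectorial capacity formula of the vector-biology literature. The formula and its symbols are standard in that literature but do not appear in the original text of this report; the following display is purely illustrative:

```latex
% Vectorial capacity: the expected number of infective bites that will
% eventually arise from all the vectors that bite a single infectious
% host on one day (Garrett-Jones formulation).
C = \frac{m\,a^{2}\,p^{n}}{-\ln p}
% m : vector density per human host
% a : daily rate of biting on humans (squared: one bite to acquire the
%     pathogen, one to transmit it)
% p : probability that a vector survives one day
% n : extrinsic incubation period in days (shorter at warmer temperatures)
```

The formula makes the qualitative argument quantitative: because vector survival enters as p raised to the power n, even modest shifts in temperature (through n) or in humidity and feeding behavior (through p and a) can change transmission potential disproportionately.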
The movement of goods and people can also support the movement of vectors, allowing them to become established in new areas (see the later
discussion of international travel and commerce). This is certainly not a recent development. Probably the most notable public health example of such events is the dissemination of Ae. aegypti throughout the world (Tabachnick et al., 1985). After domestication and adaptation to humans and human environments, Ae. aegypti apparently spread to coastal areas of Africa, and was then transported throughout the world in sailing ships. Presumably Ae. aegypti, as well as yellow fever virus, was introduced into the New World on slave ships. Ae. albopictus, the Asian tiger mosquito, likely entered the United States via shipping (Moore, 1999). Aedes spp. eggs can easily be transported in such objects as waterlogged tires to new areas and hatch upon exposure to water. The large shipping containers used to transport so many products provide excellent environments for the transport of mosquito eggs. This presumably occurred recently as well with Ae. japonicus (Fonseca et al., 2001). Adult mosquitoes can be spread much more quickly throughout the world in the cabins or other areas of airplanes (Lounibos, 2002). Traveling mosquitoes can also rapidly introduce new genes, leading to the emergence of epidemiologically important vector phenotypes in new areas. Jet transport has been postulated as a mechanism for the rapid dissemination of an esterase mutation conferring resistance to organophosphate pesticides on Culex pipiens populations throughout the world (Raymond et al., 1998).
Traveling viremic humans can easily disseminate dengue virus to new areas via jet travel, and with the establishment of the mosquito Ae. aegypti in tropical and subtropical areas, can readily introduce new and perhaps virulent virus genotypes into susceptible populations. These same areas are also at risk for introduction of yellow fever virus, which is likewise transmitted by Ae. aegypti. Between 1970 and 2000, seven cases of yellow fever in unvaccinated travelers from the United States and Europe were reported (Monath and Cetron, 2002). The introduction of yellow fever virus into new areas, such as Asia, appears all but inevitable, with potentially catastrophic results.
Reservoir Abundance and Distribution
It has been estimated that 75 percent of all emerging infections are zoonotic, i.e., they can be transmitted from animals to humans (Taylor et al., 2001a). In some cases, the mechanisms of transmission of a pathogen from animals to humans have been identified, but the transmission of the same pathogen between various animal reservoirs remains a mystery (see Box 3-6). Pathogens transmitted to humans directly from rodents (e.g., rodent-borne viral diseases) or maintained in nature by rodents and transmitted to humans by arthropods (e.g., Lyme disease, ehrlichiosis, plague)
Box 3-6 Nipah Virus
Between September 1998 and June 1999, an outbreak of a Japanese encephalitis-like illness occurred among people from several pig-farming villages in Malaysia; 265 cases of febrile encephalitis were reported to the Malaysian Ministry of Health, including over 100 deaths (Chua et al., 2000; Goh et al., 2000; WHO, 2001e). The majority of illnesses were characterized by 3–14 days of fever and headache, followed by drowsiness and disorientation that progressed to coma within 24–48 hours. In other cases, however, the infection was mild or inapparent. Most of the affected individuals were adult men who had histories of close contact with swine. During March 1999, nine similar cases of encephalitis and two cases of respiratory illness resembling those seen in the Malaysia outbreak were reported in Singapore; all eleven patients had handled swine imported from Malaysia. Concurrent with the human cases, many pigs in the affected regions were also becoming ill and dying.
Tissue culture isolation from human and swine central nervous system specimens resulted in the identification of a previously unknown infectious agent, later named Nipah virus after the village of Sungei Nipah, where the outbreak is believed to have originated. Infected fruit bats that frequent the fruit trees near the pig farms are thought to have introduced the virus. These fruit bats are distributed across northern, eastern, and southeastern areas of Australia, Indonesia, Malaysia, the Philippines, and some of the Pacific Islands. Infected bats appear to be asymptomatic reservoirs; it is unknown how the virus is transmitted from the bats to pigs. More than 900,000 pigs were culled in response to the outbreak (Uppal, 2000).
Transmission of the Nipah virus to humans is primarily through direct contact with infected pigs or contaminated swine tissue; no evidence has been found for any person-to-person transmission. Although pigs are the only source of human infection identified thus far, they may not be the only one. For example, dogs infected with the Nipah virus have also shown a distemper-like illness, although no epidemiological link has been found between their infection and human disease; likewise, horses have shown serological evidence of infection, which again does not appear to be linked to human disease. The apparent ability of Nipah virus to infect a wide range of hosts and the fact that it causes a fatal and untreatable disease in humans have made this emerging infection an important public health concern.
make up a significant proportion of emerging and resurging diseases (Mills and Childs, 1998). Epidemics of plague, tularemia, relapsing fever, and typhus have all occurred in recent years. Exacerbating the situation is the potential for many of these agents to be weaponized and used intentionally for harm. Rodent-borne viral diseases have been unusually refractory to control or eradication programs and continue to emerge as significant pathogens of human and animal populations. Many newly emerged viruses (e.g., Sin Nombre virus and other hantaviruses, and Guanarito and other arenaviruses) have rodents as primary hosts. The high mortality rates associated with these rodent-borne hemorrhagic fevers and hantavirus pulmonary syndrome generate great concern in the medical, scientific, and public health communities—and among the general public. Rodent-borne diseases will undoubtedly continue to emerge and increase in medical significance in many areas of the world.
Ecological and environmental conditions determine the epidemic potential of pathogens transmitted by animal reservoirs. Transmission of arenaviruses—such as Junin virus (found in the corn mouse, Calomys musculinus) in Argentina, Machupo virus (in Calomys callosus) in Bolivia, and Lassa virus (from multimammate rats [Mastomys spp.]) in Africa—has strong environmental determinants. Transmission of each of these pathogens occurs via contact with rodent urine, feces, or tissues; the stability of the pathogens is influenced by humidity and sunlight. A strong link exists between the density of rodent reservoirs and arenaviral diseases in humans. For example, a longitudinal study of C. musculinus populations in an area in which Argentine hemorrhagic fever was endemic demonstrated a dramatic increase in the density of rodents immediately preceding an outbreak of human disease (Mills et al., 1992). During an outbreak of Bolivian hemorrhagic fever in San Joaquin, nearly 3,000 rodents (C. callosus) (about 10 per household) were removed during a 3-week period, apparently contributing to the rapid decline in new cases (Mercado, 1975). Changes in resources and predators that affect the abundance of rodents, combined with patterns of agricultural production and land use, appear to be the major determinants of risk for many arenaviral diseases.
The emergence of Sin Nombre virus and other hantaviral agents provides a textbook example of the effect of ecological forces on rodent distribution and abundance (Nichol et al., 1993). In 1993, an outbreak of acute respiratory disease occurred in the southwestern United States, with the initial cases occurring predominantly among Native Americans. The case fatality rate was approximately 60 percent, causing widespread anxiety and enormous media interest in this newly emerged disease. Sin Nombre virus (genus Hantavirus, family Bunyaviridae) was identified as the etiologic agent of this disease, designated hantavirus pulmonary syndrome (Nichol et al., 1993; Elliott et al., 1994). This finding was unexpected; the only other hantaviruses known in the United States at that time were Prospect Hill virus, which was not known to cause human illness, and Seoul virus, which had been associated with mild renal illnesses in humans in the eastern portion of the country (Glass et al., 1994). Thus, no a priori reason existed to associate the acute respiratory disease outbreak in 1993 with hantavirus infection.
Hantaviruses are found worldwide and are major causes of morbidity and mortality in Asia and Europe (Schmaljohn and Hjelle, 1997). In Eurasia, Hantaan virus infections have been associated with illnesses causing significant mortality following acute, systemic disorders characterized by fever, hemorrhagic manifestations, and renal failure. These illnesses have usually been clinically diagnosed as hemorrhagic fever with renal syndrome (HFRS) or Korean hemorrhagic fever. Several other viruses, including Dobrava-Belgrade virus, Puumala virus, and Seoul virus, cause similar diseases. Renal involvement, rather than respiratory symptoms, is the hallmark of these diseases.
Hantaviruses are transmitted directly between rodents and to humans via excreta (dried saliva, urine, feces). Human-to-human transmission of Sin Nombre virus has not been reported, although there is evidence from Argentina for direct human-to-human transmission of Andes virus, which is very closely related to Sin Nombre virus (Padula et al., 1998). The rodent reservoir of Sin Nombre virus has been identified as the deer mouse, Peromyscus maniculatus, one of the most commonly occurring and widely distributed mammals in North America (Childs et al., 1994). Hence, Sin Nombre virus shares a similar widespread distribution, but prevalence rates of the virus in this rodent reservoir can differ temporally and spatially (Mills et al., 1999).
Identification of a hantavirus as the etiologic agent of the hantavirus pulmonary syndrome epidemic prompted increased surveillance for these agents in the Western Hemisphere that has revealed an array of heretofore unrecognized hantaviruses. Cases of hantavirus pulmonary syndrome have now been documented in 31 states, Canada, and Central and South America, and serologic evidence has demonstrated the presence of hantaviral infections in Mexican rodents. More cases of the syndrome are now recognized to occur in South than in North America (Calisher et al., 2002). Of the 39 hantaviruses now recognized (CDC, 2002v), 25 have been identified since 1994, 11 occur exclusively in the United States, and many new ones have been identified in Central and South America (see Figure 3-2). Each hantavirus is associated with a primary rodent reservoir host, suggesting that more hantaviruses will be discovered as new rodent hosts are assayed for these pathogens (Monroe et al., 1999).
Data from the National Science Foundation’s Long-Term Ecological Research Site at Sevilleta in Central New Mexico reveal that P. maniculatus densities increased dramatically beginning in the early 1990s and were highest in 1993 (Yates et al., 2002a). These conditions may have resulted from an El Niño Southern Oscillation event in previous years, which had caused ample rainfall, warm winters, dramatically increased plant productivity, and abundant forage for P. maniculatus. In an area of southwestern Colorado near hantavirus pulmonary syndrome cases, P. maniculatus abundance was estimated to be as high as 50 per hectare, and Sin Nombre virus antibody prevalence rates exceeded 50 percent (Childs et al., 1994). These areas have subsequently been monitored continuously in long-term longitudinal studies of hantaviruses in rodent reservoirs in the southwestern United States as part of a landmark effort by the CDC to understand the environmental and epidemiologic determinants of hantavirus emergence and to develop predictive models for risk assessment (Boone et al., 1998; Glass et al., 2000; Hjelle and Glass, 2000). Rodent population densities and seroprevalence rates have not been as high in Sin Nombre virus–endemic areas since the 1993 outbreak. However, an additional El Niño Southern Oscillation event in 1997 resulted in an increased number of human hantavirus pulmonary syndrome cases and a growth in rodent populations; a more refined model for Sin Nombre virus emergence was subsequently developed (Yates et al., 2002a).
ECONOMIC DEVELOPMENT AND LAND USE
The physical environment is constantly being modified by human activities. Most economic development activities, including the consumption of natural resources, deforestation, and dam building, have some intended or unintended impact on the environment, or both. In the present context, it is important to note that a growing number of emerging infectious diseases arise from increased human contact with animal reservoirs as a result
of changing land use patterns. A recent example of this phenomenon is Venezuelan hemorrhagic fever, a new disease that was identified in 1989 and emerged following the transformation of forest to agricultural land, which provided a highly favorable environment for the probable reservoir host, the cane mouse Zygodontomys brevicauda (IOM, 2002d). Other examples include increases in malaria following the clearing of land for rubber plantations in Malaysia; increases in schistosomiasis, malaria, and other infectious diseases following the Volta River project in Africa; increases in vector-borne diseases after the construction of new transportation routes in Brazil; and the emergence of Lyme disease in the United States after the reforestation of abandoned farmlands in the northeast (Mayer, 2000). Even the emergence of HIV is believed to have been due to increased contact with nonhuman primates infected with the related simian immunodeficiency viruses (SIVs); exposure to infected blood during the hunting and field dressing of animals and the preparation of primate meat for consumption may have led to human infection. Indeed, compelling evidence indicates that SIV counterparts of HIV, specifically SIVcpz from chimpanzees and SIVsm from sooty mangabeys, have been introduced into the human population on multiple occasions, generating HIV types 1 and 2 (HIV-1 and HIV-2), respectively (IOM, 2002b).
Reforestation and Lyme Disease
Lyme disease is a classic example of a microbial threat influenced by multiple environmental determinants. The principal vector in North America is the deer tick, Ixodes scapularis, which must take blood meals as a larva, nymph, and adult to survive and reproduce. Wild rodents, especially Peromyscus spp. in the northeastern and midwestern United States and Neotoma spp. in the western United States, serve as reservoirs for the bacterial agent of Lyme disease, Borrelia burgdorferi, but white-tailed deer are the definitive hosts for the adult ticks. The emergence of the disease has been linked in part to the reforestation of former farmland, which led to a dramatic increase in the distribution and abundance of white-tailed deer populations (Barbour and Fish, 1993). People become infected when they encounter the tick vector, usually during outdoor recreation or near residences in wooded areas. Prior to the reforestation of land formerly cleared for farming, Lyme disease was unrecognized. Lyme disease has increased in incidence and geographic distribution in the past 10 years in the continental United States, and as people continue to build homes and expand their neighborhoods even farther into reforested areas, the number of cases continues to rise. The preservation of vertebrate biodiversity and community composition may help reduce the incidence of Lyme disease (LoGiudice et al., 2003).
Dam Building and Schistosomiasis
Environmental changes resulting in the creation of standing water, such as dam building or the diversion of water by canalization and irrigation, have been implicated in the reemergence of infectious diseases transmitted by mosquitoes and other arthropod vectors. For example, the incidence of Japanese encephalitis, which accounts for approximately 7,000 deaths annually in Asia, is closely associated with the increase in mosquitoes that reproduce as a consequence of flooding fields for rice growing (Morse, 1995). Outbreaks of RVF in some parts of Africa have been associated with increased mosquito density as a result of dam building, as well as periods of heavy rainfall (Monath, 1993).
Environmental changes involving dam building and irrigation projects appear to be allowing the spread of schistosomiasis to new areas. More than 200 million people worldwide are infected with this parasitic worm disease, and up to three times as many are at risk of infection; the disease causes chronic urinary tract disease and often results in cirrhosis of the liver and bladder cancer (WHO, 1999a). The development of dams in the Senegal River basin, for example, is among the major factors leading to a significantly increased prevalence of schistosomiasis over a period of only 3 years (Gryseels, 1994). Similarly, the Aswan High Dam has been implicated in increased rates of schistosomiasis in Egypt (El Alamy and Cline, 1977; Abdel-Wahab, 1982). The potential for the Three Gorges Project in China to create conditions conducive to schistosomiasis transmission concerns many scientists.
Schistosomiasis is caused by trematode worms, or blood flukes, of the genus Schistosoma. Transmission occurs primarily in tropical and subtropical regions and involves certain snail species, primarily of the genera Biomphalaria, Bulinus, and Oncomelania, that serve as hosts for one stage of the parasite’s life cycle. Humans become infected as they work or bathe in water infested with Schistosoma larvae released by the snails. These larvae penetrate the skin and travel to internal organs, where they mature into adult worms that mate and reproduce. Human disease is a consequence of the reaction to eggs deposited in tissues by the adult worms, which live for years inside chronically infected people. The eggs are released when people urinate or defecate; if they do so into snail-infested waters, more snails become infected, perpetuating the cycle. Thus both people and snails are integral to Schistosoma biology, and environmental changes that affect either can increase the risk of infection. The rise in water levels and changes in flow rates that result from dam building may increase contact between snails and parasites, as well as create the fertile soil and sand beds in which snails proliferate.
HUMAN DEMOGRAPHICS AND BEHAVIOR
The opportunity for transfer of a microbe from one human to another has grown with the explosion of the world’s population. People are also rapidly moving to urban settings by choice or circumstance, leading to close contacts conducive to the spread of infection. Increases in life expectancy have also increased the proportion of elderly among the population, who are at greater risk of infection by virtue of the natural decrease in immune function with age. At the opposite end of the age spectrum, the epidemiology of childhood infections has been altered by the numbers of children in day care centers, often the result of maternal employment (see Box 3-7). Yet another factor in the spread of infection is the growing number of immunocompromised individuals due to the use of steroids and other immunosuppressives, cancer chemotherapy, and HIV infection. Finally, these groups, and potentially all humans, place themselves and others at risk of infection through various behaviors.
The explosive growth of the world’s population is illustrated in Figure 3-3. At the beginning of the twentieth century, the world’s population was approximately 1.5 billion. By 1960 it had doubled, and by late 1999 it had quadrupled to 6 billion (United Nations Population Fund, 1999). The world population is growing at an annual rate of 1.2 percent, or 77 million people, per year (United Nations Population Division, 2001); six countries (India, China, Pakistan, Nigeria, Bangladesh, and Indonesia) account for half of this annual growth. International migration is projected to remain high during the twenty-first century. The more developed regions are expected to continue being net receivers of international migrants, with an average gain of about 2 million persons per year over the next 50 years (United Nations Population Division, 2001).
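The growth figures above are internally consistent: 1.2 percent of a 6 billion base is roughly 72 million people per year, close to the 77 million cited (the stated rate is rounded). A brief illustrative calculation (the function names are ours, not from the report):

```python
def annual_increase(population, rate):
    """Absolute yearly increase at a given annual growth rate."""
    return population * rate

def project(population, rate, years):
    """Compound growth: population after `years` at a constant annual rate."""
    return population * (1 + rate) ** years

# 1.2 percent annual growth on a base of 6 billion (late 1999)
print(round(annual_increase(6e9, 0.012) / 1e6))   # ~72 million per year
print(round(project(6e9, 0.012, 10) / 1e9, 2))    # ~6.76 billion a decade later
```

The small gap between 72 and 77 million reflects rounding in the published rate, not an inconsistency in the underlying estimates.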
Like the rest of the world, the United States has seen a steady increase in its population; by 2000, the U.S. population had reached 281,421,906 (U.S. Census Bureau, 2002a). During the 1990s, every state gained population for the first time in the twentieth century (U.S. Census Bureau, 2001). It is estimated that 852,000 more people entered the United States than left it between July 1998 and July 1999. The nation’s foreign-born population grew from 10 million in 1970, to 14 million in 1980, to 20 million in 1990; by March 2000, it was an estimated 28 million. The largest share of the growth in the foreign-born U.S. population between 1970 and 2000 came from Latin America.
BOX 3-7 The Changing Demographics of Child Care in the United States
Although child day care establishments have existed in the United States since the first known facility opened in Boston in 1828, changes in maternal employment during the past three decades have dramatically increased the percentage of children enrolled in out-of-home child care. This increase has in turn significantly altered the epidemiology of childhood infectious diseases. Among women with children under age 5, the proportion working outside the home increased from 30 percent in 1970 to 75 percent in 2000. Estimates of the number of children attending out-of-home day care in the United States range from more than 5.3 million (Osterholm et al., 1992) to more than 11 million (Klein, 1986). About 65 percent of 4-year-old children attended organized day care or nursery schools in 1995 (Ball et al., 2002).
Several studies conducted during the last 15 years have shown that exposure to other children through nonparental child care arrangements increases the likelihood of contracting an infectious disease, including respiratory illness, ear infection, diarrhea, and skin disease. Of particular interest are epidemics of diarrhea in day care centers, caused by organisms believed to be rare 20 years ago, including the intestinal parasites Cryptosporidium (CDC, 1984) and Giardia lamblia (Sealy and Schuman, 1983).
The risk of infection does not end with the child. Day care providers, family members, and even the community at large are all potentially at risk for infectious diseases that occur in day care centers. In one study, for example, the rates of gastrointestinal infection brought into the home by a child in day care were 26 percent for Shigella infection, 15 percent for rotavirus infection, and 17 percent for G. lamblia infection (Pickering et al., 1981). More recently, 34 percent of reported cases of a 1997 community-wide hepatitis A epidemic were associated with contact with child care centers; people who had direct contact with child care were 6 times more likely to become infected than people who did not have such contact (Venczel et al., 2001).
Many more studies are needed to better evaluate which particular infectious diseases pose an increased risk among children who attend child care centers and how these risks can be decreased. Compulsory hand washing after handling infants, blowing noses, changing diapers, or using toilet facilities is believed to be one of the most important preventive measures (Klein, 1986). Other measures include ensuring that facilities have adequate light and ventilation, space for play and rest, and an appropriate number of toilet and wash areas; that toilet facilities are separated from areas where foods are prepared and eaten; that sinks, soap dispensers, and paper towels are plentiful and appropriately placed; that surfaces, toys, and materials are regularly cleaned; and that staff members are well trained with regard to hygiene. Indeed, it has been recommended that national regulations and standards be developed and enforced to ensure infection control in out-of-home child care centers and homes.
The global population is aging at an unprecedented rate (see Figure 3-4). Lower fertility rates, reduced death rates, and improved health have led to growing proportions of elderly people worldwide. As previously noted, aging increases susceptibility to infection even in the absence of other underlying health conditions. This is likely due to a number of factors, including senescence of gut-associated lymphoid tissue (Morris and Potter, 1997), a reduction in gastric acid secretion (low stomach pH serves as protection from enteric pathogens) (Feldman et al., 1996; Haruma et al., 2000), and diminished cell-mediated immunity and impaired host defenses (Strausbaugh, 2001). The efficacy of immunizations also decreases with advancing age (Bernstein et al., 1999). Some elderly are more vulnerable to infectious disease because of a breakdown in host defenses due to chronic disease, use of medications, and malnutrition.
Virtually all nations are experiencing growth in their elderly populations in absolute numbers (Kinsella and Velkoff, 2001). Developing countries have seen the most rapid increase, accounting for 77 percent of the
world’s net gain of elderly individuals from July 1999 to July 2000 (615,000 people monthly). Despite this increase, Europe remains the region with the highest proportion of population aged 65 and over (15.5 percent in 2000), while sub-Saharan Africa has the lowest proportion (2.9 percent).
Life expectancy has increased enormously in the United States since the beginning of the twentieth century (see Figure 3-5). In developed countries, the average national gain in life expectancy at birth was 66 percent for males and 71 percent for females between 1900 and 1990 (Kinsella and Velkoff, 2001). Increases were more rapid in the first half than in the second half of the century because of the expansion of public health services and infectious disease control programs that greatly reduced death rates, particularly among infants and children in developed countries. Estimates of life expectancy in developing countries in the early part of the 1900s are generally unreliable. Since World War II, changes in life expectancy in developing regions have been fairly uniform. Some exceptions include Latin America and, more recently, Africa as a result of the HIV/AIDS epidemic. In 2000, life expectancy in developing countries ranged from 38 to 80 years, as compared with 66 to 81 years in developed nations.
The mass relocation of rural populations to urban areas is one of the defining demographic trends of the latter half of the twentieth century. The world’s cities are currently growing at four times the rate of their rural counterparts, and at least 40 percent of their expansion is the result of migration rather than natural increase. Each day about 160,000 people move from the countryside to metropolitan areas, and almost 50 percent of the world’s population lives “in town” for significant periods (United Nations Population Fund, 2001; United Nations Population Division, 2002). The movement of people to cities has accelerated in the past 50 years (see Figure 3-6). The world’s urban population was 2.9 billion in 2000 and is expected to climb to 5 billion by 2030. Urbanization is greater in the more developed regions of the world, where 75 percent of the population lived in urban settings in 2000. Although the percentage of urban dwellers in less-developed regions had increased from 18 percent in 1950 to 40 percent in 2000, the level and pace of urbanization differed markedly among the major constituent areas. Latin America and the Caribbean as a whole became highly urbanized, with 75 percent of their populations living in urban settlements in 2000. Conversely, only 37 percent of the populations of Africa and Asia lived in urban areas in 2000; however, that share is expected to rise to more than 50 percent on both continents by 2030. With 26.5 million inhabitants, Tokyo was the most populated urban agglomeration in the world in 2001 (see Table 3-1).
Rural-to-urban migration in cities without adequate infrastructure has serious health consequences, not the least of which is the spread of infectious diseases. Many recent arrivals live in dire circumstances and suffer serious environmental health problems due to inadequate infrastructure and poor access to health services (WHO, 2001c). Impoverished rural migrants typically live in unusually crowded living conditions as a result of housing costs and relatively large family sizes, which further contribute to the spread of communicable diseases (United Nations Population Fund, 2001). Infants in poorer and more crowded portions of cities are at least four times more likely than infants in more affluent neighborhoods to die from diseases such as tuberculosis and typhoid. Moreover, many young women who migrate to cities in search of economic opportunity are able to gain economic security only through the commercial sex trade (Asthana and Oostvogels, 1996), and men often travel far from home to seek work in cities, where their reliance on the commercial sex trade increases the risk of HIV and other sexually transmitted diseases. Migrants who contract HIV in urban areas generally return to their villages to be cared for by their families, often perpetuating transmission (Adeyi et al., 2001). Other health concerns associated with increased urbanization include lack of access to clean water and sanitation, absence of adequate shelter (e.g., screens on
windows), and health hazards posed by open sewers and people living in close association with animals.
Urbanization is closely tied to other demographic trends as well. Links among megacities,3 smaller urban areas, and the surrounding rural hinterland are accelerating with the integration of all segments of society into the global economy. This increasing social, economic, and physical mobility has serious epidemiologic consequences. In recent decades, the incidence of several vector-borne diseases has increased dramatically, partly as a result of changing patterns of settlement (Gratz, 1999). Increased transport to centralized markets has helped spread emerging infections between rural and urban areas; an example is the movement of leishmaniasis to urban
TABLE 3-1 World Megacities, 1975, 2000, and (Projected) 2015: Population in Millions

1975: New York (15.9); Mexico City (11.2); Sao Paolo (10.0)

2000: Mexico City (18.1); Sao Paolo (17.8); New York (16.6); Los Angeles (13.1); Buenos Aires (12.6); Metro Manila (10.9); Rio de Janeiro (10.6)

2015 (projected): Sao Paolo (20.4); Mexico City (19.2); New York (17.4); Metro Manila (14.8); Los Angeles (14.1); Buenos Aires (14.1); Rio de Janeiro (11.9)
SOURCE: United Nations Population Fund, 2001.
areas in South America (Jeronimo et al., 1994). Ease of transport also allows pathogens and microbial genetic material to travel between regions or continents within human hosts, vectors, animals, sources of food (plants), or cargo (Wilson, 1995).
Advances in medicine, science, and technology have led to an increase in the number of people who are immunocompromised. The number of cancer patients has grown steadily during the past two decades, and cancer patients are surviving longer than ever before. The population of transplant patients, who are highly susceptible to infectious diseases, has also been increasing. Likewise, the widespread use of potent antiretroviral combination therapy has led to a growing population of people living with HIV, who retain a potentially lifelong risk of spreading the infection to others.
The emergence of fungi, such as Aspergillus spp., and other opportunistic agents that were previously uncommon or unrecognized as human pathogens is related primarily to an increase in infection-susceptible populations, such as those who have undergone cancer chemotherapy or organ and tissue transplantation or who are infected with HIV (Dixon et al., 1996). In some oncology and transplant hospital units, more than 10 percent of patients are colonized or infected with vancomycin-resistant enterococci (Morris and Potter, 1997). Moreover, under such conditions of poor host immunity, latent infections that have lain dormant for decades, such as tuberculosis, leishmaniasis, and histoplasmosis, can be reactivated.
Human behavior, both individual and collective, plays a critical role in disease emergence. Behavioral interventions to stop the spread of disease have a long history in public health. For example, a century before the rat-flea vector of bubonic plague was discovered, European travelers were advised not to wear the fur robes often bestowed upon guests of the Ottoman sultan in Istanbul (De Tott, 1973) because this action was known to cause illness. Even after the availability of effective antibiotics, behavior modification continued to be considered an invaluable strategy in the prevention of infectious disease; for some diseases, it is the only option.
The prevention of new cases of HIV is dependent on human behavior strategies; with respect to HIV, of course, such strategies must exist in a sociopolitical climate often characterized by a taboo on discussion of risky behaviors, including use of illicit drugs and unprotected sex (Coughlan et al., 2001; Holtgrave and Pinkerton, 2000; Hearst et al., 1995). The behaviors that put individuals at greater risk of contracting infection, however, do not exist in a vacuum. A network of actors encompassing conjugal, family, neighborhood, national, and regional relationships defines the economic, social, and political influences on the choices of individuals. Although changes in human behavior are difficult to achieve and maintain, in the case of HIV, such changes are the only effective prevention strategy to date.
Illicit Drug Use
Infectious diseases have a long history of being associated with illicit drug use. In the 1920s, drug users were often infected with malaria and syphilis as a result of sharing unsterilized syringes while injecting heroin or opium. When dealers started diluting heroin with quinine in the 1930s, the problem of malaria among drug users virtually disappeared as a result of
the antimalarial properties of quinine (Frank, 2000). Illicit drug use also increases the risk for infection with hepatitis A, B, and C, as well as Staphylococcus aureus (Lange, 1989; Kapadia et al., 2002; Chambers, 2001; Ribera, 1998; Tuazon and Sheagren, 1974). The introduction of crack cocaine into New York City in 1985 and the increasing intravenous use of heroin played important roles in the emergence of a variety of infectious diseases, notably tuberculosis and HIV (Garrett, 1998).
HIV has arguably had the greatest impact on awareness of the dangers of illicit drug use with regard to infectious diseases. People who share drug injection equipment, those who have unprotected sex with injection drug users, and children born to mothers who contracted HIV by sharing needles or having unprotected sex with an illicit drug user are at increased risk for HIV infection. In the United States, injection drug use has been responsible directly or indirectly for more than one-third of all AIDS cases since the epidemic began (CDC, 2002f). More than half of all AIDS cases among women in the United States have been associated with illicit drug use, compared with one-third of cases among men.
Noninjection drug use contributes indirectly to increased transmission of STDs, including HIV, when users trade sex for drugs or money. In addition, crack cocaine use and heroin sniffing have been associated with high-risk sexual behaviors among both men and women (Campsmith et al., 2000; Sanchez et al., 2002). Gay and bisexual men engage in riskier sexual behavior after taking popular club drugs such as methylenedioxymethamphetamine (Ecstasy), ketamine (Special K), and volatile nitrites (poppers) (Mattison et al., 2001; Mansergh et al., 2001). The resulting high-risk sexual behaviors have accounted for the greater prevalence of HIV infection among crack smokers and gay men (Edlin et al., 1994; Schwarcz et al., 2002).
The association between illicit drug use and HIV/AIDS is a worldwide problem. In 1992, only 52 countries reported HIV infection associated with illicit drug use. By the end of 1999, that number had jumped to 114. The most affected regions are Southern and Eastern Europe, Central Asia, North America, East Asia, and Latin America. Up to 90 percent of the registered HIV infections in the Russian Federation have been attributed officially to injection drug use (UNAIDS and WHO, 2002). Even Africa, where injection drug use was once believed not to be a problem, is showing an increased incidence of HIV infection associated with illicit drug use. In a 2000 study of drug injection in Lagos, Nigeria, that was presented at the XIIIth International AIDS Conference, 20 percent of 400 interviewed drug users reported having injected either heroin (63 percent) or cocaine (19 percent); 35 percent of the illicit drug users had initiated at least one person into drug injecting within the past 6 months. Among the illicit drug users, 9 percent were HIV-positive, compared with a national average of 5.4 percent. Fur-
thermore, most of the female illicit drug users were commercial sex workers, suggesting an overlap between HIV epidemics among sex workers and illicit drug users. HIV infection associated with illicit drug use has been reported in other areas of sub-Saharan Africa as well, including Mauritius, Kenya, and South Africa.
Unprotected sex is a key factor in the persistence of sexually transmitted diseases as a major public health problem worldwide (see Box 3-8). Today, more than 25 STDs are recognized (McIlhaney, 2000). According to recent reports, 12 to 15 million Americans, including 3 million teenagers, are infected with STDs every year (American Social Health Association, 1998; IOM, 1997).
STDs are a major problem among adolescents. Several national surveys have indicated that sexual activity among American teenagers has not changed dramatically over the past decade; nearly half of all high school students have engaged in sexual intercourse by the time they graduate (CDC, 2002g). While condom use among teenagers increased significantly in the 1990s, about 40 percent still report no condom use during last sexual intercourse. Moreover, teenagers tend to have serial monogamous sexual relationships that are short-lived, thereby increasing their exposure to multiple partners and their risk of contracting STDs (Overby and Kegeles, 1994).
Teenagers are not alone in their attitudes about risky sexual behaviors, however (see Box 3-9). More than 50 percent of men who have sex with men (both HIV-positive and HIV-negative) who reported having anal sex also reported having unprotected anal sex (Ostrow et al., 2002), a trend that appears to have been increasing since 1994 (Chen et al., 2002b; Katz et al., 2002). In another study, nearly one-third of HIV-infected men who were interviewed reported having unprotected vaginal or anal sex within the past year (Simon et al., 1999).
TECHNOLOGY AND INDUSTRY
A wealth of technological advances over the past century, from modern antibiotics to organ transplants to the pasteurization of food products, has greatly improved health and well-being, added years to life expectancy, and eliminated many diseases that were prevalent in the nineteenth century, including typhoid, scarlet fever, and brucellosis. However, technological and industrial advances often come at a price. New infectious diseases have emerged as a direct result of changes in technology and industry; these include Legionnaires’ disease (air-conditioning cooling towers),
BOX 3-8 Sexually Transmitted Diseases
Sexually transmitted diseases (STDs) remain a major public health problem in both industrialized and developing nations. If left untreated, these infections can lead to acute illness, infertility, long-term disability, and death, and nearly all STDs increase the likelihood of HIV transmission (Fleming and Wasserheit, 1999). The exact magnitude of the global STD burden is unknown (WHO, 2001d), owing in part to poor data collection, the large number of asymptomatic infections, and the fact that only a portion of symptomatic individuals seek medical care and are subsequently reported. The World Health Organization estimates that 340 million new cases of STDs—syphilis (12 million), chlamydia (92 million), gonorrhea (62 million), and trichomoniasis (174 million)—occurred worldwide in 1999. The largest number of new infections occurred in South and Southeast Asia, followed by sub-Saharan Africa and then Latin America and the Caribbean. Sub-Saharan Africa experienced the greatest incidence per thousand population (WHO, 2001d).
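The four disease-specific WHO estimates quoted above do sum to the stated global total, as a trivial check confirms (figures in millions, taken directly from the preceding paragraph):

```python
# New STD cases worldwide in 1999, in millions (WHO, 2001d, as quoted above)
new_cases_millions = {
    "syphilis": 12,
    "chlamydia": 92,
    "gonorrhea": 62,
    "trichomoniasis": 174,
}

total = sum(new_cases_millions.values())
print(total)  # 340
```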
Chlamydia. South and Southeast Asia have the most new cases of infection with Chlamydia trachomatis (43 million) followed by sub-Saharan Africa (16 million) and Latin America and the Caribbean (9.5 million) (WHO, 2001d). Studies among pregnant women in India have shown a prevalence rate of 17 to 21 percent (Paul et al., 1999; Rastogi et al., 1999). Reported rates of chlamydia infection have increased steadily in the United States since 1984; in 2001, 278.3 cases per 100,000 people were reported. Increased reports of chlamydia infection during the 1990s reflect the expansion of chlamydia screening activities, use of increasingly sensitive diagnostic tests, increased emphasis on case reporting from providers and laboratories, and improvements in the information systems for reporting. Higher rates in the southern region of the United States likely reflect both an expansion of screening activities in the South and the high burden of disease in this region (CDC, 2002h; IOM, 1997).
Gonorrhea. The estimated number of new cases of infection with Neisseria gonorrhoeae among adults worldwide in 1999 was 62 million. South and Southeast Asia have the most cases (27 million) followed by sub-Saharan Africa (17 million) and Latin America and the Caribbean (7.5 million) (WHO, 2001d). A notable increase in gonorrhea rates has been seen in Eastern Europe, with the highest rates occurring in Estonia, Russia, and Belarus (111, 139, and 125 per 100,000 people, respectively). In the United States, gonorrhea rates peaked in 1975 (467.7 cases per 100,000) and declined following implementation of the national gonorrhea control program in the mid-1970s (CDC, 2001f). The 2001 rate of gonorrhea, 128.5 cases per 100,000 persons, far exceeds the Healthy People 2010 objective of 19 cases per 100,000 persons. Despite a decrease among African Americans, rates in this population remained extremely high (782.3 in 2001); rates were highest for African Americans aged 15 to 24 years (CDC, 2002h); African American women aged 15–19 years had a gonorrhea rate of 3,495.2 cases per 100,000, 18 times higher than the rate among non-Hispanic white females of similar age.
Syphilis. The estimated global burden of new cases of infection with Treponema pallidum among adults in 1999 was 12 million (WHO, 2001d). As with other STDs, the greatest number of cases occurs in South and Southeast Asia and sub-Saharan Africa (4 million each), followed by Latin America and the Caribbean (3 million). The newly independent states of the former Soviet Union have recently seen a dramatic rise in syphilis rates, from 5–15 per 100,000 people in 1990 to 120–170 per 100,000 people in 1996 (WHO, 2001d). Although the primary and secondary syphilis rates in the United States declined by 90 percent from 1990 to 2000, the disease remains an important problem in the South and among certain subgroups. In 2001, the rate of primary and secondary syphilis reported among African Americans (11.0 cases per 100,000 people) was nearly 16 times greater than the rate reported among non-Hispanic whites (0.7 cases per 100,000 people) (CDC, 2002h). Recent outbreaks of syphilis among men who have sex with men may indicate an increase in high-risk sexual behavior that places them at risk for all STDs (Wolitski et al., 2001; CDC, 1999a,b; Aral, 1999). Expanding partner notification to include more high-risk populations through social networks and increasing screening among high-risk populations may improve control of inner-city syphilis epidemics (Gunn et al., 1995).
BOX 3-9 A Behavior Paradox
Antiretroviral therapy is a principal factor in prolonging the life of AIDS patients in the United States and in delaying the progression of AIDS in HIV-infected individuals receiving this multidrug regimen. Ironically, however, the role played by antiretroviral therapy in decreasing the death rate from AIDS may now be a factor in the increasing rate of unsafe sexual behaviors—potentially increasing the rate of new HIV infections. For example, men who have sex with men and believe that antiretroviral therapy decreases HIV transmission are more likely to engage in unprotected anal sex (Huebner and Gerend, 2001; Ostrow et al., 2002). Risky sexual behavior has also been associated with the belief that antiretroviral therapy improves health in HIV-positive men (Ostrow et al., 2002). Other studies have revealed that similar attitudes about antiretroviral therapy have led to increased sexual risk taking among both HIV-negative and HIV-positive illicit drug users. The increased rates of unprotected sex that have been associated with the availability of antiretroviral therapy reflect an ongoing trend (Chen et al., 2002b).
BOX 3-10 E. coli O157:H7
Escherichia coli O157:H7 emerged rapidly from a cattle reservoir to become a public health problem, associated both with the large-scale production and distribution of ground beef and with hemolytic uremic syndrome (HUS), a life-threatening complication that occurs primarily among young children and the elderly (Bender et al., 1997; Elder et al., 2000). E. coli O157:H7 has a global distribution among cattle, and its genomic diversification suggests the pandemic spread of several clones over time, although the mechanisms by which it spread are unknown (Kim et al., 2001). Contact with cattle and environmental exposure to agricultural runoff likely contribute to the increased incidence of E. coli O157:H7 infection in the northern-tier states that have substantial dairy industries. Secondary environmental contamination from manure runoff has resulted in outbreaks associated with produce items including apple cider, lettuce, and sprouts. Outbreaks have also been associated with swimming in a crowded lake and with drinking contaminated, unchlorinated municipal water (Chin, 2000).
toxic shock syndrome (super-absorbent tampons), and E. coli O157:H7 infection (mass production of ground meat) (Cohen, 2000) (see Box 3-10). Even the manner in which animals are raised before entering the meat processing industry, such as the use of antimicrobials for growth promotion, can impact on microbial threats to health.
Animal Husbandry Practices
Animal feeding operations (AFOs) are agricultural enterprises in which animals are kept and raised in confined situations (USDA and EPA, 1999). Approximately 450,000 AFOs exist in the United States. A relatively small number of these are concentrated animal feeding operations (CAFOs). These facilities either have more than 1,000 animal units, have 301 to 1,000 animal units and discharge wastes through human-made conveyances or directly into U.S. waters, or have been designated as CAFOs because of their significant pollution of U.S. waters (USDA and EPA, 1999). The total number of animal units in the United States increased by roughly 4.5 million (approximately 3 percent) between 1987 and 1992. During this same period, however, the number of AFOs decreased, indicating a consolidation within the industry overall and greater production from fewer, larger AFOs (GAO, 1995). In 1992, roughly 6,600 agricultural operations nationwide had more than 1,000 animal units (USDA and EPA, 1999).
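The regulatory definition in the preceding paragraph amounts to a three-branch rule. A sketch of that logic follows (the function and parameter names are ours for illustration, not USDA/EPA terminology):

```python
def is_cafo(animal_units, discharges_to_waters=False, designated_for_pollution=False):
    """Classify an animal feeding operation as a CAFO per the three
    criteria summarized above (USDA and EPA, 1999):
      1. more than 1,000 animal units; or
      2. 301 to 1,000 animal units, with wastes discharged through
         human-made conveyances or directly into U.S. waters; or
      3. designated a CAFO because of significant pollution of U.S. waters.
    """
    if animal_units > 1000:
        return True
    if 301 <= animal_units <= 1000 and discharges_to_waters:
        return True
    return designated_for_pollution

print(is_cafo(1500))                            # True: criterion 1
print(is_cafo(500, discharges_to_waters=True))  # True: criterion 2
print(is_cafo(500))                             # False: an ordinary AFO
```

On this reading, an operation below 301 animal units can still be a CAFO only through designation (criterion 3), which is why the roughly 6,600 operations above 1,000 animal units form only part of the regulated population.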
Poultry and beef cattle feedlot sizes (animals per feedlot) have increased dramatically. Between 1960 and 1994, per capita chicken consumption in the United States tripled from 24 to 72 pounds (Animal and Plant Health Inspection Service, 1999a). The number of farms selling broilers to meet this demand, however, decreased dramatically; this consolidation left greater numbers of chickens per feedlot. By 2000, almost 36 percent of cattle were being fed on farms of 32,000 head or more (USDA, 2000a). Similar trends have been reported in the swine industry, where in 1995 approximately 60 percent of pigs were raised on farms of more than 1,000 head (USDA, 1997).
Serious concerns surround AFOs and CAFOs. The manure and wastewater produced in these facilities have the potential to overwhelm a watershed’s ability to assimilate the nutrients (nitrogen and phosphorus) contained in the waste. Excess nutrients in water can result in or contribute to low levels of dissolved oxygen (anoxia), eutrophication, and toxic algal blooms. These conditions may be harmful to human health and, in combination with other circumstances, have been associated with outbreaks of microbes such as Pfiesteria piscicida (USDA and EPA, 1999; Grattan et al., 1998). Moreover, decomposing organic matter (animal waste) can reduce oxygen levels and cause fish kills. Pathogens in manure can also represent a food safety concern, especially when used as fertilizer or spread on pasturelands. Cryptosporidium, Coccidioides, Giardia, E. coli, Salmonella, Campylobacter, and Listeria have all been linked to human disease from fecal contamination of food and water, some of which has involved antimicrobial-resistant strains.
Antimicrobials are used in food animals for the treatment and prevention of infections, as well as for growth promotion and enhanced feed efficiency (Gorbach, 2001; McEwen and Fedorka-Cray, 2002). Use of antimicrobials in both swine and beef cattle at relatively low concentrations for growth promotion or disease prophylaxis appears to be a fairly common practice in the United States (USDA, 2001, 2000b; Wegener et al., 1999). Use of antibiotics in these animals has led to antimicrobial resistance (Fey et al., 2000; Aarestrup et al., 2001; Usera et al., 2002; White et al., 2001). Poultry growers, for example, use fluoroquinolone drugs to treat Escherichia coli infections. When a veterinarian diagnoses E. coli in one bird, farmers treat the whole flock by adding the drug to drinking water (FDA, 2001). This usually kills E. coli in the chickens, but may select for drug resistance among other bacteria in the process. Campylobacter—a bacterium commonly found in poultry—does not cause illness in chickens, but can cause severe illness in humans who come into contact with it through undercooked or contaminated meat. The usual therapy for human Campylobacter infection—fluoroquinolones—will likely prove ineffective against a drug-resistant Campylobacter strain. Even as the FDA deliberates banning the use of fluoroquinolones in poultry, some major fast food companies have announced they will no longer purchase poultry from suppliers who use the drug (Humane Society of the United States, 2002).
Many antibiotics used in animal agriculture are poorly absorbed in the animal gut and have been detected in groundwater near hog waste lagoons along with antibiotic-resistant bacteria (CDC, 1998b; Chee-Sanford et al., 2001). It is estimated that 25 to 75 percent of the antibiotics administered to feedlot animals may be excreted unaltered in feces. Given that the annual production of livestock and poultry waste in the United States is nearly 180 million tons, this waste is a potentially large source of antibiotics released into the environment. Lagoons and pit systems are typically used for waste disposal in animal agriculture. Seepage, runoff due to flooding, and fertilizing with liquid manure can expose fields and waterways to antibiotics, as well as antibiotic-resistant bacteria. Not surprisingly, antibiotics and drug-resistant bacteria have been detected in soil and groundwater in areas with fecal contamination (Hamscher et al., 2002; Esiobu et al., 2002; French et al., 1987; Kelch and Lee, 1978). Since the availability of monitoring data and other information about the fate and toxicity of antibiotics in aquatic areas and soil is limited, it is difficult to determine conclusively the risk posed to humans by antibiotics in the environment (Kummerer, 2000; Jones et al., 2001; Hamscher et al., 2002).
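The 25 to 75 percent excretion estimate above can be turned into an illustrative range calculation; a sketch in Python (the administered tonnage used below is purely hypothetical, chosen for illustration, since the text does not state a national figure):

```python
# Illustrative range calculation for antibiotics excreted unaltered by
# feedlot animals, using the 25-75 percent estimate quoted in the text.
# The administered tonnage passed in is a HYPOTHETICAL input for
# illustration only; the report does not state a national total.

def excreted_range(administered_tons: float,
                   low: float = 0.25, high: float = 0.75) -> tuple:
    """Tons of antibiotic excreted unaltered, per the 25-75% estimate."""
    return administered_tons * low, administered_tons * high

# e.g., a hypothetical 1,000 tons administered would imply 250-750 tons
# reaching manure and, potentially, soil and groundwater via lagoon
# seepage, flood runoff, and land application of liquid manure
low_tons, high_tons = excreted_range(1_000.0)
```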
Aquaculture is one of the fastest-growing food production sectors in the world (WHO, 1999c). Worldwide, fish and shellfish processing increased more than threefold, from 7 million tons in 1984 to 23 million tons in 1996. The vast majority of global aquaculture production is in Asia, with developing countries accounting for roughly 87 percent of total production. In the United States, aquacultured species contribute up to 15 percent of the seafood supply (Garrett et al., 1997).
Various antimicrobials are licensed for use in fish and shrimp production worldwide. They are usually administered in feed, either added during manufacture or surface-coated onto the feed pellets. Unfortunately, little information is available on the types and amounts of antimicrobials used in aquaculture, making assessment of the associated public health risks more difficult (WHO, 2002e). Plasmid-mediated resistance to antimicrobials has, however, been identified in a number of bacterial fish pathogens (Aoki, 1988; Chandrasekaran and Lalithakumari, 1998; De Grandis and Stevenson, 1985). Because of lessons learned from antimicrobial use in land-based animal production, some countries have been exploring non-antimicrobial alternatives for some time. Norway, for instance, has been able to diminish antimicrobial use in aquaculture by more than 90 percent in a very short period of time by changing certain production practices and increasing the use of vaccines (WHO, 2002e).
Advances in Health Care
Advances in health care technology have led to improved survival of vulnerable populations, increased the numbers of invasive procedures performed, and prolonged the use of indwelling catheters and feeding tubes. Such advances have created new vehicles for the transfer of infections. The use of plastic catheters, artificial heart valves, and prosthetic joints carries the risk of organisms adhering to the surfaces of the synthetic materials; such infections are difficult to eradicate. Hepatitis C transmission has been well described in dialysis wards as a result of leakage or backflow of dialysate into machines used by multiple patients (Almroth et al., 2002), and the percentage of reported vancomycin-resistant enterococcus in U.S. dialysis centers increased from 12 percent in 1995 to 33 percent in 2000 (Tokars et al., 2002). Because hospitals are typical locations where vulnerable populations seek medical care, they readily become sites for the transfer of infections among patients (often through health care providers and other staff), and onward into long-term care facilities and even the community at large upon transfer or discharge (see the later discussions of nosocomial infections).
Blood Product Safety
Each year in the United States, approximately 23 million units of blood components are transfused into patients who have lost blood as a result of burns, injuries, or surgical procedures, as well as patients with sickle cell disease and various other disorders. The number of people who are willing to donate blood is increasing; in 2001, nearly 7.5 million potential donors participated in Red Cross blood drives across the nation, a 6.1 percent increase over the previous year. During the past decade, however, numerous infectious agents worldwide have been identified as potential threats to the blood supply (Chamberland et al., 2001). Of particular concern are the novel hepatitis agents (TT virus [TTV] and SEN virus [SEN-V]) and transmissible spongiform encephalopathies (Creutzfeldt-Jakob disease [CJD] and variant Creutzfeldt-Jakob disease [vCJD]). TTV was first detected in 1997 in sera from three Japanese patients suffering from post-transfusion hepatitis unrelated to hepatitis viruses A to G (Nishizawa et al., 1997). It is now known that TTV may be associated with post-transfusion hepatitis in patients from several parts of the world (Bez et al., 2000; Niel et al., 1999; Poovorawan et al., 1998). SEN-V has also been implicated as a possible cause of transfusion-associated non–A to E hepatitis (Umemura et al., 2001).
In preliminary, limited studies, approximately 2 percent of current and pre-1990 blood donors have tested positive for SEN-V (Chamberland et al., 2001). Testing of serum samples archived at the National Institutes of Health revealed the proportion of cardiac surgery patients with evidence of new infection with SEN-V to be 10 times higher among those who had received blood transfusions (30 percent) than among those who had not (3 percent), and a SEN-V-positive donor could be identified for roughly 70 percent of SEN-V-positive recipients (Chamberland et al., 2001).
Despite screening of all blood products for hepatitis B and hepatitis C, on rare occasions screening tests fail to detect infected blood donors, and hence infections have been passed to transfusion recipients. A small number of cases of other diseases, such as malaria, American trypanosomiasis, babesiosis, Rocky Mountain spotted fever, and West Nile encephalitis, have been attributed to transfusions from donors believed to have lived in or traveled to disease-endemic areas (Chamberland et al., 1998; CDC, 2002i). Recent concern has focused on the real or potential transmission through blood transfusions of such diseases as Lyme disease and variant Creutzfeldt-Jakob disease (vCJD). Although no confirmed cases of CJD or vCJD have occurred through blood transfusion, health officials are concerned about the risk because of the high degree of uncertainty associated with these agents. For this reason, restrictions on who can donate blood based on potential contact with bovine spongiform encephalopathy (BSE) have been implemented. For example, among the many FDA guidelines based on this criterion, people who spent 3 months or more cumulatively in the United Kingdom between 1980 and 1996 should not donate blood; neither should people who received a blood transfusion in the United Kingdom between 1980 and the present.
Organ and Tissue Transplantation
In the United States, more than 23,000 human-to-human organ transplantations were performed in 2001 (see Table 3-2). The total number of single- and multi-organ transplants increased by 45 percent between 1991 and 2000. The immunosuppressive drugs used to prevent rejection of the transplanted organs weaken the body’s immune system and leave the host susceptible to infectious diseases. Opportunistic infections, such as those also seen in patients with AIDS, and nosocomial infections are of serious concern in this population. Bacterial infections remain the most frequently diagnosed infections in transplant recipients, and a striking rise in antimicrobial resistance has occurred among these pathogens (Singh, 2000).
More common than organ transplantation is the use of human donor tissues. Each year between 600,000 and 800,000 allograft implantation procedures are performed in the United States (McCarthy, 2002). FDA has
TABLE 3-2 Organ Transplants Performed and Patients Awaiting Transplants in 2001

[Table data not reproduced; surviving row labels include "Type of Transplant," "Kidney (5,969 living donors)," and "Pancreas after kidney."]

SOURCE: University Renal Research and Education Association, United Network for Organ Sharing, 2003.
specific regulations for the proper handling and processing of these tissues. Nonetheless, in November 2001 a 23-year-old man from Minnesota died after undergoing reconstructive knee surgery involving a bone–cartilage allograft (CDC, 2001l, 2002j). The blood cultures obtained premortem grew Clostridium sordellii, a potentially fatal anaerobic, spore- and toxin-forming organism. A few days later, an Illinois man who had received donor tissue from the same cadaver also became critically ill following reconstructive knee surgery. CDC investigators recovered 19 other unused tissues taken from that same donor. As of March 2002, federal officials had uncovered 26 cases of bacterial infection in otherwise healthy patients who had received musculoskeletal tissue allograft implants (not all from this same processor).
The need for donated human organs and tissues far exceeds the present supply. As of July 5, 2002, more than 80,000 persons living in the United States were awaiting organs for transplantation. The shortage of human donors has been increasing annually since the 1980s (Kemp, 1996) and has sparked a renewed interest in transplantation of organs and tissues across the species barrier (Candinas and Adams, 2000).
Xenotransplantation involves the transplantation of cells, tissues, and whole organs from one species to another. While xenotransplantation offers potential benefit for both individual recipients and society, it also represents a public health concern. Such procedures have the potential to
result in human recipients being infected with microbial agents that are not endemic in human populations, thereby potentially introducing new (xenogeneic) infections into the human community. This potential risk is presently unquantifiable (IOM, 2002d). Scientists and policy makers in the field must therefore deal with the challenge of weighing the uncertain collective risk of xenotransplantation against the potential benefit to both individuals and society (Chapman and Bloom, 2001). In 1999, European countries concluded that the scientific base was inadequate to permit proceeding to clinical trials, a decision that halted xenotransplantation studies there. The United States, however, decided that the only way to advance the scientific base was to proceed with caution to clinical trials (Daar, 1999; IOM, 1996).
In the United States, regulation of xenotransplantation procedures is within the purview of FDA, which has ruled that nonhuman primates should not be used as source animals for transplants until scientists have sufficient information to address the associated infectious disease risks (DHHS, 2000a). Significant concerns remain about the possible transmission of infectious agents from nonprimates. For example, pigs harbor retroviruses that until recently have been considered a negligible risk for human disease (Blusch et al., 2002). Several reports of the infection of human cells in vitro and of the spread of porcine endogenous retroviruses from transplanted porcine islets in murine models suggest a potential risk for xenogeneic infection. Such studies have reawakened concern regarding what clinicians and scientists do not know about interspecies transmission of retroviruses.
INTERNATIONAL TRAVEL AND COMMERCE
The potential for the rapid dissemination of pathogens, and their vectors and animal reservoirs, throughout the world is increasing greatly as the world continues to experience expanding global trade markets and increasing international travel. Infections that are carried by humans and transmitted from person to person—including influenza, measles, rubella, HIV, tuberculosis, Haemophilus influenzae, and Neisseria meningitidis—are especially amenable to being carried from one geographic area to another. Microbes that can colonize without causing symptoms (e.g., Neisseria meningitidis) or can infect and be transmissible at a time when infection is asymptomatic (e.g., HIV, hepatitis B and C) can spread easily in the absence of recognized infection in traveling or migrant hosts (see Box 3-11).
Airport and railroad malaria illustrate the continual movement of pathogens into new areas. Migrating humans have played a large role in the epidemiology of malaria worldwide, including the spread of drug-resistant malaria (Martens and Hall, 2000). Chloroquine-resistant falciparum malaria, for example, emerged in two widely separated locations—in Colombia and at the Cambodia–Thailand border—and has subsequently spread from both of these locations to other areas (Wellems and Plowe, 2001). Occasional instances of local malaria transmission have occurred even in the United States, where during the past 10 years, approximately 1,000 cases of imported malaria have been reported to CDC annually; the imported cases are mainly in travelers, military personnel, and immigrants returning to or coming from malaria-endemic areas (Olliaro et al., 1996; CDC, 2002l). Fortunately, in most cases the factors necessary to establish a transmission cycle are not present, and few human cases typically occur subsequent to introduction. On the other hand, all of the necessary ingredients for the establishment of West Nile virus were present in New York City in 1999, and the virus is now established across the United States (see Box 3-12) (Petersen and Roehrig, 2001).

BOX 3-11 Neisseria meningitidis: A Sacred Peril

Approximately 2 million people from about 140 countries, including about 15,000 from the United States, congregate in Saudi Arabia during the annual Hajj. In 1987, a Hajj-related outbreak of serogroup A Neisseria meningitidis, which then spread to other countries, led to the establishment of a requirement that all arriving pilgrims receive the meningococcal vaccine. In spring 2000, however, a similar outbreak of serogroup W135 N. meningitidis occurred among those who made the pilgrimage and subsequently spread to family members and other contacts who had not traveled to Saudi Arabia (CDC, 2000d, 2001g, 2001h). Serotyping, multilocus sequence typing, multilocus DNA fingerprinting, and other techniques were used to identify W135 isolates in many other countries (Taha et al., 2000; Popovic et al., 2000). Many of the pilgrims in 2000 had not been vaccinated against the W135 serogroup, since the quadrivalent vaccine (A, C, Y, W-135) is available in only a few countries (including the United States); thus many pilgrims were able to receive only the bivalent (A and C) vaccine.

Although the quadrivalent vaccine appeared to prevent disease in most people who had received it, they could still become carriers of W135 N. meningitidis and thus could still serve as sources of infection upon their return. Humans are the only important host for N. meningitidis, and many carry the organism in the oropharynx in the absence of symptoms. In a 2000 study, oropharyngeal cultures were obtained from 451 pilgrims departing from JFK Airport on direct flights to Saudi Arabia, and repeat cultures were taken from 727 returning pilgrims. None of the pilgrims carried W135 at the time of departure, but six of the returning pilgrims did (CDC, 2001h). A similar study done in Singapore showed that nearly 17 percent of 171 pilgrims carried N. meningitidis upon their return from the Hajj, 80 percent of which was serogroup W135 (Gewolb, 2001).
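The carriage figures from the Hajj studies in Box 3-11 can be restated as rough rates; a sketch in Python (the rounding is mine, and the Singapore carrier counts are back-calculated from the quoted percentages rather than reported directly):

```python
# Rough carriage-rate arithmetic for the Hajj studies cited in Box 3-11.
# Percentages are computed from figures quoted in the text; the rounded
# Singapore carrier counts are back-calculations, not reported values.

def carriage_rate(positive: int, total: int) -> float:
    """Carriage prevalence expressed as a percentage."""
    return 100.0 * positive / total

# JFK study: 0 of 451 departing pilgrims vs. 6 of 727 returning pilgrims
jfk_return_rate = carriage_rate(6, 727)            # roughly 0.8 percent

# Singapore study: ~17% of 171 pilgrims carried N. meningitidis,
# ~80% of those carried serogroup W135
singapore_carriers = round(0.17 * 171)             # about 29 carriers
singapore_w135 = round(0.80 * singapore_carriers)  # about 23 with W135
```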
The spatial mobility of the average human has increased more than 1,000-fold since 1800 (Gruebler and Nakicenovic, 1991). As the number of global travelers increases, so does the threat of the spread of infectious diseases. According to the World Tourism Organization (see Figure 3-7), world tourism grew by an estimated 7.4 percent in 2000, and the total number of international arrivals reached nearly 700 million; the latter figure is expected to reach the 1 billion mark by 2010 (Handszuh, 2001). Every region in the world is experiencing this increase. More than 5,000 airports worldwide have regularly scheduled international flights, allowing for quick links to urban centers across the globe. In 2000, more than 18 million commercial airline flights took off from airports around the world, carrying 1.1 billion passengers (Boeing, 2002). In 1999, 725,000 aircraft, 200,000 ships, 463,000 buses, 39,000 trains, and 125 million personally owned automobiles crossed U.S. borders, and in 2000, an estimated 400 million international travelers entered the United States by land, sea, or air.
Not only are more people traveling, but travel is also faster and more socially widespread, and penetrates into areas of the world not readily accessible in the past. In the nineteenth century, it could take a year to circumnavigate the globe by ship. Today a person can go around the world in less than 36 hours. Roads, bridges, canals, and transport vehicles allow humans to rapidly bypass physical and other barriers that would have stopped or slowed movement in the past. Modern technologies have expanded the range of easily accessible destinations, and allow travelers to enter and survive more extreme environments and encounter more isolated human populations. Not only can infected travelers introduce new microbes into new environments, both while traveling and after having returned home, but, as adventure travelers intrude on new environments and have contact with exotic wildlife, the chance that they will come into contact with microbes that have never before been recognized as human pathogens is real. They may then bring these exotic infectious agents back home with them, where, under appropriate circumstances, an introduced pathogen may persist and spread.
The interactions that occur during travel are another important component of the travel process, with implications for the emergence and spread of infectious disease. In particular, many documented transmission incidents or outbreaks of both airborne and foodborne infections—including influenza, smallpox, tuberculosis, measles, cholera, shigellosis, salmonellosis, and staphylococcal food poisoning—have occurred inside airplanes (Ritzinger, 1965; CDC, 1983; Kenyon et al., 1996). Influenza outbreaks have been known to affect more than 70 percent of the passengers on a single aircraft (Moser et al., 1979).

First, getting to the airport often involves using mass transportation (e.g., train or bus) and thus having contact with numerous people from many different areas in a small, enclosed space. At the airport, passengers are exposed to even more people in an often crowded terminal full of people from all over the world (or, in smaller airports, the region). The air inside the bus or airline terminal may have a higher level of microbial contamination than that inside the aircraft itself (Wick and Irvine, 1995). Once on the plane, passengers breathe recirculated, filtered, low-humidity (10–20 percent) air. Particular aircraft vary, but generally the recirculated air is mixed with fresh air about 20 times an hour during flight, and less frequently during takeoff and landing. Transoceanic travel often involves larger airplanes, whose passengers are exposed to even more people while in flight. When they exit the plane, travelers typically mingle with arriving passengers from multiple origins in the terminal, frequently spending minutes to hours standing in line, retrieving luggage, arranging transport from the airport, or moving on to another flight.

BOX 3-12 West Nile Virus

On August 23, 1999, an infectious disease physician notified the New York City Department of Health and reported two patients with encephalitis in a hospital in northern Queens. An investigation of nearby hospitals quickly uncovered six additional persons with symptoms suggestive of encephalitis. Initial blood and cerebrospinal fluid tests indicated that the cause was viral. All eight patients were previously healthy persons between the ages of 58 and 87; they all lived within a 16-square-mile area in northern Queens; and none of them had recently traveled. The only exposure that all eight patients shared was that they had all engaged in outdoor activities around their homes in the evening (Nash et al., 2001). Environmental sampling revealed the presence of Culex mosquito breeding sites and larvae in many of the patients’ yards and neighborhoods. Around the same time, local health officials observed an increase in dead birds, especially crows, in New York City. Officials at the Bronx Zoo noted that three captive-bred birds had apparently died from meningoencephalitis. Eventually, testing of the initial human cases and the dead birds indicated that their illnesses were due to West Nile virus, which had previously never been isolated in the Western Hemisphere (Fine and Layton, 2001). By the end of 1999, 62 cases of severe disease, including 7 deaths, had occurred in the New York area. In 2000, 21 cases, including 2 deaths, were reported in New York, New Jersey, and Connecticut. By 2002, nearly 4,000 cases of West Nile encephalitis had been reported in 39 states and the District of Columbia, with 254 deaths from the disease (CDC, 2003c).

West Nile virus belongs to a group of viruses that cause febrile illness usually lasting a week or less. Initial symptoms include fever, headache, malaise, arthralgia or myalgia, rash, and occasionally nausea and vomiting. Only a small percentage of cases develop encephalitis. West Nile virus has caused disease outbreaks in Egypt, Israel, France, Romania, and the Czech Republic, and is widespread in parts of Africa, the northern Mediterranean area, and western Asia. Birds are the primary reservoir of infection, which is spread to humans by the bite of an infected mosquito. The occurrence of West Nile virus in New York illustrates how easily pathogens can extend their geographic range. But how the virus was introduced into New York, how long it has been in the United States, how far its geographic range extends, and what its long-term impact will be on human and animal health are still unanswered questions. West Nile virus could have been introduced through travel by infected humans, importation of illegal birds or other domestic pets, avian flyways, or unintentional introduction of virus-infected ticks or mosquitoes. Other recent outbreaks of West Nile encephalitis have occurred in regions of the world where the disease was previously not found or only rarely found. In 1999, more than 480 suspected cases of West Nile virus, including 40 fatalities, were reported in the Volgograd region of Russia (Platonov et al., 2001). Both the Volgograd outbreak and an earlier 1996 outbreak in Romania were caused by viral strains genetically similar to the one that caused the New York outbreak, suggesting that the outbreaks were caused by close relatives and illustrating the ease with which pathogens can circumnavigate the globe and occupy new territories (Platonov et al., 2001; Platonov, 2001).

Public education is the key component of preventing West Nile virus infection. However, even after the highly publicized New York mosquito-borne outbreak, only 9 percent of those surveyed within the outbreak epicenter reported consistent use of personal preventive measures such as mosquito repellent; 70 percent reported never using mosquito repellent (Mostashari et al., 2001).
Cruise ship travel has increased dramatically. Nearly 7 million people took North American cruise vacations in 2001. Cruise ships bring together large numbers of both passengers and crew from diverse geographic origins, including countries with immunization requirements that often differ from those of the United States. Although a cruise may bring to mind days spent lounging on a sunny deck, many of the activities on cruise ships take place in closed spaces (e.g., dining rooms, movie theaters, lecture halls). The ships often stop at multiple ports, where passengers and crew may disembark and interact with people in the local environment before reboarding.
The ships may also pick up and drop off passengers along the way, thus further expanding the opportunities for mixing with microbes from other populations.
Cruise ships have a history riddled with infectious disease outbreaks, often related to contaminated food or inadequately treated water from international ports. Numerous recent reports include gastroenteritis caused by a variety of pathogens, ranging from Staphylococcus aureus to Shigella spp. to SRSV (small round-structured virus, formerly known as the Norwalk virus and now referred to as norovirus). Several recent outbreaks of gastroenteritis associated with norovirus occurred on consecutive cruises on the same ship and on different ships within the same company, suggesting that environmental contamination and infected crew members can serve as reservoirs of infection for passengers (CDC, 2002m). Cruise ships also have a history of infectious respiratory illnesses, including both influenza A and B (CDC, 2001i; Miller et al., 2000), Legionnaires’ disease (CDC, 1994b, 1994c), and tuberculosis (Penman et al., 1997). In 1997, rubella outbreaks occurred among crew members of two different commercial cruise lines (CDC, 1998d).
The features of cruise ships, especially ones that offer special services such as renal dialysis or medical care, have made them highly attractive vacation options for more infection-susceptible populations. For example, on a cruise ship that experienced an outbreak of influenza while cruising from Montreal to New York in 1997, 77 percent of 1,284 surveyed passengers were aged 65 and older, and 26 percent had risk factors other than age for complications of influenza. Indeed, 17 percent of the passengers and 19 percent of the crew members reported experiencing an acute respiratory illness during the cruise. The etiological agent was identified as influenza A/Sydney/5/97 (CDC, 1997c).
International trade in food and animal agriculture has increased markedly as an important aspect of globalization (see Figure 3-8). The United States and other countries now enjoy more goods from more countries than ever before. It is now possible to buy fresh produce at any time of the year, as well as a whole host of previously unattainable foods from areas around the world. Unfortunately, this wider array of product options brings with it the risk of cross-border transmission of infectious agents. Many species enter the United States each year as contaminants of commodities. Up to 70 percent of selected fruits and vegetables consumed in the United States come from developing countries during certain seasons (Osterholm, 1997). Agricultural produce, nursery stock, cut flowers, and timber can harbor insects, plant pathogens, slugs, and snails. Fish and shrimp pathogens and parasites have been introduced into the country through infected stock for
aquaculture. Crates and containers are potential carriers for snails, slugs, mollusks, beetles, and microorganisms. Military cargo transport can likewise bring harmful species into new settings. Ballast water that is released from ships as cargo is loaded or unloaded has been known to carry several destructive aquatic species (Animal and Plant Health Inspection Service, 1999b). Pathogens in meat and poultry, such as the agents of BSE and foot and mouth disease, can also be delivered unintentionally across borders. This is true as well for diseases not known to affect human health directly but capable of impacting on agriculture and animal health.
Moving live animals across borders can also bring regional diseases to new areas. For this reason, pet dogs and cats are subject to inspection at ports of entry for evidence of infectious diseases that can be transmitted to humans (CDC, 2001j). Reptiles are also popular pet imports; in 1997, more than 1.7 million reptiles were imported into the United States (Humane Society of the United States, 2001). The demand for rare and exotic reptiles constitutes a significant portion of a $3 billion annual illegal trade in plants and animals in the United States. Monkeys and other nonhuman primates may not be imported as pets; however, each year approximately 9,000 primates are imported for scientific and exhibition use.
The manufacture and distribution of goods within the United States have also undergone a transformation. It is now possible to ship products from coast to coast in less than 12 hours. In addition, companies have become more centralized in their production, processing, and distribution of goods. This is most evident in the food industry, with consumers now being able to enjoy fresher goods on their grocery shelves at reduced cost. Unfortunately, this rapid system of production and transport also moves contaminated products faster and more widely, producing foodborne outbreaks that are more difficult to detect. In the past, such outbreaks occurred locally and could be readily identified by epidemiologic surveillance methods. A foodborne outbreak in this era of globalization is more complicated and may involve multiple states with varying times of illness onset, thus making it more difficult to trace the illness back to its origin.
The United States remains a consumer-driven society, with food choices motivated by competing concerns over cost, safety, nutritional value, and convenience. For example, health concerns associated with a diet rich in animal fats have changed the diet of many Americans. From 1990 to 1999, per capita consumption of whole milk decreased 67 percent and that of red meat 11 percent. In contrast, consumption of fresh fruit increased 30 percent, that of fresh vegetables increased 63 percent, and that of poultry doubled.
BOX 3-13 Shigella sonnei: A Garnish of Parsley?
The impact of imported produce on the occurrence of foodborne disease was demonstrated in an outbreak of multidrug-resistant Shigella sonnei infection among patrons of two different restaurants in Minnesota in 1998 (CDC, 1999c). Isolates were resistant to ampicillin, trimethoprim-sulfamethoxazole, tetracycline, sulfisoxazole, and streptomycin. The isolates also had the same pulsed-field gel electrophoresis (PFGE) pattern, which was distinct from that of the endemic S. sonnei strains circulating in Minnesota at the time. Epidemiological investigation implicated chopped parsley as the vehicle. Six other outbreaks linked by PFGE and parsley-use characteristics were identified in California, Massachusetts, Florida, and Canada. Trace-back of the implicated parsley identified a single farm in Mexico as the likely source. Water used in cooling fresh-picked parsley had not been adequately treated and was susceptible to contamination. Two outbreaks of enterotoxigenic E. coli (ETEC) infection in Minnesota were also linked to parsley that may have originated at the same farm in Mexico.
Although imported parsley was the original source of contamination, handling of the produce at the restaurants contributed to the outbreaks. Each of the implicated restaurants chopped parsley, pooled it in containers, and sprinkled it over various foods and dishes as a topping or garnish. This holding of pooled, chopped parsley increased the available inoculum, making an infective dose more likely. In several restaurants, food workers became infected and worked while ill, providing a second amplifying mechanism. Restaurants also amplified the source contamination through cross-contamination of ready-to-eat foods.
To meet today’s year-round demand for fresh produce, the volume of produce imported from developing countries has increased dramatically. For example, 95 percent of green onions sold in the United States during 2000 were imported from Mexico. Green onions are susceptible to contamination by soil, water, or handling, and are typically served fresh as a garnish on salads or cooked foods. During the winter months, 25 percent of cantaloupe and 50 percent of tomatoes consumed in the United States are imported from Mexico. These examples of imported produce, as well as others, have been implicated as vehicles in foodborne outbreaks (see Box 3-13). In 1995, 1996, 1997, and 2000, outbreaks of cyclosporiasis were associated with consumption of raspberries from Guatemala. The outbreaks in 1995 were identified retrospectively, after the widespread outbreaks in 1996 had been linked to Guatemalan raspberries (Herwaldt and Beach, 1999; Herwaldt and Ackers, 1997; Koumans et al., 1998). In spite of detailed investigations, the sources of the contamination have never been identified.
Overall, changes in several factors at the national level have contributed to the changing epidemiology of foodborne diseases in the United States. These include changes in diet, new methods of food production, new infectious agents, new vehicles for previously recognized pathogens, and the increasing prevalence of persons who are immunocompromised (Hedberg et al., 1994). The spectrum of illnesses caused by the consumption of contaminated foods ranges from self-limiting mild gastroenteritis to life-threatening neurologic, hepatic, and renal syndromes. Data collected by FoodNet, the active surveillance network for foodborne disease in the United States, allow comprehensive estimates of disease burden since 1996. Data from this and other surveillance systems are also being used to evaluate the public health impact of changes in food safety policies, particularly in the regulation of the meat and poultry processing industries (CDC, 2002n; Liang et al., 2001). Data from these surveillance systems indicate that contaminated foods cause approximately 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths each year in the United States (Mead et al., 1999) at an estimated annual cost of $7–35 billion (Economic Research Service, 2001; Buzby and Roberts, 1997). More than 200 food-transmitted diseases are known (Bryan, 1982); in 82 percent of foodborne illnesses, however, the identity of the pathogen is unknown (IOM, 2002a). Of 1,500 deaths each year due to known pathogens, 75 percent are caused by Salmonella spp., Listeria monocytogenes, or Toxoplasma spp.
Expansion of the international trade in foodstuffs over the last two decades has made it difficult to screen comprehensively for the presence of dangerous microorganisms and has increased the scope and range of food-borne diseases. Theoretical concerns about the transmission of BSE prions from cattle to humans were tragically confirmed by the outbreak of vCJD in the United Kingdom (see Box 3-14). Fortunately, modern technology has made it possible to track foodborne pathogens across geographic expanses. For example, PulseNet, the molecular subtyping network for foodborne diseases, established standard protocols for pulsed-field gel electrophoresis (PFGE) and enabled public health laboratories to compare molecular patterns of microbes electronically (Swaminathan et al., 2001). These methods have been instrumental in a number of investigations of multistate outbreaks of foodborne illness.
Antibiotic resistance is a serious problem among foodborne pathogens. Over the past decade, the worldwide occurrence of resistance among Salmonella spp., Campylobacter spp., and toxin-producing Escherichia coli O157 has dramatically increased (Threlfall et al., 2000). It has been estimated that 1.4 million cases of illness from Salmonella and 2.4 million cases of illness from Campylobacter infection occur in the United States
every year (Mead et al., 1999); 26 percent of the Salmonella isolates and 54 percent of the Campylobacter isolates have been found to be resistant to at least one antimicrobial to which previous susceptibility was known. Although antibiotics are not always essential for salmonellosis treatment, they can be lifesaving in certain situations. Most severe Salmonella cases occur in children and the elderly and can be fatal. The global dissemination of the multidrug-resistant Salmonella typhimurium DT104 is of particular concern because of the severity of the illness it causes. Multidrug-resistant S. typhimurium DT104 has been associated with hospitalization rates twice those for other Salmonella infections, and its case-fatality rate is 10 times higher. S. typhimurium DT104 initially emerged in cattle in the late 1980s and can now be found in a variety of foods, including poultry, unpasteurized milk, meat, and meat products.
BREAKDOWN OF PUBLIC HEALTH MEASURES
A breakdown or lack of public health measures, such as adequate sanitation, immunizations, and tuberculosis control, has had a dramatic effect on the emergence and persistence of infectious diseases throughout the world. For example, infectious diseases have resurged in the former Soviet Union over the past decade because of the country’s enormous socio-economic upheavals and the fracturing of its health services that has resulted from poor funding for treatment, vaccine prophylaxis, and health education (Netesov and Conrad, 2001; Coker, 2001). Even the United States has had difficulties maintaining adequate supplies of vaccines in recent years, and immunization rates for adults are still far below national targets. Furthermore, some public health measures, such as public health laws, need to be updated to ensure legal authority for epidemic disease control in the current political climate.
Inadequate Sanitation and Hygiene
Poor sanitary conditions and a lack of proper hygiene contribute to the transmission of many infectious diseases. To date, one of the most significant shortages worldwide is that of potable water. This shortage has implications for the transmission of numerous infectious diseases, such as cholera. Squalid living conditions with overcrowding and the presence of vermin also contribute to the spread of infections, such as plague. Even in the United States, poor infection control practices in hospitals have resulted in high rates of nosocomial infection.
BOX 3-14 Transmissible Spongiform Encephalopathies: From Herd to Mortality
Transmissible spongiform encephalopathies (TSEs) are a family of diseases of humans and animals that are uniformly fatal, causing irreversible cumulative brain damage. In humans, the most common TSE is Creutzfeldt-Jakob disease (CJD), which was first identified in 1921; others include Gerstmann-Straussler-Scheinker syndrome, kuru, fatal familial insomnia, and now, variant Creutzfeldt-Jakob disease (vCJD). Animal TSEs include bovine spongiform encephalopathy (BSE), scrapie, transmissible mink encephalopathy, and chronic wasting disease of American mule deer, white-tail deer, and elk.
BSE, popularly known as “mad cow disease,” wreaked havoc on the livestock industry in the United Kingdom when a 1986 epidemic led to the death of nearly 200,000 cattle. The outbreak is believed to have originated from an endemic spongiform encephalopathy of sheep (i.e., scrapie) that entered the cattle food chain via nutritional supplements manufactured from infected sheep carcasses. Nearly 4.5 million asymptomatic cattle were slaughtered, substantially impacting numerous industries that manufacture bovine-derived products. But perhaps most alarming, BSE has recently been associated with vCJD, a rare and fatal human neurodegenerative condition that was first described in March 1996 and linked to beef products contaminated with central nervous system tissues from BSE-infected cattle.
By 1996, 10 cases of vCJD had been reported (Will et al., 1996). Early symptoms included a form of depression or a schizophrenia-like psychosis; half of the case patients experienced unusual sensory symptoms, such as “stickiness” of the skin. As the illness progressed, cases suffered various neurological problems, such as unsteadiness, difficulty walking, and involuntary movements. By the time of death, case patients were completely immobile and mute. The most notable clinical differences between the more common CJD and vCJD are that the latter affects predominantly younger people (mean age at death is 29 versus 65 for CJD) and lasts longer (median of 14 months versus 4.5 months for CJD). As of April 2002, 117 cases of vCJD had been reported in the United Kingdom, 6 in France, and 1 each in the Republic of Ireland and Italy (CDC, 2002w).
The nature of the causal agent for vCJD, and TSEs in general, is still a matter of debate. According to a leading theory, the disease agent is composed largely, if not entirely, of a self-replicating protein known as a prion, so that the TSEs are now sometimes referred to as prion diseases. Prions are transmissible protein particles that are devoid of nucleic acid and consist only of modified protein. According to the prion theory, the brain and other organs of mammals (at least those examined thus far) contain a normal version (PrPC) of the pathological protein form (PrPsc) that makes up a prion. When an animal or human becomes infected with PrPsc, the invading protein somehow converts normal protein molecules into toxic ones by causing the proteins to change shape. Despite strong evidence in support of the prion theory (Prusiner, 1995), a few scientists still believe that the ability of the TSE agent to form multiple strains is better explained by a DNA-containing, virus-like agent. To date, no such agent has been found.
Many scientists are keeping a watchful eye on chronic wasting disease (CWD), a fatal neurological illness of farmed and wild deer and elk (Animal and Plant Health Inspection Service, 2002). It was first identified in 1967, although researchers did not determine it to be a TSE until 1978. The first case was diagnosed in Colorado in 1981. By 2001, CWD had been detected in deer and elk in Wyoming, Nebraska, South Dakota, Canada, Wisconsin, New Mexico, Oklahoma, Montana, Kansas, and Illinois (Animal and Plant Health Inspection Service, 2002). Although the most obvious and consistent clinical sign of CWD is weight loss over time, many infected animals also show behavioral changes, including decreased interaction with other animals, listlessness, lowering of the head, blank facial expression, and repetitive walking in set patterns. No evidence linking CWD to any disease in humans or domestic animals had been detected as of 2002, although studies seeking such evidence have been limited.
Cholera is an acute intestinal infection caused by the bacterium Vibrio cholerae, which is spread through contaminated food and water. The infection often results in mild symptoms but can sometimes cause severe, life-threatening diarrhea. The bacterium survives and multiplies outside the human body; it can then spread rapidly in human populations where living conditions are crowded and unprotected water sources are in close proximity to fecal repositories. These conditions are not uncommon in poor countries and in many refugee camps. One refugee camp in the Democratic Republic of Congo experienced a major cholera epidemic in 1994, in which an estimated 58,000–80,000 cases and 23,800 deaths occurred within a month as a result of the consumption of untreated lake water; crowding; poor personal hygiene; and inadequate sanitation, due in part to the rocky volcanic nature of the soil in the area, which made digging latrines almost impossible (GOMA Epidemiology Group, 1995). Bathing in the river, long distances to a water source, and consumption of dried fish were significantly associated with the risk of cholera during the 1997 epidemic in southern Tanzania (Acosta et al., 2001).
Cases of cholera are rare in industrialized nations, where modern sewage and water treatment systems exist. In the United States, cholera was prevalent in the 1800s but has now been virtually eliminated. Between 1965 and 1991, just 136 cases were reported to CDC. From 1992 through 1994, the number jumped to 160; half of these were among airline passengers traveling from Latin America (75) and cruise ship passengers from Southeast Asia (5) (Mahon et al., 1996). Although all regions of the world reported a reduction in the total number of cholera cases between 1999 and 2000, in 2000 WHO received reports of 137,071 cases from 56 countries,
which resulted in 4,908 deaths (WHO, 2001f). Africa accounted for 87 percent of the global total. Asia, with a three-fold decrease, still reported 11,246 cases. Central and South America reported 3,101 cases and 40 deaths, despite large decreases in Brazil, Ecuador, Guatemala, and Nicaragua. The actual number of cholera cases is believed to be higher in all countries because of underreporting and other surveillance system limitations.
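The relationship between reported cases and deaths can be summarized as a case-fatality rate, deaths per 100 reported cases. A minimal sketch of the calculation, using the WHO-reported 2000 figures above (the function name is illustrative, not part of any surveillance system):

```python
def case_fatality_rate(cases: int, deaths: int) -> float:
    """Case-fatality rate as a percentage: deaths per 100 reported cases."""
    if cases <= 0:
        raise ValueError("case count must be positive")
    return 100.0 * deaths / cases

# Global cholera figures reported to WHO for 2000 (WHO, 2001f)
cfr_2000 = case_fatality_rate(cases=137_071, deaths=4_908)
print(f"Global cholera CFR, 2000: {cfr_2000:.1f}%")  # roughly 3.6%
```

Because reported counts understate true incidence, a rate computed this way reflects only the surveillance data, not the true burden.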
The seventh and most recent global cholera pandemic began in 1961 in Celebes, Indonesia, caused by V. cholerae O1, biotype El Tor. It spread rapidly to surrounding countries of eastern Asia and reached Bangladesh in 1963; India in 1964; and the Soviet Union, Iran, and Iraq in 1965–1966 (WHO, 2000d). In 1970, cholera was found in West Africa, where it eventually proliferated and became endemic throughout most of the continent. It then spread to Latin America in 1991, and within the year had reached 11 countries in this region. By the 1990s, cholera had become endemic to Latin America, Africa, and Asia, where periodic countrywide epidemics continued into the twenty-first century.
In 1992, large outbreaks of cholera began in India and Bangladesh that were caused by a previously unrecognized serogroup of Vibrio cholerae, designated O139, synonym Bengal. The epidemic was widespread in the Asiatic continent, with imported cases reported from developed countries. Some regarded this as the eighth cholera pandemic, although the epidemic remained confined to Bangladesh and India. After 1992, V. cholerae O1 was again epidemic in that region (Seas and Gotuzzo, 2000).
Thanks to modern sanitation and the availability of antibiotics and pesticides, another occurrence of the Black Death due to plague appears unlikely. However, isolated cases of plague are still reported in various parts of the world, including the United States, and plague outbreaks are still possible where wild rodent populations are persistently infected with the plague bacillus. Such regions include the western United States and parts of South America, Africa, and Asia. The last great plague epidemic occurred in the early twentieth century in India and resulted in more than 10 million deaths.
Plague is caused by the bacterium Yersinia pestis, which is transmitted to humans by fleas whose primary hosts are rodents; rats, ground squirrels,
and rabbits, as well as the occasional house cat, can also harbor infected fleas. Bubonic plague, the most common form of the disease, is acquired directly from the bite of an infected flea. Bubonic plague derives its name from characteristic swollen lymph nodes (called “buboes”) in the groin, axilla, and neck areas. Pneumonic plague, which is less common, can develop from the bubonic form and is spread directly from person to person by the respiratory route. Untreated bubonic plague is fatal in half of all cases; untreated pneumonic plague is invariably fatal.
Crowding and poor sanitation in or near areas where rodent plague is endemic are ideal conditions for the emergence of this devastating bacterial illness. Squalid conditions were a major factor in emergence during a 1994 outbreak of plague in India (Ramalingaswami, 1996). From August to October of that year, India reported to WHO a total of 693 suspected cases of bubonic or pneumonic plague with positive test results for antibodies to Yersinia pestis. Nationwide, 56 of the reported plague cases were fatal (CDC, 1994d). Most of the cases were from the area of Surat, a city of 2.2 million people who generate close to 1,250 metric tons of garbage daily, 20 percent of which is left uncollected and has led to a dramatic growth in the city’s rat population.
Exacerbating this problem in sanitation were two natural disasters. First, floodwaters inundated the low-lying slum areas near the river during a 1994 monsoon. Surat residents complained that nothing was done to remove the great piles of rubbish that remained after the floodwaters receded, providing an ideal habitat for rats. A year before the plague incident, an earthquake measuring 6.4 on the Richter scale had hit the adjacent state of Maharashtra, killing at least 10,000 people and causing extensive damage. Researchers believe that the disturbances and resettlement associated with the earthquake helped drive the wild rodent population inhabiting the forested area near Surat into contact with the domestic rat population, thus introducing the disease into the latter rat population (World Resources Institute, 1996). In financial terms, the plague cost the Indian economy in excess of $600 million. More than 45,000 people canceled travel plans to India, and the country’s hotel occupancy rate dipped by half. Many countries stopped air and ship traffic to India altogether. In total, exports from the country suffered a $420 million loss.
Nosocomial (hospital-acquired) infections are a serious problem worldwide. According to WHO, at any given time more than 1.4 million people worldwide suffer from infectious complications acquired in hospitals (WHO, 2002f). Nosocomial bloodstream infections are a leading cause of
death in the United States (Edmond et al., 1999; Wenzel and Edmond, 2001). CDC estimates that each year nearly 2 million patients in the United States acquire infections in hospitals, and about 90,000 of these patients die as a result (CDC, 2002o).
A number of factors drive the development of nosocomial infections. Foremost in the developed world are advances in health care technology (discussed earlier), for several reasons. The first is improved survival of vulnerable populations, such as the elderly; infants of very low birth weight; and cancer, AIDS, and transplant patients. These individuals are susceptible to germs that would not be harmful to healthy people. Second, greater numbers of invasive procedures are being performed, such as the placement of indwelling catheters and feeding tubes, mechanical ventilation, transplantation, and the implantation of prosthetic devices; these give microorganisms more direct access to patients’ bloodstreams. Finally, widespread use of antimicrobial drugs in hospitals is producing more drug-resistant organisms that are increasingly difficult to treat.
In addition, overcrowded conditions within hospitals and a lack of proper sanitation and hygiene contribute greatly to the transfer of microbes. Studies have shown that potentially pathogenic organisms can be passed on to patients from unclean stethoscopes (Marinella et al., 1997), lab coats (Wong et al., 1991), environmental surfaces, and latex gloves (Ray et al., 2002). However, cross-transmission of microorganisms by the hands of health care workers is considered the main route for the spread of pathogens in hospitals (Pittet et al., 1999). Thus, simple handwashing practices remain the most important preventive measure. Unfortunately, many hospitals are unable to maintain an adequate level of handwashing among health care workers (Vicca, 1999; Saade et al., 2001; Doebbeling et al., 1992; Jarvis, 1994).
Nosocomial outbreaks of Lassa fever and Ebola viral hemorrhagic fever in Africa illustrate the additional complexities of preventing hospital-acquired infections in developing countries. Lassa fever spread in Nigeria in 1989 because scant resources led to needle sharing and reuse of disposable equipment. Overuse of parenteral treatments, inadequate surgical facilities, and poorly trained personnel also fueled the spread of the virus among patients and health care providers (Fisher-Hoch et al., 1995). Similarly, in the absence of appropriate precautions to prevent exposure to blood and other body fluids, hospital outbreaks of Ebola viral hemorrhagic fever in Zaire in 1995 passed from patients to health care workers and to family members who provided nursing care (CDC, 1995b) (see Box 3-15).
Hospitals are perfect breeding grounds for transferring infections among patients, health care providers, and the community. Patients in intensive-care units (ICUs) are at particularly high risk for nosocomial infections as a result of their underlying illness, the multiple invasive proce-
BOX 3-15 Ebola Virus
The 1995 Ebola virus outbreak in Zaire is an example of the biocomplexity of the emergence of a microbial threat to health. Even though the animal reservoir for the Ebola virus has yet to be discovered, the nosocomial spread of the virus from human to human has had deadly consequences. The outbreak began on April 4, 1995, when a hospital laboratory technician in Kikwit, Zaire, experienced the onset of fever and bloody diarrhea. One week later, he underwent surgery for a suspected perforated bowel. A few days later, medical personnel employed in the hospital to which he had been admitted in Kikwit developed similar symptoms. One of the ill was transferred to a hospital in Mosango (75 miles west of Kikwit); there, several days later, personnel who had provided care for this patient began experiencing similar symptoms as well (CDC, 1995a).
On May 9, 1995, blood samples from 14 of the acutely ill persons in Zaire arrived at CDC in Atlanta and were processed in the biosafety level 4 laboratory. Every sample tested positive for the Ebola virus. After sequencing the virus glycoprotein gene, CDC determined that the newly emerged virus was closely related to the Ebola virus that had been isolated during an outbreak of viral hemorrhagic fever in northern Zaire and southern Sudan in 1976 (the first time the Ebola virus had ever been isolated in humans). In 1967, an outbreak of the closely related Marburg virus had occurred in Marburg, Germany, where laboratory workers were exposed to infected tissue from monkeys imported from Uganda. Ebola and Marburg are the only two known members of the filovirus family. Other reports of filovirus infections or outbreaks have been a single case of infection from a newly described Ebola virus in Côte d’Ivoire in 1994, and a 1989 outbreak of yet another Ebola virus (not associated with human disease) in monkeys that had been imported into the United States from the Philippines. Among all reported human outbreaks, 50 to 90 percent of cases have been fatal.
By June 1995, public health authorities had identified 296 persons with viral hemorrhagic fever attributable to Ebola virus infection in Zaire; the case-fatality rate was 79 percent (CDC, 1995b). The median age of infected persons was 37 years, with a range from 1 month to 71 years. Transmission of the virus from person to person occurs through close contact with infectious blood or other body fluids or tissues; secondary cases have occurred among persons providing medical care for patients and among patients exposed to reused needles. Although aerosol spread of the virus has not been documented among humans, it has been demonstrated among nonhuman primates (Peters et al., 1991; Jaax et al., 1995).
dures performed on them, and their typically older ages. According to the National Nosocomial Infections Surveillance (NNIS) System, from 1997 to 1999, urinary catheter–associated urinary tract infection (UTI) rates were highest in medical (nonsurgical) ICUs (6.5 UTIs per 1,000 days a catheter was used); central line–associated bloodstream infection (BSI) rates were highest in pediatric ICUs (7.7 BSIs per 1,000 days a central line was used); and ventilator-associated pneumonia rates were highest in surgical ICUs
(13.0 cases of pneumonia per 1,000 days a ventilator was used) (CDC, 2000e).
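The NNIS rates above use device-days as the denominator: the number of infections divided by the total number of days the device was in place across all patients, scaled to 1,000. A minimal sketch of that calculation (the function name and the example counts are made up for illustration; they are not NNIS data):

```python
def device_associated_rate(infections: int, device_days: int) -> float:
    """Infections per 1,000 device-days, the denominator NNIS uses
    for catheter-, central line-, and ventilator-associated rates."""
    if device_days <= 0:
        raise ValueError("device-days must be positive")
    return 1000.0 * infections / device_days

# Hypothetical ICU: 13 catheter-associated UTIs over 2,000 catheter-days
rate = device_associated_rate(infections=13, device_days=2_000)
print(f"{rate:.1f} UTIs per 1,000 catheter-days")  # 6.5
```

Using device-days rather than patient counts adjusts for exposure: an ICU whose patients are catheterized longer accumulates more device-days, so its rate is comparable to that of a unit with shorter catheterizations.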
The nosocomial transfer of antimicrobial-resistant organisms to patients remains a significant concern. Looking specifically at ICU patients with nosocomial infections in 2000, NNIS found that more than 55 percent of Staphylococcus aureus isolates were resistant to methicillin, oxacillin, or nafcillin. Methicillin-resistant S. aureus increased 29 percent in 2000 as compared with the mean of the previous 5 years (1995–1999). Resistance of Pseudomonas aeruginosa to quinolones increased 53 percent during the same period in the same population; vancomycin-resistant enterococci increased 31 percent (National Nosocomial Infections Surveillance System, 2001).
Vaccine-preventable diseases still cause millions of deaths each year, mainly in developing countries. Indeed, nearly 3 million people worldwide die annually from major vaccine-preventable diseases.
Childhood immunizations have contributed to a significant reduction in vaccine-preventable diseases, including measles, mumps, rubella, pertussis, and other potentially devastating illnesses. Yet there remains a sizable segment of the population, both in the United States and internationally, that does not receive childhood vaccinations. Several reasons may account for low vaccination rates, including misperceptions about the actual risk for a given disease, concerns about the safety of vaccines due to widely publicized but unsubstantiated claims of adverse effects, and the absence of a health infrastructure for vaccine purchase and delivery. In Africa, for example, only 55 percent of the population had vaccination coverage for polio and measles in 2000 (WHO, 2001g). That same year, the Americas vaccinated 90 percent of the population against both diseases.
Even in the United States, vaccine-preventable diseases cause staggering numbers of deaths and illnesses. For example, pneumonia and influenza deaths together constitute the sixth leading cause of death in the United States. Influenza causes an average of 110,000 hospitalizations and 20,000 deaths annually; pneumococcal disease causes 10,000 to 14,000 deaths annually (DHHS, 2000b). The annual estimate of influenza vaccination for adults aged 65 and older (those at increased risk for severe disease or death from influenza) is roughly 65 percent. Moreover, the percentage of older adults reported as ever having received a pneumococcal vaccination is even lower. The Healthy People 2010 target is to increase to 90 percent the proportion of noninstitutionalized adults aged 65 and older who are vaccinated annually against influenza and who have ever been vaccinated against pneumococcal disease (DHHS, 2000b).
Reported annual measles cases worldwide declined by almost 40 percent between 1990 and 1999. Nonetheless, an estimated 30 million to 40 million cases occurred in 2000, resulting in approximately 777,000 deaths (WHO and UNICEF, 2001). Measles accounts for nearly half of the 1.7 million annual deaths due to childhood vaccine-preventable diseases. In 1999, the African continent accounted for 58 percent of all estimated measles cases in the world, 87 percent of which were located in the west and central regions (WHO, 2000g). In southern Africa, the number of reported measles cases decreased from over 50,000 annually to 100 in 1999 because of intensive vaccination campaigns (WHO, 2002k). In the Americas, countries that have adequately implemented the strategies recommended by the Pan American Health Organization/WHO have successfully inhibited measles transmission.
The number of reported cases in the United States plummeted from approximately 500,000 before vaccine introduction in 1963 to fewer than 1,500 in 1983 (Bellini and Rota, 1998). Despite this progress, however, a resurgence of measles occurred in the United States between 1989 and 1991, causing more than 55,000 cases and approximately 120 measles-associated deaths. Most of the cases occurred in children under 5 years of age. Of those who lost their lives during this epidemic, 90 percent had not been vaccinated (CDC, 2001k). As a result of successful vaccination efforts, the number of reported measles cases dropped to fewer than 100 in 2000 (CDC, 2002p). Although it was announced in 2000 that measles was no longer endemic in the United States (CDC, 2002q), a continued risk remains for internationally imported measles cases that could result in indigenous transmission.
Diphtheria had been well controlled in Russia for two decades following the initiation of a universal childhood immunization program in the late 1950s. In the 1970s, however, the number of cases began to rise (see Figure 3-9). The epidemic of diphtheria in the Newly Independent States of the former Soviet Union in the early 1990s marked the first large-scale diphtheria epidemic in industrialized countries in three decades (Vitek and Wharton, 1998). In 1993, the number of reported diphtheria cases surged to nearly 20,000, occurring primarily throughout urban Russia, the Ukraine, Belarus, and the Baltics. By 1994, the number of reported diphtheria cases had surged to 50,412 in the Newly Independent States, with Russia accounting for 83 percent of the cases. In 1994, epidemic diphtheria was reported for
all states except Estonia, where most of the adult population had been vaccinated in 1985–1987 (Vitek and Wharton, 1998).
The large increase in diphtheria cases was due mainly to low vaccination coverage for large segments of the population, which resulted from changes in vaccination policy to fewer doses of vaccine with lower antigenic content. In addition, public support for vaccination programs fell as the level of disease diminished, and a vocal anti-immunization movement received favorable press coverage in an atmosphere of increased distrust of the government. Resurgence was also associated with a change in the predominant circulating biotype of the diphtheria-causing organism, a decline in the standard of living resulting from the dissolution of the Soviet Union, and a high level of population movement. Starting in 1993, a wide-scale vaccination campaign was implemented, so that by 1999, diphtheria rates had returned to the levels recorded in the early 1990s (Netesov and Conrad, 2001).
The worldwide persistence of tuberculosis (TB) has been due chiefly to the neglect of its control by governments, poorly managed TB control programs, poverty, population growth and migration, and a significant
increase in TB cases in HIV-endemic areas (WHO, 2002g). Lack of diagnosis of cases, together with poor adherence to antituberculosis medications among those who are diagnosed, are major barriers to eradicating tuberculosis. Insufficient treatment can lead to prolonged infection and communicability, drug resistance, and relapse of the disease, all of which pose serious health threats to those infected and their communities (Volmink and Garner, 2001).
Directly observed therapy (DOT) was introduced 40 years ago as a means of ensuring that patients with tuberculosis would complete their treatment regimens. The approach simply involves supervised swallowing of medications. However, many DOT programs have become much more comprehensive in an effort to improve therapeutic adherence. In the United States, these programs include such services as assistance with housing, food subsidies, transportation, and social support (Volmink et al., 2000). Because programs differ with regard to the services provided and the enthusiasm of their employees, it is difficult to discern which factors account for the success of a DOT program.
WHO promotes another version of DOT called Directly Observed Therapy, Short Course (DOTS) that focuses on tropical and low-income countries. DOTS involves five elements: (1) government commitment to sustained TB control activities; (2) improved laboratory detection; (3) a standardized short course of treatment with direct observation for at least the initial two months; (4) a free, uninterrupted drug supply; and (5) a reporting system that documents patient progress and allows assessment of the program (WHO, 1999d). The number of countries using DOTS expanded from 10 in 1990 to 148 in 2000 (WHO, 2002h); however, only 27 percent of the estimated total cases of TB worldwide were treated under DOTS in 2000 (WHO, 2002i). While DOTS includes a number of useful components, the available evidence does not provide strong support for the routine adoption of this program over self-administration of treatment (Volmink and Garner, 2001).
Targets for global TB control were ratified by the World Health Assembly. They include (1) successfully treating 85 percent of smear-positive TB cases and (2) detecting 70 percent of all such cases (WHO, 2002h, 2000e). These goals had not been attained by the end of 2000, so the target year was reset to 2005.
Control of Vector-borne and Zoonotic Diseases
One major factor in the resurgence of vector-borne and zoonotic diseases is the decreased support for and deterioration of public health infrastructure. Continued surveillance and control programs for these diseases are expensive. When budget shortfalls occur, these programs are especially
vulnerable to funding reductions or elimination, particularly when the diseases are perceived to be controlled. This is as true in the United States as it is in developing countries. The sporadic and epidemic nature of many vector-borne diseases has resulted in the closure of many state programs and laboratories. A number of states affected by West Nile virus previously had robust arbovirus programs, excellent laboratory facilities, and concomitant vector biology expertise, all of which had been dismantled. With the emergence of West Nile virus, CDC’s Division of Vector-borne Infectious Diseases had to institute capacity-building programs for the states and emergency training programs in arbovirus surveillance and medical entomology (CDC, 2001m). A similar training program had to be instituted for mammalogists and field biologists following the emergence of Sin Nombre virus in the American Southwest.
Societal perceptions have also complicated the control of vector-borne diseases, especially the use of pesticides for vector control. The emergence of West Nile virus in New York resulted in a recommendation to spray pesticides to control adult mosquitoes in the face of the impending epidemic. This action was resisted by many people in the New York City area because of the perceived dangers of pesticide exposure. Negative public perceptions of pesticides may still persist from the consequences associated with the historical use of DDT to control vectors. Although DDT achieved a major public health success in controlling malaria in highly endemic areas, recognition that its indiscriminate use led to accumulation of the pesticide in nature and to detrimental effects on nontarget organisms resulted in a total ban on its use in many countries. Yet these environmental problems stemmed from agricultural uses of DDT, not from public health usage to treat the walls of homes for vector control. New, more expensive, less stable, and less effective pesticides have replaced DDT for vector control in many countries. Unfortunately, the widespread halt in DDT usage has coincided with a resurgence of malaria in the Americas and elsewhere (Roberts et al., 1997; Attaran et al., 2000), as well as with increases in other diseases, such as leishmaniasis and dengue (see Box 3-16) (Gratz, 1999). These problems, together with the development of resistance to alternative pesticides in targeted vector populations, have renewed the use of DDT for domicile treatment to control malaria in several countries (see the discussion of vector-borne and zoonotic disease control in Chapter 4).
Antiquated Public Health Laws
Public health laws enacted for the control of infectious diseases are rooted in colonial history. Early state and local governments recognized their responsibility to safeguard the public’s health from infectious diseases (e.g., smallpox, cholera, plague) by promulgating sanitary regulations, ordering the abatement of public health nuisances (e.g., infested premises), and enforcing disease control measures (e.g., isolation, quarantine, vaccination). Shifting priorities as new diseases emerged help explain why infectious disease laws developed in piecemeal fashion. Over time, some public health laws were updated or modified as medical knowledge (e.g., testing, screening, treatment) and epidemiology advanced. Many public health laws, however, have not been regularly reviewed or revised since the middle of the twentieth century (Gostin et al., 1999).
State laws regarding infectious diseases are often fragmented and inconsistent (embodying differing approaches among the states) and inadequate (failing to provide the powers that are needed) (Gostin, 2001a). Outdated state laws do not support, and may even thwart, effective public health surveillance and interventions. Consequently, the DHHS (2000b) and the IOM (1988) have recommended reforming laws to ensure appropriate legal preparedness for naturally occurring infectious diseases and the intentional use of a biological agent. A corpus of strong public health laws is critically important to ensure such preparedness. Such laws afford public health officials essential powers, such as screening, reporting, vaccination, treatment (including DOT), and isolation or quarantine. These and other powers are constitutionally acceptable when necessary to protect the public’s health if performed with procedural guarantees (Gostin, 2001b, 2002a).
Modern public health laws also enable authorities to prevent disease transmission by ensuring healthy conditions. Inspections of private premises, nuisance abatements, and business regulations can help avert and control epidemics. Legal responses to the recent emergence of West Nile virus are illustrative. Many public health authorities asked property owners to eliminate standing bodies of water that, as discussed previously, encourage the breeding of mosquitoes that transmit the virus. Inspections of public and private premises were performed. If private property owners failed to eliminate sources of standing water, authorities entered their property to abate the public health nuisance. Likewise, recognizing that stockpiles of discarded tires provide optimal conditions for breeding mosquitoes, the New York state legislature drafted regulations to require tire retailers to practice safe disposal and recycling (Hodge, 2002).
Public health laws also authorize the collection and analysis of public health information through surveillance and epidemiologic investigations. States have enacted reporting requirements for specified infectious diseases, often pursuant to guidance from CDC (see the discussion in Chapter 4). The collection and use of identifiable health data for public health purposes raise individual privacy concerns. Courts generally allow the collection of such data provided that adequate safeguards are in place. New federal privacy regulations pursuant to the Health Insurance Portability and Accountability Act explicitly permit public health data collection if consistent with state law.
New and emerging infectious disease threats, especially the threat of bioterrorism, have led federal, state, and local governments to reexamine public health laws addressing infectious disease control. Antiquated laws are increasingly being reviewed and updated to eliminate inconsistencies and reflect current medical knowledge and legal and ethical norms. By 2002, thirty-six states had introduced bills and twenty states had enacted legislation based on the Model State Emergency Health Powers Act, which consolidates public health powers to enable authorities to respond effectively to bioterrorism and other public health emergencies (Gostin et al., 2002).
Another project to develop a comprehensive model state public health act is currently under way, sponsored by the Robert Wood Johnson Foundation’s Turning Point initiative. These and other legal tools may lead to significant reform of public health laws.
BOX 3-16 The Breakdown of Vector Control
The emergence of epidemic dengue and dengue hemorrhagic fever and shock syndrome (DHF-SS) in the Americas illustrates the dire consequences of the demise of public health control capacity and expertise in vector-borne diseases. More than 2.5 billion people are at risk for dengue virus infection; 50 million cases are estimated to occur annually; and the incidence of DHF-SS continues to increase throughout the world (Gubler, 2002). The major urban vector of dengue viruses is the mosquito Aedes aegypti, whose anthropophilic and endophilic behavior makes it an unparalleled vector. The abundance and distribution of Ae. aegypti are directly linked to the presence of breeding sites (e.g., water storage vessels, discarded cans, tires, or other containers that hold water long enough for the vectors to hatch, develop, and emerge). This in turn greatly complicates control efforts, especially source reduction programs designed to eliminate larval breeding sites. Rapid population growth and urbanization, much of it unplanned, have contributed greatly to the dramatic increase in Ae. aegypti abundance (Gubler, 2002; Gratz, 1999).
The dramatic change in the epidemiology of dengue and DHF-SS in the Americas has been the subject of much speculation. Prior to the 1980s, only one or two of the serotypes of dengue circulated in the Americas, and DHF-SS was nonexistent. This was in sharp contrast to Southeast Asia, where all four dengue serotypes cocirculated, and DHF-SS was (and remains) a major public health problem. Many factors condition the differing epidemiology of dengue in the two parts of the world, including the virulence of the viruses (Gubler, 2002). However, there is no doubt that one of the major factors contributing to the emergence of DHF-SS in the Americas was the resurgence of Ae. aegypti in tropical and subtropical cities, concomitant with rampant and unplanned urbanization (see the accompanying figure).
In the 1950s and 1960s, the Pan American Health Organization (PAHO) and participating Western Hemisphere countries established a remarkably effective program to eradicate Ae. aegypti (Gubler, 2002; Gratz, 1999). The major impetus for this effort was the desire to preclude the emergence of sylvatic yellow fever into urban populations, which remains a major concern today. Many countries (the United States being one notable exception) were remarkably successful in this regard. Ironically, the success of the programs led to their demise, and the resources and infrastructure needed to support these efforts were soon shifted to other priorities. Now Ae. aegypti is hyperabundant throughout the Americas, and concomitantly all four dengue virus serotypes (including the virulent Asian genotypes associated with DHF-SS) are cocirculating in the region (Beaty, 2000).
Distribution of Aedes aegypti (shaded areas) in the Americas in the 1930s, before the mosquito eradication program; in 1970, at the end of the mosquito eradication program; and in 2003 in the absence of effective mosquito control.
POVERTY AND SOCIAL INEQUALITY
As we enter the twenty-first century, mortality from infectious diseases is correlated more closely than ever before with transnational inequalities in income (Houweling et al., 2001). Countries throughout the developing world and the former communist bloc have been embracing the market-based policies advocated by Europe and North America. The result has
been extensive privatization of state-owned banks and companies, creation of new stock and bond markets, and exposure of economies to foreign investment and capital. These global economic trends affect not only the personal circumstances of those at risk for infection, but also the structure and availability of public health institutions. The structure of health care delivery, in turn, profoundly affects the ability of high-risk populations to pursue health care. For example, the transmission of illnesses such as TB, which nearly disappeared in affluent countries after the introduction of effective antimicrobial therapy in the 1950s, has continued to rise in poor countries. It is not coincidental that many of the latter countries have been hardest hit by microbial threats to health such as HIV, dengue, drug-resistant TB, and malaria (Murray and Lopez, 1997).
The relationship between infectious diseases and economic development has been of increasing interest to scholars and practitioners in a variety of fields (Sen, 1999; Gwatkin et al., 1999; Whitehead Institute, 2002). Studies within the fields of social epidemiology and medical anthropology have illustrated the relationships between large-scale social patterns, such as poverty (see Box 3-17), and clinical, epidemiological, and even biological phenomena (Farmer, 1996; Berkman and Kawachi, 2000; Schoepf, 2001). Public health economists have also identified trends correlating health with resource distribution (Gwatkin, 2000; Gwatkin et al., 1999; Castro-Leal et al., 2000). It is important to note that the arrow points in both directions: not only do infectious diseases have significant and far-reaching economic implications, but poverty and social inequality in and of themselves are major factors in disease emergence (Bloom and Sachs, 1998; Eandi and Zara, 1998; Bhargava et al., 2001; WHO, 2002j; Dixon et al., 2002).
Socioeconomic status is often implicated in public health trends that might appear at first glance to be unrelated. For example, chronic infection with hepatitis B virus has been associated with low educational attainment, lower social stratum, and crowded urban residence. Meanwhile, hepatitis B virus has been implicated as a major etiological agent of liver cancer, suggesting that even the latter condition may have an economic determinant (Stuver et al., 1997). Socioeconomic status has also been identified as a factor contributing to the escalating problem of worldwide antimicrobial resistance (Okeke et al., 1999). In 1990, developing countries spent $41 per person on health, compared with $1,500 per person in industrialized countries. This underfunding creates a chronically inadequate or erratic supply of drugs, which, combined with several other factors, such as transportation costs and the high costs of medical treatment and drugs, leads to poor patient compliance. Poor compliance, in turn, leads to the emergence of antimicrobial resistance, as has been learned all too well from the emergence of multidrug-resistant TB. Even where excellent country-level TB control programs are in place, treatment efficacy in poor communities can be undermined by inappropriate cost recovery policies and the remoteness of secondary or tertiary care facilities, potentially leading to the emergence or spread of drug resistance through inadequate therapy (Kim et al., 1999). The susceptibility of poor populations to disease is governed by a number of identifiable conditions, including malnutrition, lack of access to clean water and sanitation (as discussed earlier), poor-quality housing, ignorance of preventive measures, absence of social agencies to teach the avoidance of risky behaviors, lack of adequate transportation to and from health care facilities, and limited funds for out-of-pocket expenditures (WHO, 2002j).
BOX 3-17 World Poverty Statistics
Although extreme poverty declined worldwide in the 1990s, from 28 percent in 1987 to 23 percent in 1998 (World Bank Group, 2002), 2.8 billion people are still living on less than US$2 a day and 1.2 billion on less than US$1 a day (see the figure below). The average income in the richest 20 countries is 37 times the average in the poorest 20, a gap that has doubled over the past 40 years (World Bank, 2001).
Within and among regions, income disparities became more pronounced during the 1990s. Countries in transition to a market economy also saw a sharp increase in inflation and poverty. Moldova, one of the poorest countries in Europe, experienced a dramatic worsening of poverty; the percentage of people living below the national poverty line increased from 35 percent in May 1997 to 46 percent in the fourth quarter of 1998. The Russian Federation similarly experienced a jump in poverty from an estimated 12 percent during the Soviet period to 43 percent by 1996. In the Kyrgyz Republic, poverty was fundamentally a rural phenomenon, whereas in Bulgaria and Hungary in 1997, the Roma population accounted for a disproportionately large share of the poor (World Bank Group, 2002). In the United States, almost 33 million people were living below the poverty level in 2001 (U.S. Census Bureau, 2002b).
Distribution of the population (1.2 billion) living on less than $1 per day, 1998.
NOTE: The $1-a-day threshold is expressed in 1993 purchasing power parity terms.
SOURCE: World Bank Group, 2002.
Health is closely correlated with economic productivity; indeed, recent studies indicate that macroeconomic performance may be more closely correlated with disease burden and demography than with other commonly used indicators, such as fiscal policy and political governance (Bloom and Sachs, 1998). One might expect the poorest countries to be among the world’s least integrated into the global economy, as measured by trade and foreign direct investment. However, this is not uniformly the case: while cross-border trade accounted for nearly one-fifth of the United States’ and one-third of Mexico’s gross domestic product in 1999, in Botswana, to take one extreme case, that figure was 44 percent (World Bank, 2002). Moreover, because of their lower overall volume and diversification of production, poor countries tend to be more sensitive to fluctuations in international commerce; shocks to commodity prices and terms of trade can drastically affect government revenues, the availability of foreign currency reserves, and economic activity in general (Diaz-Bonilla et al., 2001). Finally, the level of indebtedness in poor countries is often so high as to necessitate close coordination with outside agencies in setting fiscal policy.
The influence of the global economy is not limited to general economic function, but impacts directly on the health sector. In several relatively affluent countries with high disease burdens, increases in earnings for upper-income percentiles are leading to the formation or expansion of markets for private health services (Sbarbaro, 2000), a trend that has at times been actively promoted by development agencies (Stocker et al., 1999). Without proper planning, health-sector privatization can interfere directly with government efforts to combat emerging infectious diseases (Kim et al., 1999). Meanwhile, high levels of indebtedness in many countries with high disease prevalence can divert desperately needed funds from the health sector (Piot and Coll Seck, 2001).
Participation in the globalization process can, in the worst case, further widen international health disparities, leaving the poor even more vulnerable to disease (Diaz-Bonilla et al., 2001). Then, too, spillovers occur in the opposite direction: financial crises, environmental deterioration, and the epidemic spread of emerging infections can impact with great force far beyond their nominal zones of incidence. As a matter of both enlightened self-interest and, perhaps, a newly transnational ethical sensibility, it is essential that the United States and other developed countries incorporate in their response to emerging infectious diseases close attention to public health in the developing world.
WAR AND FAMINE
War and famine are closely linked. Not only do they both lead to severe disruptions in food distribution and consumption, but in fact a frequent causal relationship exists between the two. As of April 2001, the Food and Agriculture Organization (FAO) was tracking 16 countries with “food emergencies” (i.e., catastrophic declines in crop production) in sub-Saharan Africa; 9 of these emergencies were directly related to civil strife (FAO, 2001). So-called “complex humanitarian emergencies” provoked by the combined conditions of famine and war contribute directly to the spread of infectious diseases and have fairly consistent sequelae, including malnutrition, measles, diarrheal diseases, respiratory tract infections, and malaria (Toole and Waldman, 1990).
Between 1990 and 1998, 108 wars worldwide claimed 5.5 million lives (Wallensteen and Sollenberg, 1999). Conditions of armed conflict generally result in a breakdown of domestic stability, loss of food security, and destruction of the medical infrastructure. While acquiring reliable data in war-torn regions is difficult, millions of people are thought to die each year not from direct acts of violence, but from inadequate health services (Roberts, 2001; Toole et al., 1993; Toole and Waldman, 1990). According to one recent study, for example, patients with tuberculosis whose chemotherapy was disrupted because of war were three times more likely to die than those who were fully treated during peacetime (Gustafson et al., 2001). As noted, such disruption in treatment not only affects an individual’s risk of death, but also increases the risk of the emergence of drug-resistant strains (Khan and Laaser, 2002; Federation of American Scientists, 2002).
The direct human cost of battle pales in comparison with the human cost of its aftermath. Of the countries designated by the National Intelligence Council in 1999 as sites of humanitarian emergencies, well over half were also sites of high-intensity conflict (Wallensteen and Sollenberg, 2001). These upheavals had consequences far beyond their nominal zones of incidence. A recent estimate put the number of war refugees worldwide at a full 1 percent of the global population (Summerfield, 1997). In the
present context, displacement due to war can contribute significantly to the emergence and spread of infectious diseases (Kalipeni and Oppong, 1998; Murray et al., 2002). In most of these emergencies, three out of four deaths are attributable to communicable diseases. Refugee camps are usually crowded and dirty, with little or no access to medical care or protection from vectors, and full of people from many different geographic areas (and thus probably carrying a broad range of infectious agents). For example, in 1994 more than a million Rwandan refugees were sheltered in Goma, Democratic Republic of the Congo, formerly known as Zaire, when cholera and dysentery swept through the camps, killing 12,000 people in just 3 weeks. In the post-conflict phase, malaria accounted for over one-third of the total mortality among displaced populations in Central Africa in the aftermath of the Great Lakes crisis of 1994, and TB was estimated to have caused one-fourth of all deaths among refugees in Somalia in the 1990s (Connolly, 2002).
While related in some instances to large-scale environmental patterns, famine is correlated less with weather than with social, economic, and political forces, including land tenure, deforestation, and rapid demographic change. According to one theory, a root cause of famine is ultimately a deficiency in food “entitlement” owing to political disenfranchisement (Sen, 1981); an exclusive focus on issues of food production that ignores this root condition is deeply counterproductive (Sen, 1999). During the Rwandan genocide, to take one extreme example, famine conditions in the vicinity of the refugee camps were thought to have been greatly exacerbated by the toll that the shigellosis epidemic exacted on household industry (Paquet and van Soest, 1994).
As with war, the causal chain between famine and disease is bidirectional (Topouzis and Hemrich, 2000). Emerging epidemics can severely disrupt food production, especially through high mortality and morbidity among workers in agricultural areas and the depletion of family savings to care for those stricken. The social epidemiology of HIV in sub-Saharan Africa testifies to this bidirectional phenomenon; countries that are more dependent on agriculture are affected more by HIV/AIDS (Topouzis and du Guerny, 1999). Clearly, HIV is a recipe for a catastrophic decline in food supplies; preliminary data suggest that such a decline has in fact taken place. In Zimbabwe, according to a recent report cited by FAO, communal agricultural output has declined by half over the past 5 years, almost entirely as a result of HIV/AIDS.
LACK OF POLITICAL WILL
The need for political will to fight microbial threats is still invoked frequently, though honored more in the breach than in the observance. A global political commitment, like peace or universal love, is described in rather vague terms—an impossible goal whose absence is nonetheless to be lamented. Above all else, the notion is invoked in an effort to get others involved, parties whose intercession would presumably be decisive in a way that ours would not. Unfortunately, many opportunities have been lost as a result of this lack of political will and a general complacency toward infectious diseases reminiscent of that of the late twentieth century (see Box 3-18).
Vague allusions to political will have embraced, perhaps unwittingly, a rather profound concept. Such references may be traced, arguably, to Rousseau’s idea of the volonté générale. “The problem,” he wrote in 1762, “is to find a form of association which will defend and protect . . . the person and goods of each associate, and in which each, while uniting himself with all, may still obey himself alone, and remain as free as before.” And the solution? “Each of us puts his person and all his power in common under the supreme direction of the general will, and, in our corporate capacity, we receive each member as an indivisible part of the whole. At once, in place of the individual personality of each contracting party, this act of association creates a moral and collective body, composed of as many members as the assembly contains votes, and receiving from this act its unity, its common identity, its life and its will” (Rousseau, 1993).
The conception of the social contract was of course a powerful one for Thomas Jefferson and other theorists of the American Revolution. In an age of rapidly increasing global integration, it takes on new resonance—and new complexity. Who would be the parties to a global social contract, and how is their will to be determined? How can the liberties of individual countries and their citizens be balanced against their collective responsibility? Is there any collective responsibility at all? Little consensus exists on these questions among the international community. What is certain, however, is that through microbial threats such as HIV, TB, and malaria, the association of human individuals is being enforced, sometimes in the most brutal of ways. If only in this small domain, then, it is essential that we expand our conception of political will to encompass not only governments in the regions of highest prevalence, but also corporations, officials, health professionals, and citizens of more fortunate regions that, willingly or not, share with their governments a common microbial landscape.
To proceed with any hope of success in the struggle against emerging infectious diseases, our model of political will must commit four key groups of stakeholders—donors, health professionals, country authorities, and pa-
BOX 3-18 Lost Windows of Opportunity
Emerging infectious diseases are closing, or have the potential to close, windows of opportunity for infectious disease eradication or elimination. The eradication of smallpox stands as one of the outstanding achievements in the history of public health. Eradication was achieved because of a worldwide effort that was supported by the necessary political will and human and technical resources. The world was able to take advantage of the window of opportunity for smallpox eradication because a safe vaccine was available.
In the year that smallpox was declared eradicated (1980), HIV appeared and rapidly colonized Africa and the world. Today the prevalence of HIV is greater than 25 percent in some adult populations, such as that of the Democratic Republic of Congo (formerly Zaire). In the United States, a military recruit who was immunized against smallpox developed generalized vaccinia because he was HIV-seropositive and died (Redfield et al., 1987). That tragic event highlights the fact that if the global smallpox eradication campaign had been postponed, the world would not have been able to eradicate smallpox as easily as was the case before 1980.
Many windows of opportunity have been lost. In the 1950s and 1960s, gonorrhea was highly prevalent throughout African countries. Governments did not attempt to change people’s behavior to prevent its transmission. Treatment was offered either infrequently or not at all. When available, treatment for sexually transmitted diseases was many times more expensive than treatment for other diseases, especially in the private mission hospitals throughout Africa. Therefore, gonorrhea went largely untreated, and its prevalence increased to a less manageable level. Today, gonorrhea is present throughout Africa, where it causes infertility in women and is one of the major driving forces in the HIV epidemic, facilitating the transmission of the virus. Had effective public health education been in place in the 1960s to help change sexual behavior and had antibiotic treatment been used effectively, there would not be such a great problem with gonorrhea today. In this case, a window of opportunity to control one disease and reduce the rate of transmission and impact of a far more serious disease has been lost.
The prevalence of tuberculosis (TB) and multidrug-resistant TB is increasing globally. The emergence of HIV facilitated the resurgence of TB, another example of a case in which a window of opportunity has been lost. Global surveys show that there is a 1 percent prevalence of resistance to at least one TB drug. Multidrug treatment for TB costs between US$20 and US$30 for a complete cure, but treatment costs are approximately US$3,000 for multidrug-resistant TB. In many places, a window of opportunity to achieve a manageable level of TB by the proper use of drugs has been lost.
The global effort in the 1960s and 1970s to eradicate malaria succeeded in eradicating malariologists, but not the disease. Today, the malaria parasite is resistant to the drugs of choice—chloroquine or pyrimethamine–sulfadoxine (Fansidar), or both—because of improper treatment. Drug-resistant malaria takes longer to respond to treatment. In addition, the mosquito species that transmit the parasite are resistant to the insecticides that previously controlled them because of the improper use of those insecticides and a breakdown of the public health infrastructure needed to monitor their appropriate use. A window of opportunity to eliminate malaria and mitigate its impact has been lost, and as a result, increasing numbers of adults are losing work and more children are dying because of the resurgence of malaria.

The spread of other infectious diseases has resulted from lost public health opportunities to prevent their spread. Poor public health practices by local hospital workers in Kikwit, Zaire, drove the 1995 Ebola hemorrhagic fever outbreak. A cycle of transmission among the patient care staff spread the virus to their families and additional patients. The international community learned of the outbreak in May 1995, nearly 20 weeks after the first case had occurred. Poor communication, poor infection control practices, and poor preventive public health measures reflect the weak public health care systems and infectious disease surveillance capacity in most of Africa. With the end of the Cold War, the end of the colonial era, and the decline of Western interest in tropical diseases, the public health infrastructure in many African countries has deteriorated. Infectious disease surveillance is nearly nonexistent, and emerging infections frequently go unrecognized and unreported.

Immunization, the vanguard of public health practice, is losing ground in both developing and developed countries. For example, the rate of immunization against yellow fever is declining in most countries of the world, particularly those in which the yellow fever virus is endemic. It is very difficult to get an African government to commit to programs of vaccination against yellow fever as part of routine immunization efforts, even though the vaccine is safe and inexpensive and confers long-lasting immunity. In addition, tourists are becoming increasingly lax about obtaining vaccinations. An international alert recently occurred when a photographer returned to Germany with an unknown disease. At first it was thought to be Ebola hemorrhagic fever, but yellow fever was confirmed as the diagnosis.

The bovine spongiform encephalopathy (BSE) outbreak in cows in the United Kingdom, and the subsequent outbreak among humans of variant Creutzfeldt-Jakob disease (vCJD), illustrates the consequences of poor animal food-handling practices and inadequate public health measures. In the late 1970s, the procedures for rendering bonemeal and other products from animal carcasses changed. The resultant food products were used in animal feed. However, infectious agents were transmitted through the animal feed from infected carcasses back into ruminants, resulting in the BSE epidemic and the transmission of the BSE agent to humans, and ultimately in vCJD.

SOURCE: Institute of Medicine (2001b).

tients and civil society—to the necessity of collective action. These groups are quite interdependent in their commitments. To secure the participation of patients and community members (for example, in undergoing testing for TB or HIV), it is often necessary to convince them that treatment will be available, and then to structure their active participation through grassroots-oriented program activity. To achieve high-level cooperation in establishing those programs, it is necessary to convince country authorities that adequate and consistent funding will be available. And to secure that funding from private and public donors alike, it is necessary to convince them that interventions are reasonable in scope and have been designed appropriately
and with sufficient attention to scientific and clinical safeguards; that effective public health agencies are in place to implement these programs; and, above all, that the programs will ultimately redound to the benefit of the global community as a whole.
Fortunately, much progress has been made in the last few years toward satisfying these reasonable provisions. The Global Plan to Stop TB and the WHO-led Commission on Macroeconomics and Health have each made significant strides, establishing precise estimates of resource needs for specific global interventions. The inauguration of the United Nations Global Fund provides a focal point for program activity around HIV, TB, and malaria. This useful institution has the capacity not only to direct necessary program inputs efficiently, but also to ensure that they are used in a manner consistent with best scientific practices. Structures such as the Global Fund and the MDR-TB Green Light Committee already are serving as a nexus of project activity. Given sufficient attention and resources, they have the capacity not only to turn the tide against these emerging infectious diseases, but also to build infrastructure—and perhaps more important, global consensus—sufficient to combat infections that have not yet emerged.
INTENT TO HARM
The threat of intentional attacks using biological agents on the United States and other countries has never been as serious as it is today. Even before the events in late 2001, when our nation experienced a lethal bioterrorism attack, there was reason for grave concern. Modern history confirms that biological weapons were explored by many nations, although most programs were officially terminated with the Biological Weapons Convention (BWC) treaty, developed in 1972 and now ratified by more than 140 nations. That treaty prohibited the possession, stockpiling, or use of biological weapons, although it contained no provisions for monitoring, inspection, and enforcement (Kadlec et al., 1999).
The United States abandoned its program for offensive biological warfare in 1969, but had been successful in weaponizing infectious organisms and toxins. The revelation in the mid-1990s that the Soviet state had secretly developed a similar but more extensive enterprise after signing the BWC treaty (Alibek, 1999) increased alarm regarding this potential threat. Concern heightened with the disclosure of an ambitious biological weapons program mounted by Iraq (Davis, 1999; Ekeus, 1999), as well as findings that Aum Shinrikyo, the Japanese group that released nerve gas in the Tokyo subway system, had also experimented with botulin and anthrax and sent teams to Zaire in an effort to obtain Ebola for use as a weapon (Olson, 1999). Episodes in the United States involving extremist groups or individuals that had obtained dangerous pathogens, such as plague bacillus,
for dubious purposes added to the growing perception of risk (Henderson, 1999).
Today, no one should doubt the likelihood that the development of weapons of mass destruction lies within the reach of others. Some have taken comfort in the fact that the extensive programs of the United States and the former Soviet Union are believed to have been beyond the capacity of other nations. However, we must recognize that these programs manufactured multiple agents without the benefit of today’s advances in science and technology, which have significantly broadened the field of potentially capable state and nonstate actors.
Numerous commissions have reviewed the threat of bioterrorism in recent years (United States Commission on National Security/21st Century, 2001; National Commission on Terrorism, 2000; Gilmore Commission, 2000). They have uniformly concluded that the United States is vulnerable to a bioterrorist attack and that the likelihood of such an event is high. Nations suspected of having offensive biological warfare programs have been named by the Office of Technology Assessment (U.S. Congress, 1993a), and these same states are often also identified as terrorist sponsors. In light of these agreed-upon threats, why has there been so little concern about this possibility in many quarters, and why has so much surprise been expressed over the outcome of a handful of letters containing anthrax spores dispatched through the mail? One important factor is a lack of familiarity among civilian scientists with the concepts that were the pillar of the old U.S. biowarfare program as well as the Soviet program, particularly in regard to the danger of aerosols. Another reason may be the unexpected use of an envelope as the delivery mechanism for such a deadly manufactured powder as that used in the anthrax letters of 2001, instead of a more stealthy and lethal dissemination system.
Nature of the Threat
When one considers the ways in which the intentional use of a biological agent might be carried out, the first rule should be that we do not know who the possible terrorist will be, his or her motivation, or the wherewithal that may be available for the attack. Thus, attacks intended to incapacitate selected persons and thereby gain attention or to cause serious illness for revenge might employ a very different approach than attacks designed to cause mass casualties. An effort by a disgruntled clinical laboratory worker might have a very different scope from one by a well-funded nonstate organization or a state-sponsored group. Parenthetically, the failure of the Japanese Aum Shinrikyo cult to succeed with biological terrorism should not provide much comfort concerning the need for state sponsorship, given the manifest ineptitude of the perpetrators (Smithson, 2000).
A second issue is the dissemination of factual information concerning the real dangers of such an attack. Some (including members of this committee) would argue that the less said, the better. However, American society does not respond without facts and informed public opinion. This means the actual dangers must be explained to the public and responsible political leaders without inflammatory rhetoric or disclosure of detailed methods for the assaults. In striking this balance, we take some chance that plain speaking could motivate some to undertake the very actions we are trying to prevent. However, candid discussion of the facts may help the public, the media, and health authorities respond in a calmer and more rational fashion than was observed with the aerosol anthrax attacks of 2001. The reality of those attacks is far more provocative than any abstract discussion (see Box 3-19).
Microbes could be delivered to a target population by multiple routes. Directly inoculating victims, infecting natural vectors or reservoirs and loosing them on the target population, or infecting a few people and counting on their spreading the infection even further are some possibilities. If we focus on terrorist strategies that can inflict mass casualties, however, none of these mechanisms is highly feasible today with the exception of smallpox, a virus that is well known to spread from human to human after a long and successful career in that evolutionary niche. If we conclude that other organisms must be delivered directly to the target host, we should also consider water, food, and aerosols as potential vehicles of infection. Contaminated water from wells and storage containers has been associated with outbreaks of disease, but the use of this approach for causing mass casualties is limited because of the dilution factor, chlorination, and the usual treatment of water before consumption in this country. Foodborne pathogens have caused many outbreaks in the United States and are a major cause of morbidity and mortality. Food items, however, are usually not consumed synchronously except at special events, although the extensive network of global commerce can assist in distributing an initial source over wide geographic areas. Improved surveillance of foodborne diseases and newer methods of molecular typing of offending organisms should provide a countermeasure to the possible wide dissemination of contaminated food. If a few cases are recognized and traced to a food source, warnings and recalls may serve to protect others from catastrophic harm.
The overall societal impact of any one of these dissemination methods could be considerable, regardless of the actual health damage. Case studies already exist, including nonlethal Salmonella infection of several hundred citizens (Torok et al., 1997); sarin gas attacks with 12 deaths (Smithson, 2000); food tampering; and, most recently, anthrax delivered by letter and even anthrax hoaxes. Perhaps the most important route of attack, however, is aerosolization because of its ability to cause such large numbers of casualties. The deficiencies of aerosols, such as dependence on meteorological conditions, the unsuitability of most organisms for airborne spread, and the technical demands involved, may be counterbalanced in the hands of skillful perpetrators by the advantages of standoff attack, the silent spread of incapacitating or lethal disease, and wide-area coverage.

BOX 3-19 Anthrax: Postmarked for Terror

From October 4 through November 2001, CDC, state and local public health authorities, and local health care providers identified a total of 22 cases of anthrax, including 11 confirmed cases of inhalational anthrax and 7 confirmed and 4 suspected cases of cutaneous anthrax (Jernigan et al., 2002). Five of the inhalational anthrax cases resulted in death. Most of the infected individuals were postal workers at facilities in New Jersey and the District of Columbia, where letters contaminated with anthrax were handled or processed using high-speed sorting machines. Approximately 300 postal and other facilities were tested for B. anthracis spores, and some 32,000 persons were administered antimicrobial prophylaxis following potential exposure at workplaces. About 5,000 persons were advised to complete a 60-day course of antibiotics. Additionally, CDC confirmed three isolates of B. anthracis, indistinguishable from the U.S. isolates, which had been recovered from the outer surface of letters or packages sent in U.S. State Department pouches to the United States Embassy in Peru (2) and Austria (1) (Polyak et al., 2002).

Historically, human anthrax in its various forms (i.e., inhalational, cutaneous, and gastrointestinal) has been most common in people who have regular, close contact with animals or animal products contaminated with Bacillus anthracis spores. Both livestock and wild herbivores are reservoirs of anthrax, which can be spread to humans during slaughtering or, less often, by human ingestion of contaminated meat. Upon exposure to air, B. anthracis transforms from a vegetative state to a highly resistant spore that can remain viable for years; dried or otherwise processed skins and hides of infected animals may harbor the spores and are the primary mechanisms by which the disease spreads worldwide.

Anthrax was known as “woolsorters’” disease in the 1800s, when textile workers became ill from exposure to spore-contaminated animal fibers. Although improved industrial hygiene and restrictions on imported animal products dramatically reduced the number of cases of human anthrax in the United States in the latter part of the twentieth century, epidemics occasionally occur in this country among employees who work with animal products, especially goat hair. Cutaneous and gastrointestinal outbreaks related to handling and consuming infected cattle meat occur much more regularly in other parts of the world. The estimated number of human cases of anthrax worldwide is 20,000 to 100,000 per year as a result of agricultural or industrial exposure (Braunwald et al., 2001). The largest outbreak of inhalational anthrax occurred in Sverdlovsk, Russia, in 1979, after an accidental aerosol release from a military facility; at least 66 people were documented to have died as a result (Meselson et al., 1994).
Aerosols and droplets have long been recognized as routes of microbial transmission. Measles, influenza, smallpox, and tuberculosis are all known to be transmissible between patients by droplets; measles can also be spread by aerosol. In the laboratory, aerosol transmission of tularemia, rickettsiae, viral hemorrhagic fevers, and many other agents is a threat to the microbiologist (DHHS, 1999). Artificially generated aerosols of anthrax and other agents are high on the list of terrorist attack options. Decreases in human tuberculosis and virtual elimination of diseases such as measles and smallpox from common medical experience, as well as the development of enhanced methods for protecting laboratory workers (ironically using technology developed during the U.S. biowarfare program), have resulted in a loss of appreciation for this route of infection. Yet the U.S. and Soviet biological weapons programs were based largely on the properties of selected agents for causing large-scale infection of human populations under the proper meteorological conditions and with carefully developed methods of aerosol dissemination. Moreover, the terrorist could attack enclosed environments, such as stadiums or large buildings, to negate the meteorological factors that degrade a small-particle aerosol.
Biological agents have not seen widespread use in warfare, so it is not surprising that there is skepticism as to their efficacy. It is generally not appreciated that the U.S. program in offensive biological warfare (terminated in November 1969) rigorously tested each step in the chain from the selection of a microorganism by several criteria to the delivery of a credible biological attack (U.S. Congress, 1993a, 1993b; Rosebury, 1947; Hersh, 1968; McDermott, 1987; Cole, 1997; Sidell et al., 1997; IOM, 2001c). Tularemia is an excellent example because extensive information on this agent is available in the published literature, records of congressional hearings, and the popular press. From its initial isolation (Francis, 1921), this organism was notorious for causing infections in the laboratory, a frequent hallmark of aerosol infectivity. The agent’s aerosol properties were studied intensively, and methods were found to enhance its stability in storage and in aerosols. Animals and later humans were challenged with graded doses of the bacterium delivered in different particle sizes to establish the quantitative properties of these aerosols. Open-air dissemination was mimicked using a surrogate organism, Serratia marcescens, and this effort confirmed that an organism with the aerosol stability and infectivity of Francisella tularensis could cause mass casualties over large geographic areas, provided attention was given to meteorological conditions. The areas affected could reach thousands of square kilometers. The resulting environmental retransmission from the large numbers of different nonhuman mammalian and arthropod species that would be infected in a tularemia attack cannot be
evaluated. Thus, little doubt exists that numerous human casualties could be caused by efficiently weaponized organisms readily available in nature.
A relatively small number of agents are suitable for causing thousands or hundreds of thousands of casualties, and this fact may provide a basis for prioritizing medical and other measures to deny the intent of terrorists. It is impossible to focus on every possible agent, and discussions persist among experts as to whether some should be added to or omitted from the list of those to be addressed. However, general agreement has been reached that those pathogens cited in Box 2-2 in Chapter 2 are the most deadly and the most likely to seriously destabilize government functioning and civil society. They grow to excellent titer for more efficient manufacture, and they are highly stable and infectious in aerosols when properly prepared. Toxins are inherently less efficient because they cannot match the killing or incapacitating power of these highly infectious organisms; the toxins must produce their effects as delivered, but the infectious agents grow and produce toxins or other effects in the recipient’s body.
According to a WHO scenario, several infectious agents could be expected to produce 35,000 to more than 100,000 casualties if 50 kg were delivered in a line source and carried downwind over a populated area. In the case of some of the more stable agents, downwind reach would exceed 20 km. The Office of Technology Assessment (U.S. Congress, 1993b) has published similar figures. It must be borne in mind that the U.S. and Soviet programs prepared metric tons, not kilograms, of agent, and that appropriate devices for delivering line sources or multiple overlapping point sources were available (Alibek and Handelman, 2000; Meselson et al., 1994; Sidell et al., 1997). The impact on infected members of the population would depend on the agent used and the nature of the response (for example, the alacrity with which initial patients were recognized, public health and medical infrastructure, vaccine and antibiotic stockpiles).
The additional impact that might be possible by modifying naturally occurring organisms using methods well within the reach of simple biotechnology, including induction of antimicrobial resistance, enhancement of virulence by the addition of toxin genes, or selection of more stable or virulent organisms, is formidable. Issues surrounding the more extensive engineering of threat agents are beyond the scope of this discussion. It is important to note, however, that the potential exists.
A CASE IN POINT: INFLUENZA—WE ARE UNPREPARED
The factors that underlie the emergence of all infectious diseases are expanding in magnitude and converging at an ever more rapid pace, thus increasing individual and societal vulnerability to infection. Not only do individual factors lead to the emergence of infectious diseases, but the convergence of factors in time and space can lead to effects greater than the summing of individual factors might predict. Recognition of a convergence of factors can provide warning of an impending microbial threat, and an impetus to act now rather than simply react after an infection has become rooted in society. A better understanding of how the factors involved in emergence can converge to change vulnerability to infectious diseases would allow better preparedness for the prevention and control of microbial threats to health.
Humanity’s struggle with influenza is illustrative of such a convergence of factors, which has resulted in maintaining the presence of this virus and periodically led to epidemics of the disease. Social, political, and economic factors interact with ecological factors to drive influenza viruses to respond through biological and genetic factors, thus circumventing human defense mechanisms and, in today’s increasingly global society, exerting effects on economic, social, and political life worldwide (see Figure 3-1 for a visual model of this convergence of factors). The challenges to the prevention and control of influenza as a natural threat illuminate the ultimate challenge of addressing the convergence of factors that led to its emergence in the first place. Indeed, influenza is the paradigm of a microbial threat to health in which continual evolution of the virus is the main mechanism underlying epidemic and pandemic human disease. The gene pool of influenza A viruses in wild aquatic birds provides all the genetic diversity required for the emergence of new strains of pandemic influenza in humans, lower animals, and birds. A new influenza pandemic in humans is inevitable, and despite the development of pandemic plans in several countries, including the United States, we remain poorly prepared.
Epidemics and Pandemics
The highly variable nature of influenza virus permits the microbe to escape immune responses generated by previous infections and to cause annual epidemics and occasional pandemics of disease in humans (Wright and Webster, 2001). Epidemics range from mild to severe; on average, in nonpandemic years influenza causes 20,000 deaths in the United States. At irregular intervals—three to four times per century—human pandemics of influenza arise. The most devastating of these in recent history, the “Spanish flu” of 1918 (see Box 3-20), caused more than 20 million deaths worldwide and affected more than 200 million people. In only a few months, it killed more people than had been killed in battle during the 4 years of World War I (1914 to 1918). Viruses descended from the pandemic strain continued to cause annual epidemics from 1920 to 1956. The 1957 “Asian flu” pandemic (caused by an H2N2 virus) killed approximately 70,000 persons in the United States. The most recent pandemic, the 1968 “Hong Kong flu,” killed approximately 34,000 persons in the United States. Thus, a pattern is evident: each pandemic is followed by relatively mild yearly epidemics caused by related viruses to which the populace enjoys widespread immunity. After a time, however, the evolving influenza virus gene pool inevitably produces a strain to which humans have no immunity. If we are unlucky, it is a highly transmissible and lethal strain.

BOX 3-20 The 1918 Influenza Pandemic

The 1918 influenza A pandemic claimed more than 20 million lives worldwide in less than a year and ranks among the worst disasters in human history. In the United States alone, it is estimated that 1 in 4 people became ill during the pandemic and that 675,000 people died.

Doubt remains as to whether the 1918 influenza pandemic originated in the United States, China, or France. There is agreement that a mild wave occurred simultaneously in the United States, Europe, and Asia in March–April 1918. It is postulated that genetic changes in that virus resulted in high pathogenicity in the second wave. The second wave occurred in September–November 1918 and affected one-quarter of the world’s population; 500 million people were clinically affected during the pandemic.

The name Spanish flu came not from major outbreaks in Spain, but from high mortality among troops in France that, for intelligence reasons, was attributed to Spanish origins. The highest mortality from the disease occurred after the arrival of American troops in France. Indeed, General Erich Ludendorff, the Imperial German Army Chief of Staff, concluded that it was the virus, not the fresh troops, that ended the World War. A remarkable feature of the 1918 pandemic was that deaths were highest among young adults in the 20–40 year age range.

Molecular analysis of the hemagglutinin (HA), neuraminidase (NA), and nonstructural genes from formalin-treated lung samples in paraffin blocks from soldiers who died in the second wave and from lung tissue from an Inuit woman buried in the permafrost in Alaska has provided information on the probable origin of the virus (Taubenberger et al., 2001). Phylogenetic analysis of the complete HA and NA sequences supports the hypothesis that the 1918 virus was derived from avian influenza precursors and was most closely related to classical swine influenza virus. To date, however, this analysis provides no insight into the enormous pathogenicity of the virus.

The return of military personnel throughout the world coincided with the peak of the second wave. In many cities, the disease was so severe that coffins were stacked in the streets, and the impact was so profound that it depressed the average life expectancy in the United States by more than 10 years. In spring 1919, a nasty but less lethal third wave occurred, and substantial mortality also recurred in 1920 (Kilbourne et al., 1987).

The complete sequence of the 1918 virus will be resolved in the near future, and reverse genetics technology is in place to remake this virus. If we wish to understand the molecular basis of high pathogenicity, remaking the virus may be the only option. If this is done, great care must be exercised to use the highest level of biosecurity. The available sequence information on the HA would permit us to make vaccines, and the sequence of the NA indicates sensitivity to the neuraminidase inhibitors. The precursor virus(es) of the 1918 virus still exist in nature, and there is nothing to prevent them or a virus of similar virulence from reemerging (Taubenberger et al., 2001).
Disturbingly, in 1977 an H1N1 virus similar in all respects to a virus from 1957 reappeared in humans in Northern China. This virus was not highly lethal—in fact, it caused only moderate respiratory illness in persons under 20 years of age. The cause of great concern was the possibility that this virus could have come from a frozen source, released accidentally from a laboratory. This event raises the specter of the reappearance of H2N2 influenza viruses that have been stored since the pandemic of 1957. No one born after 1957 has high-level immunity to these viruses, and the biosecurity of such agents is a matter of increasing concern. It has now been more than 30 years since a new pandemic influenza virus has emerged. The world’s influenza advisory groups have warned that a new pandemic is not only inevitable, but overdue.
Impact of Influenza on Society and the Economy
The social and economic impacts of influenza are most apparent during a pandemic. During the lethal wave of the 1918 Spanish flu pandemic (October–November 1918), cities throughout the world were unable to bury their dead; in undeveloped areas, entire villages perished. The social and economic burden of influenza during interpandemic periods is less well studied, especially in tropical areas where malaria and diarrheal diseases remain major problems. However, studies in Canada, the United States, and Holland have shown that annual epidemics of influenza have a major impact on hospital costs among children and the elderly and reduce productivity. Indeed, after evaluating the economic impact of interpandemic influenza, several countries have recommended the annual use of influenza vaccine. In the United States, this recommendation has been extended to all persons aged 50 years or older and those at high risk because of underlying diseases or immunosuppression. The province of Ontario, Canada, has made the most progress in this respect; in 2002, vaccination was offered free of charge to everyone over 6 months of age. In other provinces of
Canada, vaccination is still recommended for those aged 65 and older and for all high-risk groups. Broader vaccination has not been pressed in the United States, purportedly because of limited supplies of vaccine. The result is a vicious cycle, however, as manufacturers will not produce quantities beyond the demand of which they are certain.
Genetic and Biological Factors
Microbial Adaptation and Change
Influenza virus is ideally designed for continuous evolution. Its highly variable antigenic domains, which are situated at the outer end of the spike glycoproteins, permit maximal variability without compromising the function or assembly of the virion (see Figure 3-10) (Lamb and Krug, 2001). The virus’s genome comprises eight RNA segments that can be shuffled or reassorted in cells that are coinfected with multiple viruses. Because of the lack of proofreading mechanisms, influenza virus undergoes an extremely high rate of mutation as it replicates (approximately 1.5 × 10–5 mutations per nucleotide per replication cycle). To cope with the continual genetic variation of human influenza viruses, WHO has established a worldwide network of more than 100 laboratories that isolate viruses for antigenic and molecular analysis (Cox and Subbarao, 2000). These analyses form the basis of WHO’s annual recommendations for influenza vaccines for the Northern and Southern Hemispheres.
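As a rough, back-of-envelope illustration of what the quoted mutation rate implies, the sketch below multiplies it by an assumed genome length of roughly 13,600 nucleotides (the approximate total across the eight influenza A RNA segments; this figure is an assumption, not a number from the report). The result is on the order of one mutation for every five genome copies produced, so a single infected cell releasing thousands of progeny virions generates a large pool of variants for selection to act on:

```python
# Back-of-envelope estimate of mutational output per influenza genome replication.
# Assumptions (not stated in the report): influenza A genome ~13,600 nucleotides total.
MUTATION_RATE = 1.5e-5   # mutations per nucleotide per replication cycle (from the text)
GENOME_LENGTH = 13_600   # assumed total nucleotides across the eight RNA segments

# Expected mutations introduced each time one full genome is copied
expected_mutations = MUTATION_RATE * GENOME_LENGTH
print(f"~{expected_mutations:.2f} expected mutations per genome copy")

# Across an assumed burst of 10,000 progeny virions from one infected cell,
# the expected total number of new mutations is substantial.
progeny = 10_000
print(f"~{expected_mutations * progeny:.0f} mutations across {progeny:,} progeny genomes")
```

The progeny figure is likewise a hypothetical round number chosen only to show the scale; the qualitative point is that even a modest per-site error rate, applied over a whole genome and a large burst size, continuously feeds the antigenic variation the surveillance network must track.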
Unlike influenza viruses in humans, influenza viruses in their natural aquatic bird reservoirs appear to be in evolutionary stasis (Webster et al., 1992). Some avian influenza viruses have shown no changes in their surface glycoproteins for more than 50 years. The RNA continues to undergo mutation, but the mutations provide no selective advantage; these influenza viruses have become perfectly adapted to their natural hosts over the course of time. After transfer to a new host, however, the viruses evolve rapidly, undergoing a high rate of nonsynonymous mutation that alters their amino acid structure.
The existence of five host-specific lineages of influenza (in humans, horses, pigs, domestic poultry, and sea mammals) indicates that aquatic avian influenza viruses have adapted to these species, overcoming differences between avian and mammalian hosts in body temperature, cell surface receptors, and mode of transmission (see Figure 3-11). In aquatic birds, influenza virus is an enteric parasite that is transmitted by ingestion of fecally contaminated water. In humans, the virus replicates in the respiratory tract and is transmitted via aerosol. The available evidence suggests that the avian–human transition is accomplished via infection of pigs. Pigs possess receptors for both avian (α 2–3 terminal sialic acid) and human
(α 2–6 terminal sialic acid) influenza viruses and thus can act as intermediate hosts. In this respect, it is noteworthy that both the 1918 Spanish and the 1968 Hong Kong pandemic viruses were isolated from pigs and from humans at approximately the same time. The interspecies transmission of influenza usually results only in transitory, localized disease that may be mild to severe. The 1997 H5N1 “bird flu” episode in Hong Kong was one such occurrence. Six of eighteen infected persons died, but a stable lineage was not established (see Figure 3-12). The possibility that the virus might
adapt to humans, however, was sufficiently disquieting to prompt the wholesale slaughter of poultry in Hong Kong on two occasions.
During and after adaptation of influenza viruses to a new host, a continuing battle for supremacy occurs between microbe and host. The innate and adaptive human immune responses strive to clear the virus, while the virus evolves strategies to circumvent those responses. The virus stays a few steps ahead of natural or vaccine-induced human immunity by means of antigenic drift, or the accumulation of amino acid substitutions in the antigenic epitopes on the spike glycoproteins (hemagglutinin [HA] and neuraminidase [NA]) to which neutralizing antibodies bind. Antigenic shift, or the acquisition of new gene segments from the aquatic bird reservoir, can completely change the epitopes that evoke humoral and cell-mediated immunity. This phenomenon may explain in part the devastation wreaked by
the 1918 Spanish flu pandemic. The microbe has also developed ways to downregulate the innate immune response. One of the nonstructural proteins of influenza A viruses (NS1) is an interferon antagonist that downregulates interferon—a natural inhibitor of influenza viruses (Garcia-Sastre, 2002). The yearly epidemics of influenza attest to the ongoing battle between host and virus. Human interventions—vaccines and antivirals—are efficacious on an individual basis, but have had little effect on the global spread of the disease.
Two classes of antivirals are used against influenza viruses: the adamantanes, which block the ion channel formed by the influenza matrix (M2) protein, and the neuraminidase inhibitors, which prevent virus release by blocking NA enzyme activity (Hayden, 2001). The virus is able to circumvent these antivirals through the natural selection of resistant mutants. Resistance to the adamantanes emerged in the first patients who were treated. However, the microbe has had less success in developing resistance to the NA inhibitors. Resistance to these agents requires mutations in both HA and NA, and the NA mutation compromises transmission of the virus. Thus, resistance can be achieved only at a price to the virus.
In summary, the challenges presented by influenza virus reflect its ability to alter itself with remarkable rapidity. This characteristic allows it to survive, to adapt to new hosts, and to evade control strategies.
Human Susceptibility to Infection
The most severe influenza virus infection experienced by most humans is the first infection acquired after the decline of maternal antibodies; the outcome depends on the competency of the individual’s immune function and on the pathogenic potential of the specific variant of influenza virus. Patients who are immunosuppressed because of disease or therapy may shed influenza virus for long periods, and in these individuals the virus is more likely to acquire resistance to natural immune mechanisms and to antiviral therapy. The pathogenicity of influenza virus strains may also differ among host groups. Young adults were most susceptible in the 1918 pandemic, despite peak immune competence at that age. Future research may reveal that the virus was able to downregulate the host immune response through as-yet unrecognized mechanisms.
Pacific Island communities also appeared to differ in their susceptibility to the 1918 Spanish flu. The death rate among the Maori population in New Zealand was 43.3 per 1,000 people—almost six times the death rate among New Zealanders of European extraction. Socioeconomic factors account for some but not all of this difference. Other possible factors include the absence of previous exposure of the Maori population to any influenza virus.
The main preventive human defense mechanism against influenza virus infection is humoral immunity (i.e., antibodies) to the highly variable HA and NA spike glycoproteins of the virus. To recover from influenza infection and remove infected cells, on the other hand, the body depends on cell-mediated immunity. Thymus-derived lymphocytes (T cells) recognize specific antigenic epitopes on the viral nucleoprotein and polymerase proteins, and they cross-react with these epitopes on other influenza virus strains. Both types of specific immune response require prior exposure to the virus. Therefore, an immune-naïve child, who has developed neither humoral nor cell-mediated immunity to the virus, may have a severe respiratory infection. On exposure to a second influenza virus that is antigenically similar to the first but has undergone antigenic drift, the child will be infected but will recover more rapidly because of the cross-reactive cell-mediated immune response. However, there is a conundrum associated with the immune response to highly variable microbes. The child’s second exposure to influenza virus will induce a response directed mainly against the first influenza virus encountered. In this phenomenon, known as original antigenic sin, the immune system retains a lifelong memory of the first virus exposure in childhood. Thus, the antibody response is misdirected, and the efficacy of humoral immunity is reduced. This mechanism affects immunity to all infectious agents that undergo antigenic drift, including HIV.
Fifteen HA and nine NA subtypes of influenza A viruses circulate in the aquatic birds of the world. The viruses cause no apparent disease in these natural hosts, with which they appear to be in near-perfect equilibrium (Webster et al., 1992). Phylogenetically, these viruses can be divided into two clades, one in the Americas and the other in Eurasia. To date, only three of the fifteen HA subtypes have established lineages in humans. It is possible that only those subtypes have the capacity to infect humans. However, the direct transmission of avian H5N1 and H9N2 influenza viruses to humans in Hong Kong in 1997 and 1999 suggests the possibility that all subtypes can infect humans. The adaptation of influenza viruses to wild aquatic birds that migrate over vast distances (e.g., from southern South America to the North Slope of Alaska) is an evolutionary strategy that allows the widespread fecal dissemination of the viruses at no apparent cost to the host. It is only after transmission and adaptation to mammals or domestic poultry that the virus evolves into a disease-causing microbe.
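The size of the antigenic repertoire in the avian reservoir follows directly from the subtype counts above. A minimal enumeration, using only the numbers cited in the text (the identification of H1, H2, and H3 as the human-adapted subtypes is an assumption from standard influenza references):

```python
# Hypothetical illustration of the avian-reservoir antigenic repertoire,
# using the subtype counts cited in the text (15 HA and 9 NA subtypes).

HA_SUBTYPES = 15
NA_SUBTYPES = 9

# In principle, every HA subtype can pair with every NA subtype:
possible_pairings = HA_SUBTYPES * NA_SUBTYPES
print(f"{possible_pairings} possible HA/NA subtype combinations")

# Only three HA subtypes (H1, H2, H3, per standard references) have
# established stable human lineages to date:
human_lineages = 3
print(f"{human_lineages}/{HA_SUBTYPES} HA subtypes with stable human lineages")
```

The 135 possible pairings dwarf the three human-adapted lineages, which underscores how much of the reservoir’s diversity remains untested against human hosts.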
Social, Political, and Economic Factors
Animal Husbandry, Human Behavior, and Travel
The human population of the world continues to increase, as does the number of animals required to feed it. China has seen the most dramatic rise in the number of animals over the past decade. The demand for meat protein has increased strikingly as the result of socioeconomic progress, and populations of pigs and chickens have grown exponentially. Zoonotic disease potential inevitably increases in proportion to the animal population. Poultry, pigs, and people are the known hosts of influenza viruses, and most of the influenza pandemics of the twentieth century have originated in China. Substantial influenza activity has been documented in Hong Kong, which has been hypothesized to be an epicenter for the emergence of influenza pandemics. In 1997, avian H5N1 influenza virus was transmitted directly from poultry to humans, killing six of eighteen infected persons. In 1999, avian H9N2 influenza viruses were transmitted to two children and caused mild respiratory disease (see Figure 3-12). In 2001 and 2002, H5N1 viruses that are highly pathogenic to poultry and to mammals (as shown by testing in mice) reappeared in Hong Kong. To prevent spread to humans of the 2001 H5N1 viruses, all of the poultry in Hong Kong was killed and buried. Since 2001, all poultry markets in Hong Kong have been emptied on the same day each month to reduce the buildup of virus. Despite these precautions, however, all of the elements are in place to generate a new pandemic: vast numbers of the primary and secondary susceptible hosts on the mainland and in Hong Kong, and a constantly evolving pathogen. It is inevitable that an influenza pandemic strain will emerge from this mix.
However, the purchase of live poultry is a long-standing tradition, and thousands of people are employed in that industry. A change to the Western-style sale of chilled or frozen slaughtered poultry will meet with resistance until health authorities and the public recognize the ultimate cost of a new pandemic in Asia. Technical and political factors are also at work. The wide availability of refrigeration has now rendered the live poultry markets obsolete, but cultural preferences remain a strong political impediment to regulatory change. As a long-term solution, live poultry markets should be closed not only in Asia, but also in New York City. The markets in New York City are a factor in the emergence of the H7N2 influenza viruses that are causing great losses in the poultry industry in the northeastern United States. More than 4 million birds have had to be slaughtered, and the disease outbreak has prompted a ban on U.S. poultry in Japan. Besides the live markets, close monitoring of other crowded flocks of poultry will be needed.
Modern air travel (discussed earlier) will inevitably hasten the spread of a new pandemic of influenza. Once the virus appears in a major urban area, modern travel will allow its global distribution within a matter of days. The economic impact of an outbreak of highly pathogenic influenza was clearly seen in Hong Kong in 1997. The tourist and poultry industries collapsed because of the H5N1 “bird flu” incident, and Hong Kong suffered a severe economic downturn.
Intent to Harm
Recent advances in reverse genetics of influenza viruses now make it possible to generate influenza viruses to order (Neumann and Kawaoka, 2001). This new technology can reduce the time needed for vaccine preparation by 1 to 2 months if all other necessary resources are available. Perhaps more important, it will allow us to discover the molecular basis of the lethality of some viruses, such as the 1918 Spanish flu pathogen, and identify new targets for intervention in both the microbe and the host. Unfortunately, this new knowledge will also make it possible to generate extremely deadly agents—to recreate the 1918 Spanish flu virus, for example, or to add the H5N1 bird flu genes to a human influenza strain. Although influenza is not high on the list of bioterrorism agents, it has the potential to wreak widespread havoc on human life or to devastate important agricultural resources. Influenza is an exemplar of natural biowarfare; it now has the added potential to be used by humans for intentional harm.
Influenza is not an eradicable disease. It has now been more than 34 years since the Hong Kong/68 (H3N2) pandemic, and, as noted, all influenza virologists agree that a new pandemic is imminent. All of the developed countries of the world and WHO have created influenza pandemic plans to deal with such an event, and WHO is in the process of developing a Global Agenda for Influenza. Key issues in the global agenda are improvement of global surveillance, assessment of the global burden of influenza, and acceleration of vaccine development and usage.
The disturbing reality is that despite the certainty of a pandemic, even the developed countries of the world are quite unprepared for such an event. The public health infrastructure is inadequate. Hospitals lack the capacity to accommodate a surge of patients. Vaccine manufacturers had severe problems in meeting the demand in 2001 and 2002, the mildest influenza years in two decades, and the repertoire of antiviral drugs is completely inadequate. And increasing bacterial resistance to antibiotics
raises questions about our ability to deal effectively with secondary pneumonia, a common cause of influenza deaths.
If a country cannot cope with interpandemic influenza, it is likely that the pandemic, when it does occur, will cause massive societal disruption. Such disruption cannot be prevented, but it can be lessened if we take action now. A minimum of 6 months is needed to prepare a new influenza vaccine. Only 11 companies worldwide manufacture influenza vaccine, and all of these companies together could not prepare a sufficient quantity even for national needs, let alone global ones. Therefore, the only immediately available strategy in the face of an influenza pandemic is the use of antivirals. Supplies of these agents are currently tailored to meet very low demand, and it takes an estimated 18 months to manufacture significant quantities of the drugs from the starting materials. Therefore, anti-influenza drugs will be available only if they are stockpiled in advance of a pandemic. Modeling studies are needed to plan the most effective use of such a stockpile of drugs.
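The kind of modeling study called for above can be sketched in miniature. The following toy discrete-time SIR model assumes that antiviral treatment reduces transmission only while the stockpile lasts; every parameter value here is a hypothetical placeholder for illustration, not an estimate from this report.

```python
# Toy antiviral stockpile-planning model (hypothetical parameters throughout).
# Treated cases transmit less; treatment stops once the stockpile is exhausted.

def simulate(pop=1_000_000, r0=1.8, infectious_days=4.0,
             stockpile=50_000, treatment_effect=0.4, days=365):
    """Discrete-time SIR epidemic; returns (total cases, courses remaining)."""
    gamma = 1.0 / infectious_days          # recovery rate per day
    beta = r0 * gamma                      # transmission rate per day
    s, i = pop - 1.0, 1.0                  # susceptible, infectious
    remaining = float(stockpile)
    total = 1.0
    for _ in range(days):
        # Transmission is reduced only while antiviral courses remain.
        eff_beta = beta * (1.0 - treatment_effect) if remaining > 0 else beta
        new = eff_beta * s * i / pop
        remaining = max(0.0, remaining - new)  # one course per new case
        s -= new
        i += new - gamma * i
        total += new
    return total, remaining

# In this toy model, a larger stockpile yields a smaller epidemic:
no_drug, _ = simulate(stockpile=0)
stockpiled, _ = simulate(stockpile=500_000)
print(f"attack rate without antivirals: {no_drug / 1_000_000:.0%}")
print(f"attack rate with stockpile:     {stockpiled / 1_000_000:.0%}")
```

Even a sketch this crude makes the planning question concrete: the benefit of a stockpile depends sharply on whether it lasts past the epidemic peak, which is exactly the kind of threshold a serious modeling study would quantify.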
The steps needed to deal effectively with interpandemic influenza can also help in preparing for an influenza pandemic. The new initiative promoting universal influenza vaccination in Ontario, Canada, can serve as a model for the world. If demonstrated to be effective, it should be expanded to other areas. Unless vaccine usage is substantially increased during interpandemic years, vaccine manufacturing capacity will be inadequate to meet the demand generated by a pandemic.