Infectious disease epidemics and pandemics have occurred throughout human history. Progress in medical science has made us less vulnerable to their devastation now than at any point in the past. Nevertheless, the nation's current experience with the human immunodeficiency virus (HIV) and acquired immunodeficiency syndrome (AIDS) is a sobering reminder that serious microbial threats to health remain and that we are not always well equipped to respond to them.
As we approach the twenty-first century, it must be remembered that large segments of the world's population still struggle against bacterial, viral, protozoal, helminthic, and fungal invaders. Before addressing the current situation, two past experiences are worth highlighting: the plague pandemic of the mid-1300s and the influenza pandemic of the early 1900s. Both demonstrate the potential danger of uncontrolled infectious disease.
Plague, caused by the bacterium Yersinia pestis, has ancient roots. In the Iliad, for example, Homer makes reference to a plague-like illness prevalent during the Trojan War (1190 B.C.) that he noted was associated with the movement of rats into populated areas (Marks and Beatty, 1976). We know now that plague bacteria are transmitted to humans by fleas whose primary hosts are rodents. Rats, ground squirrels, rabbits, and, occasionally, even house cats can harbor infected fleas. The last great epidemic of plague occurred early in this century in India (causing more than 10 million deaths) (Mandell et al., 1990).
Bubonic plague, the most common form of the disease, is acquired directly from the bite of an infected flea. Other, less common forms of plague include pneumonic, which can develop from the bubonic form and is spread directly from person to person by the respiratory route; septicemic; and meningeal, or plague meningitis. Bubonic plague derives its name from the characteristic swollen lymph nodes (called buboes) in the groin, axilla, and neck areas. Untreated bubonic plague is fatal in half of all cases; untreated pneumonic plague is invariably fatal.
Plague is probably best known because of its role in the Black Death (so called because of the gangrenous extremities often seen in those with advanced disease), a devastating pandemic that swept through much of Asia and Europe during the Middle Ages. Some 20 million people, representing 20 to 25 percent of Western Europe's total populace, are thought to have died during a four-year period (McEvedy, 1988). In large European cities during the peak of the epidemic, people died in such great numbers that few were left to bury the dead. An accurate count of plague victims—especially the poor, who lived in the most crowded conditions—was impossible.
The Black Death arrived in Europe from central Asia in 1346, probably by the ''Silk Road," a trading route from Asia to Europe. It reached Italy by ship from Caffa (McEvedy, 1988). Like many pandemics, its spread was made possible by the devices of transportation. Medical historians believe that the plague bacillus was endemic in the marmot population of Central Asia. It is likely that Mongol invaders who killed marmots for their fur inadvertently extended the range of the Y. pestis-infected rodents and their fleas. The Mongols often rode long distances in a single day and may thereby have transported an occasional infected rat or some fleas as they moved along the trade routes toward Europe (McNeill, 1976).
In the late 1890s, bubonic plague appears to have been introduced into San Francisco by infected rats traveling aboard an Asian merchant ship. In 1900, a small outbreak of the disease developed in the Chinese population of San Francisco. That same year, ground squirrels were found to be infected; soon the plague bacillus had infected most of the area's burrowing rodents (McNeill, 1976). Plague infection is now enzootic in much of the rodent population in the western United States, Mexico, and Canada.
Thanks to modern sanitation and the availability of antibiotics and pesticides, another occurrence of the Black Death seems unlikely. Still, an outbreak is not out of the question, particularly in regions, including the western United States and parts of South America, Africa, and Asia, where wild rodent populations are persistently infected with the plague bacillus. Isolated cases of plague are reported to this day in various parts of the world, including the United States. Crowding and poor sanitation, if they occur in or near areas where rodent plague is enzootic, could provide conditions for the reemergence of this once devastating bacterial illness. Current legal restrictions on the use of most pesticides in recreational and wildlife areas limit our ability to control plague in the event of an epizootic.
Influenza, like plague, has caused illness and death among humans for hundreds of years. Given this history, it is somewhat disconcerting that today many people consider this viral disease an unpleasant but basically harmless illness. A diagnosis of influenza nowadays often generates little more than sympathy from one's friends and family, yet one need look no further than the early 1900s to realize the seriousness of the threat that can be posed by this pathogen.
Although influenza pandemics have varied in the severity of their impact, all have caused widespread disease and death. This was especially true of the 1918 influenza A pandemic, which claimed more than 20 million lives worldwide in less than a year and ranks among the worst disasters in human history. In the United States alone, it is estimated that one in four people became ill during the pandemic and that 500,000 died. Mortality from influenza and pneumonia was estimated as 4.8 per 1,000 cases in 1918, three times that of immediately preceding years.
The symptoms of infection with influenza virus are familiar to nearly everyone: fever, chills, headache, muscular aches, and cough. Although most people recover fully within a week, even during pandemics, the elderly, the very young, and those with chronic diseases are at risk of death from the viral infection itself or from complications resulting from secondary bacterial pneumonia. (This kind of pneumonia is responsible for most influenza deaths in persons over the age of 60.)
The origin of the 1918 influenza pandemic remains in doubt, but it may have begun with a mild but sizable outbreak of illness in the United States early in that year, concurrent with the final stages of World War I. The outbreak was confined primarily to military camps and crowded workplaces and was distinctive, in retrospect, because of the large number of young adults who died of the disease. Overall, however, the initial mortality rate was low, and in many places the outbreak passed without comment. Indeed, what turned out to be the first wave of the pandemic was recognized only after the second wave had passed.
The first wave swept across North America in March and April and then subsided almost as rapidly as it had appeared; the infection moved on to Europe, where it first reached epidemic levels in France in April 1918. Over the following several months, influenza spread throughout the whole of Europe. European-based fighting forces found themselves losing the service of significant numbers of troops to what became popularly known (except in Spain) as the Spanish flu, owing to its high incidence in Spain.
Although only 2 to 3 percent of those who fell ill died, the unusually high fatality rate among previously healthy young adults meant the loss of a disproportionate number of society's most productive members. This first wave spread rapidly, encircling the globe in less than five months.
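The U.S. figures cited above (one in four people ill, 500,000 dead) can be checked for rough consistency against the stated 2 to 3 percent case-fatality rate. The sketch below is a back-of-the-envelope calculation, not part of the source; the 1918 U.S. population of roughly 103 million is an assumption supplied here, drawn from census-era estimates.

```python
# Rough consistency check of the 1918 pandemic figures cited in the text.
# The population figure is an assumption, not a number from this report.

us_population_1918 = 103_000_000   # assumed census-era estimate
attack_rate = 0.25                 # "one in four people became ill"
deaths = 500_000                   # U.S. deaths cited in the text

cases = us_population_1918 * attack_rate
case_fatality = deaths / cases

print(f"Estimated cases: {cases:,.0f}")            # Estimated cases: 25,750,000
print(f"Implied case-fatality rate: {case_fatality:.1%}")  # 1.9%
```

The implied rate of about 2 percent sits at the low end of the 2 to 3 percent range quoted for the pandemic as a whole, so the figures are broadly compatible.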
The disease resurfaced in a more virulent form in the United States in August of 1918, causing large numbers of deaths in many U.S. cities as it spread from the East Coast to California. Health authorities reacted by requiring citizens to wear masks in public places and by taking other steps that were presumed to prevent the spread of disease. Many of these efforts were not put in place, however, until the worst of the epidemic had passed.
A third wave, in the spring of 1919, completed what is usually described as the Great Pandemic, although substantial influenza mortality in 1920 might have been the result of the same outbreak (Kilbourne, 1987). The 1918 influenza pandemic affected such a large number of people over such a wide geographic area that it has been the focus of a number of studies (Hoehling, 1961; Crosby, 1976; Osborn, 1977; Neustadt and Fineberg, 1978; Kilbourne, 1987).
Although the 1918 pandemic ranks as one of the single most devastating outbreaks of infectious disease in human history, the technology of the time did not permit the virus to be isolated. Thus, no sample of the causative influenza strain is available for study, which means that the virus's remarkable virulence and transmissibility have not been investigated using the tools of modern molecular virology. Serological evidence suggests, however, that a virus antigenically similar to the 1918 influenza virus persists to this day in swine.
Two major global outbreaks of influenza A have occurred since the 1918 pandemic, in 1957 and 1968. Significant but less serious pandemics occurred in 1947 and 1977. A potentially serious outbreak appeared imminent in 1976 but, for reasons that are still not clear, never materialized. Some see the 1976 episode as evidence of a failure to judge correctly the threat posed by a human pathogen. Others say that the experience simply points out how difficult it is to predict accurately the course of what could have been a devastating epidemic.
It is impossible to know when the next influenza pandemic will strike. Dangerous new pandemic influenza viruses are most likely to be the result of genetic reassortment of two different, existing influenza viruses. Fortunately, "successful" reassortments are rare. Even so, the danger of such reassortment events is always present, and many scientists believe it is simply a matter of time before one occurs again. The lethal 1983 epidemic of avian influenza in chickens (caused by a point mutation in the H5N2 subtype) is a stark reminder that small changes in the viral genome can produce pathogens of exceptional virulence and transmissibility (Kawaoka and Webster, 1988).
OPTIMISM AND INDIFFERENCE
As will be made clear throughout this report, the number and variety of microbial threats to human health are daunting. Just as clear, however, is the fact that tremendous strides have been and continue to be made in the battle against infectious diseases. In particular, advances in medical science and public health practices have vastly improved our understanding of and ability to control many of these illnesses.
Penicillin, discovered in 1928 and eventually produced in a usable form during World War II, was the first of a multitude of new antibiotics that have saved the lives of many who otherwise would have succumbed to bacterial infections. Similarly, the development and mass production of effective vaccines against such diseases as measles, pertussis, diphtheria, and polio have prevented large segments of the population from contracting these and other very serious diseases. More recently, advances in biotechnology, in particular, genetic engineering, have made it possible to produce drugs, vaccines, and other therapeutic agents with increased specificity.
Also important in the battle against infectious disease have been interventions against arthropod vectors of disease agents. Pesticides have played a critical role in suppressing arthropod-borne diseases in the United States and abroad. Unfortunately, excessive agricultural use of some pesticides has resulted in the destruction of nonpest insects and, in some cases, in food and water contamination. Public health use of these chemicals can also cause resistance in the very insects they are intended to kill. In the United States, these environmental and public health concerns led to a 1972 ban on the agricultural use of DDT, a broadly effective and inexpensive means of controlling many insect pests. Ten years later, U.S. production of the chemical ceased altogether, and there was a dramatic worldwide drop in its use. Increasingly, insect resistance and legal restrictions on pesticide use are hindering efforts to control disease-carrying vectors with the repertoire of potentially available chemicals.
One of the most effective ways of preventing epidemics of arthropodborne diseases has been to eliminate the sites in which the vectors breed and develop. This approach was attempted inadvertently as long ago as the sixth century B.C., when the Greeks and Romans undertook major engineering projects to drain marshy swamps in an effort to control outbreaks of fever, which it is now believed were due to malaria. General William C. Gorgas, a U.S. Army physician and engineer, effectively controlled malaria and yellow fever during construction of the Panama Canal (1904–1914) through a combination of drainage and larviciding (Gorgas and Hendrick, 1924). Another example is the eradication of Aedes aegypti from extensive areas of the Western Hemisphere by intensive control of the domestic water sources of this vector. Unfortunately, when such programs were ended,
either for economic reasons or because limitations had been put on the use of pesticides, vector populations grew to pre-eradication levels (Soper et al., 1943).
Historically, advances in U.S. public health have come as a result of one or more major health crises—usually in the form of an epidemic or the fear of one. In the nineteenth century, for example, state and local authorities were willing to spend relatively large sums on sanitary and quarantine measures during epidemics, but once the danger was past, funds were eliminated and the health regulations fell into neglect. Fortunately, that pattern has changed somewhat; in its place, we have established and now maintain a system of health regulations and sanitary measures that generally protect the health of the U.S. population. Over time, however, systems deteriorate, and enforcement of regulations may become more lax. It is essential, therefore, that we continue vigorous efforts to prevent the introduction of infectious agents.
Sanitation and Hygiene
The practices of public sanitation and personal hygiene have dramatically reduced the incidence of certain infectious diseases. In the United States, the mistaken notion that exposed refuse was somehow responsible for outbreaks of yellow fever led to early efforts at sanitary control. In 1796, for example, the New York Medical Society declared that local conditions, especially the "intolerable stench" around the city's docks and the filth around slaughterhouses, were primarily responsible for outbreaks of epidemic fever, including yellow fever (Duffy, 1990). Laws were passed to establish a permanent health office to enforce quarantine regulations and to authorize the cleanup of city streets.
Clean water supplies and their protection from human and other wastes are now fundamental public health principles. Where good sanitary practices are followed, many diseases that once were epidemic, including cholera, are successfully controlled. The same may be said for personal hygiene. Hand washing is an effective method of preventing the spread of many infectious agents, including cold and enteric viruses. Similarly, safe food-handling practices, including proper storage, cleaning, and preparation, have resulted in fewer cases of bacterial food poisonings, among other benefits. The pasteurization of milk, which was instituted to prevent the transmission of bovine tuberculosis to humans, has been equally effective against other diseases such as brucellosis and salmonellosis.
The use of quarantine, another approach to controlling infectious diseases, dates back at least to the Middle Ages. The name itself derives from
efforts in the mid-1400s by the Republic of Ragusa to prevent the spread of bubonic plague (McNeill, 1976). The republic, on the eastern shore of the Adriatic, required incoming ships to set anchor at sites away from the harbor. The ships' occupants were then required to spend 30 days ("trentina," later extended to 40 days, or "quarantina") in the open air and sunlight, a practice thought to preclude the spread of plague.
The earliest quarantine requirements in America were established in 1647 by the Massachusetts Bay Colony, specifically for ships arriving from Barbados. Other colonial settlements followed suit, with the goal of preventing the spread of yellow fever and smallpox (Williams, 1951). Quarantine proved to be an appropriate tool for controlling smallpox, an infectious viral disease spread easily from person to person, but the practice was ineffective in combating the spread of yellow fever, a mosquito-borne viral disease. In 1799, Congress granted authority for maritime quarantine to the secretary of the treasury and directed that state health laws (including quarantine of ships) be observed by customs officers.
In 1876, as a result of the failure of the Treasury Department to exercise its quarantine authority, U.S. Surgeon General John M. Woodworth proposed a national quarantine system. The proposal called for inspecting all incoming ships, conducting medical examinations of passengers and crews, and detaining under quarantine those thought to be infected. (The length of the quarantine was to be based on the incubation period of the specific disease in question.) These regulations were to be enforced by the Marine Hospital Service, the forerunner of the present U.S. Public Health Service (PHS).
Historically, there is no question that, as a public health measure, quarantine was relied upon more heavily than its practical value warranted. Detainment and isolation were extreme inconveniences to those affected; there was a stigma attached to being quarantined; the detection, restraint, and isolation of suspected cases of diseases that required quarantine were costly; and, because of the stigma, people failed to report their disease. Consequently, although quarantine was useful in controlling isolated outbreaks of human disease, and is still of value in combating epidemics of animal disease (if the animals thought to be infected are subsequently killed), for the most part quarantine policies did more harm than good.
Following World War II, the belief that the practice was of little use to modern disease control efforts began to gather support. A 1966 PHS advisory committee stated that "it is no longer possible to have confidence in the idea of building a fence around this country against communicable diseases, as is the traditional quarantine concept. The increasing volume and speed of international travel make this unrealistic" (Advisory Committee on Foreign Quarantine, 1966).
Following a reorganization of the PHS in 1967, the responsibility for
quarantine was shifted to the Centers for Disease Control (CDC). For several years following that shift, CDC's Foreign Quarantine Division (currently the Division of Quarantine) operated 55 domestic quarantine stations and had a sizable presence abroad. Since then, this number has been greatly reduced, in line with modern epidemiological concepts and technologies. By 1991, only 7 domestic stations remained open, and in early 1992, the San Juan, Puerto Rico, station was reopened with a staff of one (C. McCance, Director, Division of Quarantine, Centers for Disease Control, personal communication, 1992).
Currently, the United States does not require immunizations for entry within its borders. The master of a ship or commander of an aircraft, however, must radio immediately to the quarantine station at or nearest the port of arrival to report any death or illness among passengers or crew. This procedure allows for the arriving carrier to be handled in a controlled manner. To supplement the few remaining quarantine stations, U.S. Immigration and Naturalization Service and U.S. Customs Service inspectors have been trained to inspect all passengers and crew for signs and symptoms of communicable diseases at ports to which CDC staff have been assigned. Persons who are suspected of being ill are referred to the CDC staff; if the port has no such personnel, CDC staff at the nearest quarantine station are called. The agency has contract physicians at all ports of entry for medical backup. Certain animals, shipments of etiologic agents, and vectors of human diseases are required to have import permits, and their mode of shipment must conform to requirements.
Prior to 1985, CDC listed 26 diseases that invoked its authority to detain, isolate, or provisionally release persons at U.S. ports of entry. Today, only 7 such diseases are listed: yellow fever, cholera, diphtheria, infectious tuberculosis, plague, suspected smallpox, and viral hemorrhagic fevers. Over the past decade, this authority has been used only three times (C. McCance, Director, Division of Quarantine, Centers for Disease Control, personal communication, 1992).
The most heartening evidence of humankind's ability to triumph over infectious diseases is the eradication of smallpox. A systemic viral disease characterized by fever and the appearance of skin lesions, smallpox is believed by some to have been responsible for the death of more people than any other acute infectious disease. It has also been one of the most feared of all contagious diseases.
Like plague, smallpox was an ancient affliction. There is good reason to think that the Egyptian pharaoh Ramses V died of the disease in the mid-twelfth century B.C. (Fenner et al., 1988; Behbehani, 1991). A disease
considered to be smallpox was mentioned in Chinese writings dating to the third century A.D. (Fenner et al., 1988).
Smallpox probably originated in either Egypt or India and over the ensuing centuries became endemic in both countries. The disease spread, eventually becoming pandemic, as explorers, soldiers, and others infected by the smallpox virus traveled to all parts of the globe. Smallpox was introduced into Mexico by the Spanish army in 1520, killing 3.5 million Aztec Indians (more than half the population) during the brief span of two years. By the late 1500s, the disease had also decimated the populations of South America. In Europe alone, during the seventeenth and eighteenth centuries, smallpox killed about 400,000 persons every year (Fenner et al., 1988; Behbehani, 1991). As late as the 1950s, there were some 50 million cases of smallpox worldwide each year. By 1967, the year that saw the start of the worldwide smallpox eradication program, between 10 and 15 million cases were reported annually (Fenner et al., 1988).
North America was not spared. The disease was introduced into Massachusetts by European settlers in 1617 and spread rapidly. Between 1636 and 1698, six major epidemics in Boston caused a substantial number of deaths. Native Americans, like indigenous populations in other parts of the world, had never been exposed to the smallpox virus and were particularly hard hit; between one-half and two-thirds of the Plains Indians had died of smallpox by the time of the Louisiana Purchase. The practice of quarantine, instituted for the first time in the American colonies in a misguided effort to prevent the spread of yellow fever, was used with some success in the seventeenth century in the battle against smallpox. Nevertheless, epidemics during the eighteenth century sometimes affected as much as a third of the population. By 1785, smallpox had spread west to California and north to Alaska (Fenner et al., 1988).
The observation, perhaps as early as the tenth century in China, that uninfected people could be protected against smallpox infection by a process termed "variolation" offered the first hope that the disease could be controlled. In variolation, material from the pustule of an individual with smallpox was scratched into the skin of an uninfected person. In most instances, this procedure produced a self-limiting disease and, importantly, an immune reaction that protected those who had been variolated from future smallpox infection. Occasionally, people who were variolated would develop severe disease and die; furthermore, they could transmit the disease to others. Overall, however, variolation was blamed for only about a tenth as many deaths as were caused by naturally acquired smallpox. Variolation was introduced into the American colonies by Cotton Mather and was used extensively during the Revolutionary War.
Edward Jenner, an English physician, was aware that persons who worked with cows developed cowpox (a mild disease) but did not get smallpox. In
1796, Jenner inoculated a boy with cowpox lymph taken from a sore on a milkmaid. The child developed a mild local reaction. Six weeks later, he inoculated the boy with pus from a patient with smallpox. It produced no significant reaction; the boy was immune to the dreaded disease. Jenner used cowpox to immunize others against smallpox, a procedure called vaccination, which later became the means to prevent and subsequently eradicate the disease.
By the mid-1800s, Jenner's method of vaccinating against smallpox was being used in most parts of the world. The resulting steady decline in smallpox incidence was significant. In 1920, in the United States, more than 110,000 cases of smallpox were reported. By 1940, because of vaccination, fewer than 3,000 cases were reported. The last case of smallpox in the United States occurred in 1949 (Fenner et al., 1988).
Epidemic outbreaks of smallpox continued elsewhere, however, especially in countries lacking adequate health infrastructure and the economic means to support an immunization program. In 1967, a year in which there were some 2 million deaths worldwide from the disease (Fenner et al., 1988), the World Health Organization (WHO) initiated a global program to eradicate smallpox. The program combined widespread immunization in epidemic areas and extensive case-finding (surveillance and containment of disease outbreaks). During the program's 10-year life, much was learned about the importance of quality control in vaccine production and the need for flexibility in implementing a large-scale disease-control effort. Disease surveillance was key to the triumph over smallpox and is discussed more fully later in this report. The last naturally acquired case of smallpox was reported in Ethiopia in 1977.
Currently, the Pan American Health Organization (PAHO), the WHO regional office for the Americas, is leading an effort to eradicate poliomyelitis (polio) from the Western Hemisphere. Begun in 1985, the PAHO polio eradication and surveillance program had three goals: achieving and maintaining high levels of vaccine coverage; intensifying surveillance to detect all new cases; and aggressively controlling outbreaks of disease. More than 20,000 health facilities are currently included in the program; 80 percent of them report weekly to a central facility.
By the end of 1991, it was evident that the program was succeeding. In that year, wild poliovirus was isolated from only eight persons in the Americas, whereas in 1986, there were more than 900 confirmed cases. All of the 1991 isolates were from children and were wild type 1 poliovirus (De Quadros et al., 1991). Importantly, by the end of that year, transmission of wild poliovirus seemed localized to one country, Peru. The program's ultimate
goal of eradicating polio from the Americas thus appears within reach.
Several aspects of the program have helped it to succeed. The most important has been active feedback to data contributors. Not only do participating hospitals and clinics receive a weekly bulletin that summarizes and analyzes the data received by the central PAHO office each week (an activity greatly facilitated by the network of computers on which the surveillance system is based), but response teams are dispatched to outbreak areas for epidemiologic investigation within 48 hours of an outbreak report. Probably the second most important strength of the polio surveillance system is that it includes laboratory facilities, staffed with trained scientists, as well as clinical facilities. These laboratories are well supplied and equipped (all of the laboratories work with DNA probes; several have polymerase chain reaction [PCR] technology) and are able to exchange specimens as well as data.
The polio surveillance network is financed largely by its host countries, with substantial contributions from PAHO, the U.S. Agency for International Development (USAID), and the InterAmerican Development Bank (IADB). Support also comes from Rotary Club International and the CDC.
Balanced against this history of progress is the reality of a world still very much engaged in confronting the threats to health posed by a broad array of microbes. Medical and epidemiological uncertainties make it impossible to obtain an exact count of the number of infectious diseases that afflict human populations at any point in time. The evidence suggests, however, that humankind is beset by a greater variety of microbial pathogens than ever before. Some of this, of course, may be due to our increased ability to recognize or identify microbes.
Focusing on the past two decades is instructive. During this period, scientists have identified a host of apparently "new" infectious diseases, such as Lyme disease, that affect more and more people every year. Researchers are also discovering that a number of widely occurring diseases, whose exact causes had until recently remained a mystery, are probably the result of microbial infection. Peptic ulcer, a familiar and widespread condition, fits into this category (a recently described bacterium, Helicobacter pylori, is the probable cause), as does cervical cancer (strongly associated with human papillomavirus infection). The potentially infectious origins of other syndromes, such as atherosclerosis, rheumatoid arthritis, and chronic fatigue syndrome, are being pursued.
The incidence of a number of known infectious diseases is escalating, including some that were once considered under control. The reasons for
the escalation vary but include the waning effectiveness of some approaches to disease control and treatment, changes in the ways human beings interact with the environment, and the increased susceptibility of certain groups to infection. These circumstances explain the reemergence of malaria and tuberculosis, among other diseases.
Finally, the introduction of disease agents into the United States from other parts of the world continues. HIV, whose place of origin is thought by many to be Africa, was perhaps the most dramatic example because of its long incubation period, rapid spread, and initially unknown classification (see Chapter 2). There are other, less well-known examples. Malaria has been introduced in southern California and Florida (see Chapter 2), and dengue has appeared in southern Texas. Although both diseases were rapidly recognized and controlled, they are continually being introduced into this country by travelers from the tropics and thus remain constant threats.
A deadly, hemorrhagic disease of crab-eating monkeys, imported in 1990 to Reston, Virginia, received considerable media attention (see Chapter 2). Scientists at the U.S. Army Medical Research Institute of Infectious Diseases determined that the causative agent was a virus closely related to the fatal Ebola virus of Zaire and Sudan. (In Africa in 1976, 50 to 90 percent of those infected with the Ebola virus died.) The Reston virus, as it became known, was found to have infected veterinarians and other individuals caring for the monkeys, but fortunately this strain did not make people sick. Nevertheless, the episode illustrates well the potential of foreign disease agents to enter the United States.
Clearly, despite a great deal of progress in detecting, preventing, and treating infectious diseases, we are a long way from eliminating the human health threats posed by bacteria, viruses, protozoans, helminths, and fungi. The following brief snapshots of current and potential threats to health give a sense of the scope and magnitude of the challenges ahead.
Lyme disease is now the most common arthropod-borne disease in the United States. Since 1980, when only a handful of cases were reported, the incidence of the disease in the United States has grown steadily. In 1991, 9,344 cases of Lyme disease were reported to the CDC (D. Dennis, Chief, Bacterial Zoonoses Branch, Centers for Disease Control, personal communication, 1992). There is good reason to believe that these numbers are a significant underestimate. Epidemic Lyme disease is also a growing problem in Europe. In Germany alone, more than 30,000 new cases occur each year (Matuschka and Spielman, 1989). Cases have also been reported in China, Japan, South Africa, and Australia (Jaenson, 1991).
Lyme disease, which can evolve into a debilitating illness, is caused by the spirochete Borrelia burgdorferi and is transmitted by certain ticks in a complex cycle that also involves mice and deer. The disease is accompanied by an extraordinarily wide range of symptoms, many of which mimic other disease syndromes. It was initially named erythema migrans for the distinctive skin lesion that results from the bite of a spirochete-carrying tick. Diagnosis, especially during the early stages of the disease, is difficult. Treatment relies on antibiotics, but this approach becomes less effective as the disease progresses.
The emergence of Lyme disease is directly linked to changes in land use patterns. In the eastern United States (and in Europe) during the seventeenth, eighteenth, and much of the nineteenth centuries, vast expanses of land were cleared for farming. In the United States during the 1900s, however, much of the East Coast underwent reforestation, in large part owing to the decline of the small farm and the shift of agriculture to the Midwest. New-growth forest now provides an ideal habitat for deer.
Reforestation was accompanied by increased residential development in wooded, suburban areas. The resulting proximity of people, mice, deer, and ticks has created nearly perfect conditions for the transmission of the Lyme disease spirochete to human hosts. Although people are becoming aware of the risk of contracting Lyme disease, ecological trends that favor the survival of reservoir and vector hosts continue. There is little reason to think that the upward climb in incidence will level off or decline soon.
Few, if any, infectious disease specialists 10 years ago would have predicted an infectious etiology for gastritis, peptic ulcer, or duodenal ulcer disease. Yet there is now considerable evidence that a bacterium, Helicobacter pylori, infects gastric-type epithelium in virtually all patients with these conditions.
H. pylori was identified initially as an unknown bacillus in biopsies of involved areas of active chronic gastritis (Warren and Marshall, 1983). Evidence that H. pylori infection is the cause, and not the consequence, of gastritis and ulcers is of several types. First, acute infection with H. pylori in humans results in chronic superficial, or type B, gastritis (Dooley et al., 1989). Second, H. pylori infection is not associated with other types of gastritis, which would be expected if it were merely infecting already damaged epithelium (O'Connor et al., 1984). Third, antimicrobial therapy directed against H. pylori has been shown to heal gastritis and duodenal ulcers in infected patients at least as effectively as acid-suppressive therapy (Glupczynski et al., 1988; Morgan et al., 1988; Rauws et al., 1988). Fourth, gastritis developed following experimental ingestion of H. pylori by human volunteers (Morris and Nicholson, 1989). Finally, recurrence of duodenal
ulcers has invariably followed and not preceded recurrence of H. pylori infection (Logan et al., 1990).
Antibiotic treatment of gastritis, peptic ulcers, and duodenal ulcers associated with H. pylori is currently experimental. Results to date indicate that combinations of antibiotics are required to suppress or eradicate the bacterium.
Helicobacter pylori may also be associated with an increased risk of gastric carcinoma (Parsonnet et al., 1991; Nomura et al., 1991). The finding raises the possibility that the incidence of this cancer, one of the world's most common, could be reduced by screening for and treating H. pylori infection.
Malaria, which had been eliminated or effectively suppressed in many parts of the world, has greatly increased in incidence over the past two decades. This parasitic disease results in some 1 million deaths each year, mostly among children, and is becoming increasingly difficult to treat and prevent. Many previously effective drugs no longer work against strains of the parasite that have become resistant. Particularly distressing is resistance to chloroquine, once the treatment of choice for the most severe form of malaria. Equally troubling is that many once-potent pesticides are no longer effective against the mosquitoes that transmit the parasite. The use of other, still effective pesticides is often restricted.
Malaria occurs both in the tropics and, less frequently, in the temperate zones of the world. The disease was a major problem for early European settlers of North America. As recently as the early 1900s, half a million cases of malaria were recorded each year in the United States.
Currently, some 1,200 cases of malaria are diagnosed in the United States annually. The vast majority occur in individuals entering the country for the first time or returning from foreign travel. A very small number of cases result from direct transmission involving indigenous mosquitoes. Most of these have occurred among Mexican agricultural workers living in substandard conditions in California. The largest outbreak since 1952 involved 30 people in San Diego County in 1988. That experience and others like it, in conjunction with the rapid rise of drug-resistant strains of the malaria parasite, raise the possibility of even larger outbreaks in the future (see Chapter 2).
Malaria continues to pose a serious threat to the U.S. military. There were 500,000 cases of malaria among U.S. soldiers during World War II (Ognibene and Barrett, 1982; D. Robinette, Senior Program Officer, Medical Follow-up Agency, Institute of Medicine, personal communication, 1991), and more than 80,000 cases were diagnosed in American troops in Vietnam from 1965 to 1971 (Canfield, 1972). Troops sent to the Middle
East during the Persian Gulf War in 1991 were at risk of contracting malaria, which is endemic in western Saudi Arabia, Yemen, Oman, northern Iraq, and parts of the United Arab Emirates (Gasser et al., 1991). Fortunately, only six cases were diagnosed during the conflict (L. Roberts, Entomology Consultant, Office of the Surgeon General, U.S. Army, personal communication, 1992).
Dengue affects more people worldwide than any other arthropod-borne disease except malaria. More than 2 billion people are at risk, and in some Asian cities virtually every child has been infected by age 12. A small but significant percentage of those infected with dengue virus have severe disease, with hemorrhagic fever and shock.
Epidemiologically, dengue hemorrhagic fever/dengue shock syndrome (DHF/DSS) has become an increasingly important international disease. During the first half of the 1980s, 804,000 cases were reported, more than during the previous 25 years. Another 993,000 cases were reported from 1985 through 1989 (Halstead, 1990). Also troubling is the fact that by the early 1990s, Aedes aegypti mosquitoes, the primary vector for dengue virus transmission to humans, had returned in large numbers to most of Central and South America, presenting abundant opportunity for new epidemics.
In 1985, another potential dengue vector, Ae. albopictus, was found to be established in the southern United States; it had been introduced in used tires imported from Japan. This mosquito, commonly known as the Asian tiger mosquito, has spread rapidly across the South and established itself as a permanent resident. Although it has not been found to carry dengue in the United States, it has other troubling features. Eastern equine encephalitis (EEE) virus was recently isolated from Ae. albopictus mosquitoes collected in and around a tire dump in Polk County, Florida, demonstrating the ability of this imported mosquito to carry a native virus (Centers for Disease Control, 1992d). EEE virus is perpetuated in nature in a cycle that includes primarily marsh-nesting and shore birds and the strictly bird-feeding mosquito Culiseta melanura. These Ae. albopictus mosquitoes probably acquired the virus by feeding on infected birds.
Tuberculosis (TB) was the leading cause of death from infectious disease in the United States and Western Europe until the first decade of this century, and it remained the second leading cause from that time until the advent of antimicrobial drugs in the 1950s (Rich, 1944; Waksman, 1964; Dubos and Dubos, 1987). At present, TB kills more people worldwide than
any other infectious disease (Caldwell, 1987; Murray et al., in press). Each year, according to the WHO, 8 million new cases of clinical TB are diagnosed, and 2.9 million people die of the disease. In the United States, until 1985, TB incidence had been in decline for more than three decades. Between 1986 and 1991, however, 28,000 more cases were reported than were predicted to occur based on past experience.
TB is a bacterial disease whose principal manifestation is destruction of lung tissue; it is spread primarily through the respiratory route by patients with active pulmonary disease. One out of every 10 to 12 healthy individuals infected with the tubercle bacillus develops clinical disease. The case fatality rate of untreated TB in people with clinical disease is 50 percent; average survival without treatment is six months to two years.
Multiple factors are contributing to the rise in cases of TB. Of major importance are increased poverty and a growing number of homeless individuals and families, substance abuse, a deteriorating health care infrastructure for treating chronic infectious diseases, and the HIV disease pandemic (perhaps the most significant factor at present). Complacency within the medical community and among the public at large and shortages of the drugs used to treat TB are additional factors in the increase.
TB is difficult to treat. Multidrug therapy is invariably necessary, with the drugs administered over at least six months to effect a clinical cure and prevent the emergence of drug-resistant organisms. Where the health care infrastructure is adequate and compliance with treatment is maintained, cure rates should exceed 90 percent, even in HIV-infected individuals who have TB, provided that resistant organisms are not present. When treatment is inappropriate or inadequate, resistance to one or more of the treatment drugs often develops. The growing frequency of resistant organisms reflects a major breakdown in the social and health care infrastructures.
When multidrug-resistant TB is present, case fatality rates can exceed 80 percent in immunocompromised individuals (see Chapter 2). The presence of multidrug-resistant organisms puts not only TB-infected individuals but also health care workers, social workers, corrections officials, families, and contacts at risk of contracting a disease that is difficult or essentially impossible to treat. Multidrug-resistant TB now represents a major threat to health in the United States.
In the 1950s, with the advent of antituberculosis drugs, TB became one of several newly treatable infectious diseases. The common assumption regarding such diseases has been that drugs that have been effective in treating them will continue to be so. Current experience with TB is causing many to question that assumption. Other organisms that have developed resistance to frontline drugs (probably for some of the same reasons) are also signaling the possibility of trouble ahead. Penicillin-resistant streptococcal Group A and Group C infections and penicillin-resistant pneumococcal
infections have been documented, as have vancomycin-resistant staphylococcal and enterococcal infections. Without more careful prescription of antimicrobials by physicians and more consistent compliance with treatment regimens by patients, pathogenic bacteria are likely to undergo mutations that will enable them to resist available antibiotic therapy.
It is unrealistic to expect that humankind will win a complete victory over the multitude of existing microbial diseases, or over those that will emerge in the future. This will be true no matter how well stocked our armamentaria of drugs and vaccines, no matter how well planned our efforts to prevent and control epidemics, and no matter how advanced our basic science and clinical understanding of infectious diseases. Microbes rank among the most numerous and diverse organisms on the planet, and pathogenic microbes can be resilient, dangerous foes. Although it is impossible to predict their individual emergence in time and place, we can be confident that new microbial diseases will emerge.
Still, there are many steps that scientists, educators, public health officials, policymakers, and others can and should be taking to improve our odds in this ongoing struggle. With diligence and concerted action at many levels, the threats posed by emerging infectious diseases can be, if not eliminated, at least significantly moderated. To achieve this goal, however, a number of fundamental problems must be addressed. These problems fall into four broad categories:
Perceiving the Threats: The emergence of HIV disease has stimulated a high level of interest in the scientific, medical, public health, and policymaking communities. By and large, however, awareness of and concern about the threats to human health posed by other emerging and reemerging microbial diseases remain critically low. A small minority, mainly infectious disease specialists, have for years warned of the potential for serious epidemics and our lack of preparedness for them. In what can only be called a general mood of complacency, these warnings have gone largely unheeded.
Detecting the Threats: Surveillance is the primary means by which the incidence of established diseases is monitored and outbreaks of new diseases are detected. The domestic disease surveillance network in the United States is being scaled back as a result of fiscal problems in many states, raising concerns about its ability to perform a vital public health function. Equally worrisome, the existing international surveillance networks are focused on little more than a handful of well-defined diseases, and U.S. involvement in this worldwide effort is diminishing. Epidemiological
knowledge of most globally important diseases is incomplete at best; for many there is very little information.
Understanding the Threats: Despite progress in basic and applied infectious disease research, gaps remain in our knowledge about most bacteria, viruses, protozoans, helminths, and fungi. These scientific ''blind spots" have slowed or prevented efforts to understand the variety of factors responsible for disease emergence and reemergence. The diversion of funds for emergencies or, in some cases, inconsistent levels of funding have made it especially difficult to address the full range of research opportunities and needs.
Responding to the Threats: Once emerging microbial threats are detected, responses to them are often feeble. Diseases that appear not to threaten the United States directly rarely elicit the political support necessary to maintain control efforts. U.S. support for surveillance, control, and research activities in other countries is extremely limited. Here, as in other nations, failure to sustain domestic efforts to control infectious diseases is an equally serious problem. Ill-informed decision making can prevent accurate assessments of the actual danger posed by microbial threats to health, and it can slow or even halt steps to address an emerging disease problem. Ironically, these same forces can produce an overresponse to less serious situations. Finally, profit and liability concerns have undercut the market incentives for manufacturers of vaccines, drugs, and pesticides to develop and distribute needed supplies to the most impoverished populations both in the United States and in other countries.
History has shown, and this committee believes, that the threat from emerging infectious diseases is not one to be taken lightly. The development of a strategy for addressing emerging infectious disease threats requires that we understand the factors that precipitate the emergence of these agents and the resultant diseases. These factors are examined in Chapter 2.