COUNTERTERRORISM TECHNOLOGIES AND INFRASTRUCTURE PROTECTION
Using Biotechnology to Detect and Counteract Chemical Weapons

ALAN J. RUSSELL, JOEL L. KAAR, AND JASON A. BERBERICH
Departments of Chemical Engineering and Bioengineering, University of Pittsburgh
Agentase, LLC
Pittsburgh, Pennsylvania

The tragedy of September 11, 2001, and the ensuing anthrax attacks heightened public and governmental awareness of the need for reliable, cost-efficient, and deployable diagnostic and treatment systems for chemical weapons. Many of the present methods require cumbersome equipment and complex analytical techniques. In addition, many decontamination solutions are toxic, corrosive, and flammable, and therefore not appropriate for use over large areas or with personnel. Biocatalytic methods of detection and decontamination/demilitarization offer several advantages over existing methods.

Enzymes are environmentally benign, highly efficient biological catalysts on appropriate reactants, with hydrolysis rate enhancements exceeding a million-fold over the uncatalyzed reactions. They may also be used in conjunction with other processes to limit the environmental impact of existing systems. Perhaps the most attractive feature of enzymes is that they can function effectively under ambient conditions, thus decreasing energy costs and increasing safety (especially when agents are stored in proximity to explosives). To date, enzymes are the most effective known catalysts for degrading nerve agents (organophosphates [OP] are the class of compounds commonly referred to as nerve agents). Enzyme concentrations in the micromolar range are sufficient to degrade nerve agents on contact, and just 10 milligrams of enzyme can degrade as much nerve agent as 1 kilogram of concentrated bleach (a conventional method of degrading nerve agent).

The study of OP biocatalysis dates back to the mid-1940s, when Abraham Mazur found that mammalian tissue hydrolyzed OP esters (Mazur, 1946).
Douglas Munnecke (1977) used cellular extract from a mixed bacterial culture as a catalyst to detoxify organophosphorous pesticide. The rate of enzymatic hydrolysis of parathion, an OP ester commonly used in pesticides, was found to be nearly 2,500 times the rate of hydrolysis in 0.1 normal sodium hydroxide. Munnecke (1979) later attempted to immobilize the extract on porous glass and porous silica beads. It was reported that 2 to 5 percent of the activity present in a cellular extract could be retained by the beads, and 50 percent of the immobilized activity could be maintained for a full day under ambient conditions.

The enzyme organophosphorus hydrolase (OPH) was later isolated and shown to be an effective catalyst for the degradation of a range of OP esters. Frank Raushel and colleagues have studied extensively the activity and stability of OPH from Pseudomonas diminuta (Dumas et al., 1989). At Texas A&M University, James Wild's laboratory has even produced the enzyme in corn (personal communication). Squid diisopropylfluorophosphatase (DFPase), an enzyme capable of degrading nerve agents, such as soman, is now in commercial production. In expanding the library of such enzymes, Joseph DeFrank and colleagues from the U.S. Army have discovered and analyzed another nerve agent-degrading enzyme, organophosphorus acid anhydrolase (OPAA) (DeFrank and Cheng, 1991).

Many species produce enzymes that efficiently degrade nerve agents, although the natural function of these enzymes remains unknown. The class of OP compounds has only been exposed to nature for a few decades, certainly not long enough for natural systems to have evolved enzymes to degrade them. Thus, the ability of enzymes to degrade these compounds is still an unexplained quirk of fate. The only nerve agent-degrading enzyme with a known natural function is OPAA, which is a peptidase (an enzyme that catalyzes the hydrolysis of peptides into amino acids). Interestingly, this enzyme is far less active with its natural substrate than with soman.
Cells must constantly control their environments and, therefore, must be in a position to rapidly manipulate the concentrations of biocatalysts. Thus, many proteins, such as enzymes, last for only minutes or hours under ambient conditions, and enzymes have generally evolved to be unstable molecules. A number of research groups around the world have been working for many years to stabilize enzymes for numerous applications, including catalysts for the decontamination of nerve agents. These mostly military researchers are also part of a NATO project group (PG31) working to formulate "green" enzyme-based decontamination systems.

Because the preparation and purification of enzymes can be costly, enzymes must be immobilized to increase their reusability and stability. Effective immobilization requires that the enzyme-containing material be prepared so that the enzyme maintains most of its native activity, a high operational stability in its working environment, and a high storage stability. In conventional immobilization methods, either covalent or ionic interactions link the protein to a support material. Enzymes have been immobilized on a wide variety of support materials, including alumina pellets, trityl agarose, and glass/silica beads. Polymers also make excellent enzyme-support materials because of their structural flexibility and solvent resiliency. Numerous polymers, including nylons, acrylates, and several copolymer blends, have been used as effective supports.

Combining the power of biology with the sophistication of polymers presents considerable opportunities. Synthesis of enzyme-containing polymers involves the covalent immobilization of the enzyme directly into the polymer network via reactive functionalities on the enzyme surface, thus ensuring retention of the enzyme in the polymeric material. Bioplastics prepared in this way exhibit remarkable stability under normally deactivating conditions and, thus, are ideal for the development of the next generation of responsive, smart biomaterials.

One of the most pressing needs at the beginning of the twenty-first century is for active, smart materials for chemical defense. The threat of chemical weapons is now ubiquitous, and these poor man's nuclear weapons can kill on contact. Existing chemical defense polymers are designed to provide an impenetrable barrier between us and our environment. By contrast, layered materials use polymers to bind toxic chemical agents irreversibly to the material. Degrading toxic chemicals requires catalysts that can be incorporated into the polymer in a stable and active form. In protective clothing, a layer of polyurethane, foam-entrapped, activated charcoal is embedded between several layers of polyester fabric. Polyurethanes are effective substrates for OP adsorption, and reports have documented that polyurethane foam particles can be used as adsorbent materials for pesticide vapors in farming fields. If proteins could be incorporated into polyurethanes, some interesting materials might emerge.
Enzyme-containing polyurethane materials are ideal matrices because of their ease of preparation, the large range of polymer properties that can be prepared, and the multipoint, covalent attachment of the enzyme to the polymer. Bioplastics are prepared by reacting a polyurethane prepolymer that contains multiple isocyanate functionalities with an aqueous solution containing the enzyme. The solubility of enzymes in this aqueous phase can be significant, enabling loadings up to 15 percent. Foams, gels, and coatings can be prepared depending on the reactivity of the isocyanate. Any enzyme that is present in the aqueous solution can participate in the polymer synthesis, effectively creating an enzyme-containing polymer network with multipoint attachment.

The key question is how well such biopolyurethanes function. The answers are striking. Enzymes can be stabilized from days to years, and the enzymes in no way alter the physical properties of the polyurethane. The derived materials are so active that just 1 kilogram of enzyme immobilized in a multipoint covalent fashion is enough to degrade up to 30,000 tons of chemical agent in one year.

Detoxifying a chemical agent is only part of the technology we will need to combat the real and present danger posed by these materials. Once a toxic agent binds to a surface, it must be degraded and identified. Indeed, every time a chemical or biological sensor gives a false-positive result, tens of thousands of dollars are invested in responses that are not needed. In the Middle East and Africa, false positives could lead to war; and spurious, sensor-triggering events could change the course and nature of a war.

Chemical sensors in particular are generally unreliable, and engineering solutions to the problem will require overcoming some formidable technical challenges. We must first understand how chemical weapons work and use that knowledge to design biologically based sensing devices. The common feature of biosensors to date has been a lack of stability. An effective chemical-weapon sensor must not only be able to operate under ambient conditions, but must also be able to operate in extreme desert and arctic conditions.

A material that is inherently catalytic and can therefore be used to monitor continuously and detoxify detected molecules in real time represents the "holy grail" of this emerging discipline. Indeed, a surface that can both self-decontaminate and sense the progress of decontamination would represent a quantum improvement in protection for war fighters. The rapid detection of chemical agents is critical to inspections performed under the Chemical Weapons Convention, because inspectors are rarely able to control the environment at the inspection site. It is therefore essential that analytical tools vary in sensitivity, specificity, and simplicity.

The selectivity of enzymes that are active on, or inhibited by, chemical agents of interest makes them attractive components for biosensor technology. Nerve agents exert their biological effect by inhibiting the hydrolytic enzyme acetylcholinesterase. Thus, the inhibition of this enzyme makes it ideal for use in nerve agent-sensing systems.
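The logic of inhibition-based sensing can be sketched with a toy kinetic model. This is a minimal illustration in Python; the Michaelis-Menten constants below are invented for the example, not measured values for acetylcholinesterase. The point is only that an inhibitor depresses the catalytic rate, so the absence of reaction becomes the detection signal.

```python
# Toy model of inhibition-based nerve-agent sensing (illustrative only;
# the kinetic constants are assumptions, not measured enzyme parameters).

def reaction_rate(substrate, inhibitor, vmax=1.0, km=0.1, ki=0.01):
    """Michaelis-Menten rate with competitive inhibition:
    v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S])."""
    return vmax * substrate / (km * (1.0 + inhibitor / ki) + substrate)

def agent_detected(substrate, inhibitor, threshold=0.5):
    """Report 'agent present' when catalysis falls below a fraction of the
    uninhibited rate -- i.e., the absence of reaction is the signal."""
    baseline = reaction_rate(substrate, 0.0)
    return reaction_rate(substrate, inhibitor) < threshold * baseline

print(agent_detected(1.0, 0.0))   # clean sample -> False
print(agent_detected(1.0, 0.2))   # inhibitor present -> True
```

Note that a sensor built this way reports only "inhibited" or "not inhibited"; as the text observes for the M256A1, it cannot by itself distinguish which inhibitor is present.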
The most common kit available today for agent detection, the M256A1, uses enzymes and colored substrates to detect nerve agents. Unfortunately, current continuous enzyme-based sensing is limited by the instability of most enzymes and their sensitivity to changes in the environment. The M256A1 functions by reacting phenylacetate with eel acetylcholinesterase to produce a colored material. If an agent or other inhibitor is present, the color does not appear. The test takes 15 minutes to perform and is subject to interference from a number of sources (Table 1). Another drawback is the inability of the M256A1 to distinguish between different nerve agents. All of these techniques rely on the absence of a reaction to signal the presence of an agent. These sensors would be much more informative if they provided positive responses (e.g., undergoing a color change when the surface is either clean or contaminated). Nevertheless, enzyme-inhibition assays are commonly used to detect nerve agents.

TABLE 1 Sensitivity of Nerve-Agent Sensors to Various Kinds of Interference

Interference          M8    M9    M256   M272   CAM
High temperature      Yes   Yes   Yes    Yes    No
Cleaning solvents     Yes   Yes   No     No     No
Petroleum products    Yes   Yes   Yes    Yes    Yes
Antifreeze            No    Yes   No     No     No
Insect repellent      Yes   Yes   Yes    No     Yes
Dilute bleach         Yes   Yes   Yes    Yes    No
Water                 No    Yes   No     No     No

Source: Longworth et al., 1999. Note: "Yes" indicates the sensor is sensitive to the interference; "No" indicates it is not.

In 2001, an enzyme-based sensor was developed and fielded that combines all of the desirable features and is resistant to almost all types of interference. The sensor is based on coupling the hydrolytic activity of acetylcholinesterase (an acid-producing reaction) with the biocatalytic hydrolysis of urea (a base-producing reaction). In an unbuffered solution containing substrate, the formation of acid by acetylcholinesterase-catalyzed hydrolysis would lead to a rapid drop in pH. Because changes in pH affect the ionization state of amino-acid residues in a protein's structure, every enzyme has an optimal pH at which its catalytic activity is greatest (Figure 1). As the pH deviates from the optimal level, enzyme activity decreases. Therefore, the pH of the unbuffered solution ultimately reaches a point at which acetylcholinesterase is completely inactivated.

Now consider the idealized situation in which the unbuffered solution containing the acid-producing enzyme is supplemented with a second enzyme that catalyzes the formation of base (Figure 1). In this system, the base-producing enzyme (light line) has a pH optimum significantly lower than that of the acid-producing enzyme (dark line). Thus, the formation of base will counteract the production of acid, thereby maintaining pH by creating a dynamic pH equilibrium between the competing enzyme reactions (pH 7.5 in Figure 1). Because the generation of base is biocatalytic, base is produced only in response to a decrease in pH.
If the catalytic activity of one of the enzymes is altered via inhibition, the dynamic equilibrium of the system is destroyed; the subsequent shift in pH caused by the inactivation of the dynamic equilibrium can then be used to induce a signal indicating the presence of a target enzyme inhibitor. This is the basis for biocatalytic, dynamic-reaction equilibrium sensing. The advantages over traditional enzyme sensors include rapid signal development; strong and intuitive responses; and high resistance to interference by temperature. An interferant-resistant nerve-agent sensor manufactured by Agentase, LLC, couples acetylcholine, acetylcholinesterase, urea, and urease in a polymer with a pH-sensitive dye.

FIGURE 1 The pH dependence curves of two enzyme-catalyzed reactions. The base-producing enzyme (light line) has a lower optimal pH than the acid-producing enzyme (dark line). When both enzymes are present, the pH instantaneously stabilizes at the intersection point, which we call the pH "set point."

In the rare event a weapon of mass destruction is used, such as another release of sarin in the Tokyo subway, no effective, environmentally benign decontamination systems are available. Even though enzymes provide an efficient, safe, and environmentally benign method of decontaminating nerve agents, several major hurdles still hinder their use. To decontaminate significant quantities of nerve agent, the system will require a good buffering system to handle the large amounts of acid produced from the enzyme-catalyzed hydrolysis. This problem is amplified in low-water environments, where the localized concentration of agent can easily reach 0.3 molar.

Currently, the preferred buffer for enzymatic decontamination systems under development by the U.S. Army is ammonium carbonate. The addition of solid ammonium carbonate to water results in a pH of 8.5 to 9.0 with no adjustment needed, and ammonium ions are known to stimulate the activity of OPAA (Cheng and Calomiris, 1996). Ammonium carbonate, however, does not have a high buffering capacity, making it impractical for use in large-scale decontamination.

Recent studies at Porton Down, United Kingdom, demonstrated that in detergent and microemulsion systems containing 50 millimolar (mM) ammonium carbonate buffer, enzymatic degradation of high concentrations of soman (1.3 to 1.5 percent) resulted in a drop in pH from 9 to 6 in less than five minutes, at which point the enzyme was inactive. Complete decontamination of the agent was not achieved. Increasing the concentration of buffer two-fold slowed the pH drop; the pH fell to 6 in less than eight minutes. However, complete decontamination was still not achieved.
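The set-point behavior of Figure 1 can be reproduced with a toy numerical model. The bell-shaped activity curves and the rate constant below are invented for illustration, not measured enzyme kinetics; the model only shows that competing acid- and base-producing reactions drive an unbuffered solution toward the intersection of their pH-activity curves.

```python
import math

# Toy simulation of the biocatalytic pH "set point" of Figure 1.
# The activity profiles and rate constant are illustrative assumptions.

def activity(ph, optimum, width=1.5):
    """Bell-shaped pH-activity profile, normalized to 1 at the optimum."""
    return math.exp(-((ph - optimum) / width) ** 2)

def set_point(ph=9.0, dt=0.01, steps=20000):
    """Relax an unbuffered solution containing both enzymes toward the pH
    at which base production balances acid production."""
    for _ in range(steps):
        acid = activity(ph, optimum=8.5)   # acid-producing enzyme (dark line)
        base = activity(ph, optimum=6.5)   # base-producing enzyme (light line)
        ph += dt * (base - acid)           # net base production raises pH
    return ph

print(round(set_point(), 2))  # converges near 7.5, the intersection point
```

Starting from either side, the pH relaxes to the crossing point of the two curves. In this model, inhibiting one enzyme removes one term of the balance and lets the pH drift toward the other enzyme's optimum, which is exactly the shift that dynamic-reaction equilibrium sensing turns into a signal.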
The use of conventional biological buffers at concentrations sufficient to maintain pH in an optimum range for enzyme activity introduces additional obstacles, including potential enzyme inhibition and chelation of essential metals in the enzyme structure.

The biocatalytic, dynamic-reaction equilibrium approach previously described can overcome such obstacles in the large-scale decontamination of nerve agents. Base can be generated on demand in response to the decrease in pH resulting from agent degradation. Consider an unbuffered aqueous system that includes a nerve agent-degrading enzyme, urea, and urease. In the presence of a nerve agent, biocatalytic hydrolysis would cause a rapid decrease in pH. In a method analogous to biocatalytic, dynamic-reaction equilibrium sensing, this decrease activates urease, which generates base via the conversion of urea. Our laboratory has successfully demonstrated the use of this approach for the complete decontamination of paraoxon by OPH, using urease-catalyzed urea as a buffering agent (Russell et al., 2002). Such defense systems represent a significant advance in chemical weapons defense, giving military personnel and emergency response teams the ability to achieve complete decontamination using minimal amounts of buffering material.

In summary, chemical weapons exert their terrifying effects on human physiology by interrupting biological processes. The chemicals bind tightly to enzymes, but they can also be degraded by enzymes through fortuitous side reactions. The inhibition and activity of these enzymes can be part of our defensive arsenal. Thus, biotechnology is both the target and a potential weapon in the war on terrorism.

ACKNOWLEDGEMENTS

This work was partially funded by a research grant from the Army Research Office (DAAD19-02-1-0072) and by the U.S. Department of Defense Multidisciplinary University Research Initiative (MURI) Program administered by the Army Research Office (DAAD19-01-1-0619).
The principal author has an equity stake in Agentase, LLC.

REFERENCES

Cheng, T.-c., and J.J. Calomiris. 1996. A cloned bacterial enzyme for nerve agent decontamination. Enzyme and Microbial Technology 18(8): 597-601.
DeFrank, J.J., and T.-c. Cheng. 1991. Purification and properties of an organophosphorus acid anhydrase from a halophilic bacterial isolate. Journal of Bacteriology 173(6): 1938-1943.
Dumas, D.P., S.R. Caldwell, J.R. Wild, and F.M. Raushel. 1989. Purification and properties of the phosphotriesterase from Pseudomonas diminuta. Journal of Biological Chemistry 264(33): 19659-19665.
Longworth, T.L., J.C. Cajigas, J.L. Barnhouse, K.Y. Ong, and S.A. Procell. 1999. Testing of Commercially Available Detectors against Chemical Warfare Agents: Summary Report. Aberdeen Proving Ground, Md.: Soldier and Biological Chemical Command, AMSSB-REN.
Mazur, A. 1946. An enzyme in animal tissue capable of hydrolyzing the phosphorus-fluorine bond of alkyl fluorophosphates. Journal of Biological Chemistry 164: 271-289.
Munnecke, D.M. 1977. Properties of an immobilized pesticide-hydrolyzing enzyme. Applied and Environmental Microbiology 33: 503-507.
Munnecke, D.M. 1979. Hydrolysis of organophosphate insecticides by an immobilized-enzyme system. Biotechnology and Bioengineering 21(12): 2247-2261.
Russell, A.J., M. Erbeldinger, J.J. DeFrank, J. Kaar, and G. Drevon. 2002. Catalytic buffers enable positive-response inhibition-based sensing of nerve agents. Biotechnology and Bioengineering 77(3): 352-357.
An Engineering Problem-Solving Approach to Biological Terrorism

MOHAMED ATHHER MUGHAL
Homeland Defense Business Unit
U.S. Army Soldier and Biological Chemical Command (SBCCOM)
Aberdeen Proving Ground, Maryland

In the fall of 2001, anthrax-laced letters resulted in 22 confirmed infections and five deaths. Since then, the Federal Bureau of Investigation and local law-enforcement agencies have responded to thousands of reports of the use or threatened use of biological agents. Clearly, biological terrorism is a real and growing threat in the United States. But what is biological terrorism? And what approaches can we use to develop strategies to protect ourselves from this emerging threat?

BIOLOGICAL TERRORISM

The primary consequence of a large-scale biological terrorist attack would be a catastrophically large number of patients, and response systems must be capable of providing the appropriate types and amounts of medical treatments and services. However, the full spectrum of potential consequences goes far beyond medical casualties. A well-planned biological terrorist attack would strain public health surveillance systems; it would require that responders make quick, accurate disease identifications and medical diagnoses. By definition, biological terrorism is a criminal act, so an attack would entail a comprehensive criminal investigation. Depending on the biological agent used in an attack, residual environmental hazards may result. A significant portion of the population in the target area might have to be medically managed and physically controlled.

These diverse responses will require an integrated command-and-control system that includes federal, state, and local jurisdictions. In short, managing
Internet Security

WILLIAM R. CHESWICK
Lumeta Corporation
Somerset, New Jersey

One of the design principles of the Internet was to push the network intelligence to the "edges," to the computers that use the network rather than the network itself (Saltzer et al., 1984). Any given edge computer could send packets to any other edge host, leaving the details of packet delivery to the routers in the center of the network. This principle greatly simplified the design of the routers, which simply had to hot-potato packets toward the appropriate edge host as efficiently as possible. Routers could drop packets if there was congestion, leaving the edge hosts to provide reliable data delivery. Under this scheme, routers could relay packets with few complications, and they could be implemented in state-of-the-art hardware. Routers at the core of the Internet have benefited from Moore's Law and operated at its limits since the late 1970s.

This approach is in direct contrast to the standard telephone system, in which the intelligence resides in centrally controlled phone switches, and the edges have dumb terminals (i.e., standard telephones). In the phone system, the phone company invents and implements the technology. In the decentralized Internet approach, edge computers can install arbitrary new protocols and capabilities not envisioned by the designers of the Internet; the World Wide Web is the most obvious example.

As a result of this design, most current newspaper articles covering the Internet refer to things that happen at the edges. Viruses infect PC clients, worms attack network servers, and the Internet dutifully delivers denial-of-service attack packets to the edges. (For the security concerns of edge hosts, see Wagner, 2003.)

Most Internet edge hosts play the role of "clients" or "servers," despite the popular use of the term "peer-to-peer" networking. A client requests a connection, and a server provides the service. Servers usually take all comers and must be accessible to a large number of hosts, most of which are clients and are privately owned. Client hosts must connect to servers but aren't necessarily servers themselves, which makes them harder to attack.

A large majority of edge computers on the Internet are probably susceptible to attack and subversion at any given time. Because it is not economically feasible to harden all of these hosts, they are isolated from the rest of the Internet and many potential attacks by breaking the end-to-end model. It is impossible to mount a direct attack on a computer that cannot be reached. The trade-off between security and functionality does decrease the envisioned power and functionality of the Internet somewhat. Often, there is no satisfactory trade-off: the choice is between an unsafe service and no service at all.

Enclaves of edge computers are isolated topologically, through firewalls, routing tricks, and cryptography. At the same time, trusted communities of hosts, called intranets, allow businesses to operate with a reduced probability of successful attacks. This approach tends to create an organization with a hard outside and a soft, chewy center. Insider attacks can (and do) occur fairly often, and perimeter defenses often, even usually, contain holes.

FIREWALLS

Although network services can (and should) be hardened when necessary, the easiest defense is to get out of the game: turn off the service or render it unreachable to attacking hosts. This can be done by completely disconnecting the network from the Internet (as some high-security government networks have done) or by connecting it through a device that blocks or filters incoming traffic, called a firewall. Complete disconnection offers the best security, but it is very unpopular with most users, who may then seek to use sub rosa connections.
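A firewall's filtering behavior amounts to an ordered rule list with a default policy. The sketch below is a hypothetical rule set in Python, not any real product's syntax; actual firewalls also match on addresses, protocols, and connection state. It illustrates the first-match-wins evaluation and the default-deny posture that "gets out of the game" for every service not explicitly offered.

```python
# Minimal sketch of firewall-style packet filtering (hypothetical rules).
# Rules are checked in order; the first match wins, and anything unmatched
# falls through to a default-deny policy.

RULES = [
    # (action, destination port, direction)
    ("allow", 80,  "inbound"),   # public web server
    ("allow", 443, "inbound"),   # public web server (TLS)
    ("deny",  23,  "inbound"),   # explicitly block telnet from the Internet
]

def filter_packet(port, direction):
    for action, rule_port, rule_dir in RULES:
        if port == rule_port and direction == rule_dir:
            return action
    return "deny"  # default deny: unreachable unless explicitly allowed

print(filter_packet(443, "inbound"))   # allow
print(filter_packet(5900, "inbound"))  # deny: no rule, so default deny
```

The default-deny fallback is the important design choice: a service the administrator never thought about is simply unreachable, rather than exposed.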
Nearly all corporate networks, and portions of many university networks (which have traditionally been open and unfiltered), use firewalls of varying strictness to cleanse incoming packet flows. Most consumers also have firewalls in their cable modems, DSL routers, wireless base stations, or client computers.

Firewalls provide a central site for enforcing security policies. Many companies provide a few firewalls as gateways between the corporate intranet and the Internet, equipped with the rules and filters to enforce the company's security policies. Internet security expertise, which is scarce and expensive, is situated at a few centralized choke points. This lowers the costs of configuring perhaps hundreds of thousands of corporate hosts to resist software attacks from random strangers and malicious software ("malware").

Firewalls, a form of the classic perimeter defense, are supposed to offer the only connections between an intranet and the Internet, but long perimeter defenses can be difficult to monitor. A firewall may amount to circling the state of Wyoming, but modern corporate intranets can span the world. Therefore, it can be easy to add an unauthorized connection that breaches the perimeter without the knowledge of the centralized authority. Company business units and rogue employees may believe they have a compelling need, often a business need, to access the Internet in a way not permitted by corporate policy. For example, it may take time to get approvals for new services, and the new services may require filters that take time to install or may simply be impractical, overconstrained by a poorly designed protocol that doesn't admit of easy filtering. An employee who decides it is better to apologize later than to get permission now might make his or her own connection.

Unauthorized connections that pierce the corporate perimeter may come from business partners, misconfigured gateways, or newly acquired companies. Even if unauthorized connections have been installed for good business reasons, the reason, and the person who understood the reason, may be long gone by the time a problem is identified. A new network administrator may have to choose between (1) closing down a suspect Internet connection and risking a vital business service and (2) leaving the connection up and allowing invaders into the company.

Rogue connections are often mistakes. It is easy to misconfigure virtual private networks (VPNs), Internet tunnels linking disconnected islands of trust (e.g., linking remote office networks or home computers with headquarters). Company employees often connect to a corporate network using encryption to conceal their data, thus creating a VPN that can interconnect home and office. An employee can also create a hole in the corporate perimeter. (My company's principal business is to find these holes.)

If you are in charge of such a network, how can you deal with these problems? The most effective network administrators aggressively manage their networks. They control new and existing connections carefully.
They use mapping tools to detect new routes and connections. They are equipped with, and use, aggressive management controls; violators of policy can be, and are, fired and even prosecuted.

Perimeter defenses like firewalls can be effective, and there are periodic pop quizzes from the Internet itself that prove this. For example, if a widespread worm does not appear on an intranet, you may be reasonably sure that the perimeter does not have overt holes. Firewalls have limitations, of course. Attacks can come from inside an intranet, and many attacks or weaknesses are caused by stupidity rather than malice. (One government network administrator said he had more problems with morons than with moles.) An intranet must be hardened against internal attacks, which brings us back to host-based security.

ROUTING LIMITATIONS

You cannot directly attack a host you cannot reach. Besides firewalls, which can block access to a community of hosts, the community can also use network addresses that are not known to the Internet. If the Internet routers don't know how to reach a particular network, an Internet edge host probably cannot send packets to that network. A community with an official set of private addresses that are guaranteed never to be announced on the Internet can use these addresses to communicate locally, but the addresses are not reachable from the Internet. The community can still be attacked through intermediate hosts that have access to both private and public address spaces, and many viruses and worms spread this way.

Some attacks on the Internet come from host communities that are only announced on the Internet intermittently. The networks connect, the attacks occur, and then the networks disconnect before a pursuit can be mounted. To give a community using private address space access to the Internet, one uses network address translation.

NETWORK ADDRESS TRANSLATION

The Internet has grown, with very few tweaks, by more than nine orders of magnitude, a testament to its designers. But we are running uncomfortably close to one design limit of IP version 4, the technical description of the Internet and intranets and their basic protocols. We have only about three billion network addresses to assign to edge hosts; about half of them are currently accessible on the Internet, and more have been assigned. Thus, address space has gotten tight; by one common account, a class B network (roughly 65,000 IP addresses) has a street value of $1 million.

A solution that emerged from this shortage was a crude hack called "network address translation" (NAT). Many hosts hide on a private network, using any network address space they wish. To communicate with the real Internet, they forward their packets through a device that runs NAT, which translates the packet and sends it to the Internet using its own address as the return address.
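The translation just described can be sketched in a few lines. This is a simplified model in Python with illustrative addresses and ports; real NAT devices also rewrite checksums and track per-connection protocol state.

```python
import itertools

# Sketch of NAT address translation (simplified; addresses and ports
# are illustrative, and real devices also fix checksums and track state).

PUBLIC_IP = "203.0.113.5"            # the NAT device's single public address
_next_port = itertools.count(40000)  # pool of external ports to hand out
_table = {}                          # external port -> (internal ip, port)

def outbound(src_ip, src_port):
    """Rewrite an outgoing packet's source to the NAT's public address."""
    ext_port = next(_next_port)
    _table[ext_port] = (src_ip, src_port)
    return PUBLIC_IP, ext_port

def inbound(dst_port):
    """Map a returning packet back to the hidden internal host, if known."""
    return _table.get(dst_port)  # no mapping: the packet is simply dropped

# Two hosts on a private 192.168/16 network share one public address:
print(outbound("192.168.1.10", 5000))  # ('203.0.113.5', 40000)
print(outbound("192.168.1.11", 5000))  # ('203.0.113.5', 40001)
print(inbound(40001))                  # ('192.168.1.11', 5000)
print(inbound(9999))                   # None -- unsolicited packet dropped
```

Because the table is populated only by outbound traffic, an unsolicited packet from the Internet matches nothing and goes nowhere, which is the incidental security benefit discussed below.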
Returning packets are translated back to the internal address space and forwarded to the internal host. A home network full of game-playing children probably looks like one busy host from the Internet. Instead of the end-to-end model, in this model many hosts route their packets through a single host, sharing that host's network address. Only one, or one small set, of network addresses is used, conserving official Internet address space.

This model has had a useful implication for security. Hosts on the home network cannot be easily scanned and exploited from the outside. Indeed, it takes some work to detect NAT at all (Bellovin, 2002). Internal hosts enjoy considerable protection in this arrangement, and it is likely that some form of NAT will be used often, even when (if?) IP version 6 (a new protocol description with support for a much larger address space) is deployed. A special configuration is required on the NAT device to provide services to the Internet. The default configuration generally offers no services for this community of hosts, and safe defaults are good design.

EMERGING THREATS TO THE INTERNET

Although one can connect safely to the Internet without a topological defense, like skinny dipping, this involves an element of danger. Host security has to be robust, and the exposed machines have to be monitored carefully. This is exactly the situation faced by web servers. A server like www.budweiser.com must have end-to-end connectivity so customers can reach it. Often commercial web services have to connect to sensitive corporate networks (think of FedEx and its shipping database, or a server for online banking). These web sites must be engineered with great care to balance connectivity and security.

The Internet is a collaboration of thousands of Internet service providers (ISPs) using the TCP/IP protocols. There is somewhat of a hierarchy to the interconnections; major backbone providers interconnect at important network access points and feed smaller, more local ISPs.

The graphical and dynamic properties of the Internet are popular research topics. Some recent studies have suggested that the Internet is a scale-free network, which would have important implications for sensitivity to partitioning. But this conclusion overlooks two issues. First, the raw mapping data used to build graphical descriptions are necessarily limited. Internet mapping is not perfect, although it is getting better (Spring et al., 2002). Second, the most critical interconnections were not formed by random processes or by accident, as perhaps are other "six degrees of separation" networks. Internet interconnections, especially backbone connections, have been carefully engineered by their owners over the years and have had to respond to attacks and a variety of misadventures, occasionally backhoes.
These experiences have helped harden critical points in the network. (Network administrators do not like wake-up calls at 3 a.m., and unreliability is a bad component of a business model.)

But there are weaknesses, or at least obvious points of attack, in the core of the Internet. These include routing announcements, the domain name system (DNS), denial-of-service (DOS) attacks, and host weaknesses in the routers themselves.

Routing Announcements

A typical Internet packet goes through an average of 17 routers; the longest paths have more than 30 hops. A router's job is to direct an incoming packet toward the next hop on its way to its destination. For a router near an edge, this is simple: all packets are directed on a default path to the Internet. But routers
on the Internet need a table of all possible Internet destinations and the path on which to forward packets for each one.

A routing table is built from information exchanged among Internet routers using the border gateway protocol (BGP). When a network connects to or disconnects from the Internet, an announcement is sent toward the core routers; dozens of these routing announcements are generated every minute. If a core router accepts an announcement without checking it, it can propagate trouble into the Internet. For example, a router in Georgia once announced a path to MIT's networks through a local dial-up user. As a result, many parts of the Internet could not reach MIT until the announcement was rescinded. Such problems often occur on a smaller scale, and most ISPs have learned to filter routing announcements. Only MIT should be able to announce MIT's networks.

One can easily imagine an intentional attempt to divert or disrupt the flow of Internet traffic with false announcements or by interfering with BGP in some way. Because ISPs frequently have to deal with inadvertent routing problems, they ought to have the tools and experience to deal with malicious routing attacks, although it may take them a while, perhaps days, to recover.

The Domain Name System

Although the domain name system (DNS) is technically not part of the TCP/IP protocol, it is nearly essential for Internet operation. For example, one could connect to http://126.96.36.199/, but nearly all Internet users prefer http://www.amazon.com. DNS translates from name to number using a database distributed throughout the Internet. The translation starts at the root DNS servers, which know where to find .gov, .edu, .com, etc. These servers' databases contain long lists of subdomains (there are tens of millions in the .com database) that point to the name servers for individual domains.
For example, DNS tells us that one of the servers that knows about the hosts in amazon.com is at IP address 188.8.131.52.

DNS can be and has been attacked. Attackers can try to inject false responses to DNS queries (imagine your online banking session going to the wrong computer). Official databases can be changed with phony requests, which often are not checked very carefully. Some domains are designed to catch typographical errors, like www.anazon.com.

In October 2002, the root name servers came under massive denial-of-service (DOS) attacks (see below). Nine of the 13 servers went out of service; the more paranoid and robust servers kept working because they had additional redundancy (a server can be implemented on multiple hosts in multiple locations, which can make it very hard to take down). Because vital root DNS information is cached for a period of time, it takes a long attack before people notice problems.
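The caching that masks such outages is simple to model: a cached entry stays usable until its time-to-live (TTL) expires, so a short outage of the authoritative servers goes unnoticed. The sketch below is illustrative, not a real resolver; the names, address, and TTL value are made up.

```python
import time

class TTLCache:
    """Sketch of DNS-style caching: an answer remains valid for a fixed
    time-to-live, after which a fresh query to the server is required."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # name -> (value, expiry_time)

    def put(self, name, value, now=None):
        """Cache an answer; 'now' can be supplied explicitly for testing."""
        now = time.time() if now is None else now
        self.store[name] = (value, now + self.ttl)

    def get(self, name, now=None):
        """Return the cached answer, or None if missing or expired
        (an expired entry would force a re-query of the server)."""
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry is None or now > entry[1]:
            return None
        return entry[0]
```

With a long TTL on root data, an attack must outlast every resolver's cached copy before users see failures, which is why the October 2002 attack caused so little visible damage.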
Denial-of-Service Attacks

Any public service can be abused by the public. These days, attackers can marshal thousands of hacked computers, creating "botnets" or "zombienets." They can subvert large collections of weak edge hosts and run their own software on these machines. In this way, they can create a large community of machines to direct packets at a target, flooding the target computers, or even their network pipes, enough to degrade normal traffic. These packets can have "spoofed" return addresses that conceal their origins, making trace-back more difficult. DOS attacks occur quite often, frequently against politically unpopular targets; Amazon, Microsoft, and the White House are also popular targets. Some PC viruses spread and then launch DOS attacks.

The Internet infrastructure does not provide a means for trace-back or suppression of DOS attacks, although several techniques have been proposed. Targeted hosts can step out of the way of attacks by changing their network addresses. Added host and network capacity at the target is the ultimate protection.

Routers as Edge Hosts

Security problems for routers are similar to those for hosts at the edge of the network. Although routers' operating systems tend to be less well known and not as well understood as those of edge hosts, they have similar weaknesses. Cisco Systems produces most of the Internet routers, and common failures in their software can subject those routers to attack. (Of course, other brands also have weaknesses, but there are fewer of those routers.) Attacks on routers are known, and underground groups have offered lists of hacked routers for sale. The holes in routers are harder to exploit, but we will probably see more problems with hacked routers.

Botnets

Attackers are looking for ways to amplify their packets. The Smurf attacks in early 1998 were the first well-known attacks that used packet amplification (CERT Coordination Center, 1998).
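The arithmetic behind amplification is straightforward to sketch; the figures below are hypothetical, and the functions are an illustration of the scaling, not a model of any particular attack tool.

```python
def broadcast_amplification(hosts_responding):
    """One directed-broadcast request can elicit one reply per live host on
    the amplifier network, so the amplification factor is roughly the
    number of responders."""
    return hosts_responding

def rate_at_target(attacker_rate_mbps, amplification):
    """Reflected traffic arriving at the victim scales linearly with the
    amplification factor; the reflectors, not the attacker, appear as
    the packets' sources."""
    return attacker_rate_mbps * amplification
```

With an average amplification of only three, an attacker's traffic is merely tripled, which is why such networks are now barely useful to attackers; a broadcast network with hundreds of responding hosts multiplied the attacker's bandwidth hundreds of times.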
The attackers located communities of edge computers that would all respond to a single "directed-broadcast" packet. These "tickling" packets had spoofed return addresses naming the intended target. When community members all responded to the same tickling packet, the target was flooded with hundreds or even thousands of packets.

Most amplification networks now block directed-broadcast packets. At this writing, netscan.org has a list of some 10,000 broken networks with an average amplification of three, barely enough for a meaningful attack. So the hacking community began collecting long lists of compromised hosts and installing slave software on each one. These collections have many names, botnets and zombienets, for example. A single anonymous host can instruct these collections to attack a particular site. Even though the target knows the location of these bot hosts (unless they send packets with spoofed return addresses), there are too many to shut down. Worse, the master can download new software to the bots. They can "sniff" local networks, spread viruses or junk e-mail, or create havoc in other ways. The master may use public-key encryption and address spoofing to retain control over the botnet software and to hide the master's location.

The latest development is the use of botnets to hide the identity of web servers. One can put up an illegal web server, and traffic to the server can be laundered through random corrupted hosts on the Internet, making the actual location of the server quite difficult to find.

CONCLUSION

Most hypothesized Internet security problems have eventually appeared in practice. A recent one, as of this writing, was a virus that patches security problems as its only intended action; in other words, a "good" virus. (Of course, it had bugs, as most viruses do.) I first heard proposals for this approach some 15 years ago. Although well intentioned, it is a bad idea.

The good news is that nearly all Internet attacks, real or envisioned, involve flawed software. Individual flaws can be repaired fairly quickly, and it is probably not possible to take the Internet down for more than a week or so through software attacks. Attacks can be expensive and inconvenient, but they are not generally dangerous. In fact, the threats of cyberterrorism devalue the meaning of "terror."

There's more good news. The Internet continues to be a research project, and many experts continue to work on it. When a dangerous, new, and interesting problem comes along, many experts drop what they are doing and attempt to fix it.
This happened with the Morris worm in 1988 and the SYN packet DOS attacks in 1996, for which there was no known remediation at the time. Major attacks like the Melissa virus were quelled within a week. The success of the Internet has fostered huge economic growth. Businesses have learned to control the risks and make successful business models, even in the face of unreliable software and network connections. Insurance companies are beginning to write hacking insurance, although they are still pondering the possible impact of a widespread, Hurricane Andrew-like Internet failure. I think we have the tools and the experience to keep the Internet safe enough to get our work done, most of the time.
REFERENCES

Bellovin, S.M. 2002. A Technique for Counting NATted Hosts. Pp. 267-272 in Proceedings of the Second Internet Measurement Workshop, November 6-8, 2002, Marseilles, France.
CERT Coordination Center. 1998. CERT Advisory CA-1998-01: Smurf IP Denial-of-Service Attacks.
Saltzer, J.H., D.P. Reed, and D.D. Clark. 1984. End-to-end arguments in system design. ACM Transactions on Computer Systems 2(4): 277-288.
Spring, N., R. Mahajan, and D. Wetherall. 2002. Measuring ISP Topologies with Rocketfuel. Presented at ACM SIGCOMM '02, August 19-22, 2002, Pittsburgh, Pennsylvania.
Wagner, D. 2003. Software Insecurity. Presentation at the 2003 NAE Symposium on Frontiers of Engineering, September 19, Irvine, California.