Department of Homeland Security Bioterrorism Risk Assessment: A Call for Change

Appendix H
Game Theory and Interdependencies

Geoffrey Heal, Ph.D.
Paul Garrett Professor of Public Policy and Business Responsibility
Columbia University, New York, New York

Howard Kunreuther, Ph.D.
Cecilia Yen Koo Professor of Decision Sciences and Public Policy
University of Pennsylvania, Philadelphia, Pennsylvania

There are certain bad events that can occur only once. Death is the obvious example: an individual's death is irreversible and unrepeatable. More mundane examples are bankruptcy, being struck off a professional register, and other discrete events. In addition, there are events that could in principle occur twice but are so unlikely, or so dreadful, that one occurrence is all that can reasonably be considered. The events of September 11, 2001, are perhaps of this type; a set of coordinated anthrax attacks in several highly populated regions is another.

The fact that such events are typically probabilistic, taken together with the fact that the risk one agent faces is often determined in part by the behavior of others, gives a distinctive and hitherto unnoticed structure to the incentives agents face to reduce their exposure to these risks. The key point is that the incentive any agent has to invest in risk-reduction measures depends on how he or she expects others to behave. Where there are complementarities or positive externalities, an agent who expects others not to invest in security has a weaker incentive to do so; an agent who expects others to invest may find it best to invest as well. There may therefore be an equilibrium in which no one invests in protection, even though all would be better off had they incurred this cost. Yet this situation does not have the structure of a prisoner's dilemma game, even though it has some similarities.
A fundamental question that needs to be posed is "Do individuals and organizations invest in security to a degree that is adequate from either a private or social perspective?" In general the answer is no, for the reasons described below.

COMMON FEATURES OF THE PROBLEM

There are several different versions of this problem of interdependencies, and all have certain features in common. In what follows the outcome is assumed to be discrete and binary: a bad event either occurs or does not, and that is the full range of possibilities. You die or you live. A firm is bankrupt or not. An anthrax attack in a densely populated city succeeds or not. A plane crashes or not.

Another feature common to these interdependent problems is that the risk faced by one agent depends on the actions taken by others: there are externalities. The risk of an airline's plane being blown up by a bomb depends on the thoroughness with which other airlines inspect the bags they transfer to this plane. The risk that an anthrax attack on a city succeeds depends on the nature of our system for preventing, detecting, and responding to the threat of biological weapons.

Finally, there is a stochastic element in all of these situations. In contrast to the standard prisoner's dilemma paradigm, where the outcomes are specified with certainty, the interdependent security problem involves chance events. The question addressed is whether to invest in security when there is some probability, often a very small one, of a catastrophic event that could be prevented or mitigated. The risk depends in part on the behavior of others in the system, and the unfavorable outcome is discrete: it either happens or it does not.
IMPORTANCE OF PROBLEM STRUCTURE

These three factors (non-additivity of damages, dependence of risks on the actions of others, and uncertainty) are, as we shall see, sufficient to ensure that there can be equilibria at which there is underinvestment in risk-prevention measures. The precise degree of underinvestment depends on the nature of the problem. To illustrate the nature of interdependencies we focus on two examples: airline security and computer security.

NOTE: This appendix is based on material appearing in Heal and Kunreuther (2006).

If an airline accepts baggage that contains
a bomb, this need not damage one of its own planes: the bag may be transferred to another airline before it explodes. So in this framework one agent may transfer a risk fully to another. It may of course also receive a risk from another. There is a game of "pass the parcel" here: the music stops when the bomb explodes. It can explode only once, so only one plane will be destroyed.

The structure of this game is quite different in the case of computer networks. Here it is commonly the case that if a virus (or hacker) enters the network through one weak point, it (or he or she) then has relatively easy access to the rest of the network and can damage all other computers as well as the entry machine (Kearns, 2005). In this case the bad outcome has a characteristic similar to a public good: its consumption is non-rivalrous. Its capacity to damage is not exhausted after it has inflicted damage once. A bomb, in contrast, has a limited capacity to inflict damage, and this capacity is exhausted after one incident.

The computer network problem is similar to what might happen in a bioterrorist attack, such as one using anthrax or smallpox, where contamination can spread across individuals. Even if an individual or firm has taken protective actions, there is still some chance of being contaminated or infected by others who have not undertaken similar measures and hence are at risk. For example, a person who has been vaccinated or has taken preventive medicine against a disease may still contract the illness from others who have the disease if the vaccine or medicine is not 100% effective. In these cases, where an individual's protective measures create complementarities or positive externalities, one unit has more incentive to invest in protective measures if the other units have taken similar actions.
In fact, investing in security is most effective if all elements of the system obtain protection, and weak links may lead to suboptimal behavior by everyone. In both the airline and computer security problems, the incentives depend on what others do. Suppose that there are a large number of agents in the system. In Kunreuther and Heal (2003) we show that in the computer security problem, if none of the other machines are protected against viruses or hackers, then the incentive for any agent to invest in protection approaches zero. For airline security, if no other airline has invested in baggage checking systems and there is a high probability that bags will be transferred from one airline to another, the expected benefit to any airline from this investment approaches 63% of what it would have been in the absence of contagion from others. As we show below, there can be a stable equilibrium where all agents choose not to invest in risk-reduction measures, even though all would be better off if they did invest.

An interesting property of some of these equilibria is the possibility of tipping, as described by Schelling (1978). How can we ensure that if enough agents invest in security, all the others will follow suit? In some cases there may be one agent occupying such a strategic position that if it changes from not investing to investing in protection, then all others will find it in their interests to do the same. And even if there is no single agent that can exert such leverage, there may be a small group. Obviously this finding has significant implications for policy-making: it suggests that there are some key players whom it is particularly important to persuade to manage risks carefully. Working with them may be a substitute for working with the population as a whole.

CHARACTERIZING THE PROBLEM: TWO-AGENT PROBLEM

We now set out formally the framework used to study interdependent security (henceforth denoted IDS).
Consider two identical airlines, A1 and A2, each having to choose whether or not to invest in a baggage screening system. Each faces a risk of a bomb exploding on its plane, causing a loss of L. There are two possible ways in which damage can occur: a bomb can explode either in a bag initially checked onto the airline's own plane or in a bag transferred from the other airline.

The probability of a bomb exploding in luggage initially checked on a plane of an airline that has not invested in security is p; the expected loss from this event is pL. If the airline has invested in security precautions, this risk is assumed to be zero. Even if an airline has invested in a baggage screening system, there is still an additional risk of loss due to contagion from the other airline if it has not invested in security. The probability of a dangerous bag being accepted by one airline and then being transferred to the other is denoted by q: the likelihood that on any trip a dangerous bag is loaded onto the plane of one airline and is then transferred to another airline, where it explodes. We assume that there is not enough time for an airline to examine the bags from another airline's plane before they are loaded onto its own plane.

These probabilities are interpreted as follows. On any given trip there is a probability p that an airline without a security system loads a bomb that explodes on one of its own planes. Thorough scanning of the baggage an airline checks onto its own plane will prevent damage from these bags, but there could still be an explosive in a bag transferred from another airline; this additional risk of loss due to contagion from another agent who has not invested in loss prevention is q. If there are n ≥ 2 airlines, the probability per trip that such a bag will be transferred from airline i to airline j is q/(n − 1).
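This probability accounting can be sketched in a few lines of code. The following is our own illustration, not part of the original analysis; the parameter values are assumed and chosen to match the numerical example used later in the text.

```python
def loss_probability(invested: bool, other_invested: bool, p: float, q: float) -> float:
    """Per-trip probability that this airline's plane is destroyed.

    p: probability a bomb in the airline's own checked bags explodes,
       if it has not invested in screening.
    q: probability a dangerous bag accepted by the other (unscreened)
       airline is transferred here and explodes.
    Screening eliminates the risk from the airline's own bags but not the
    contagion risk from an unscreened partner. Damage can occur only once,
    so the contagion term is weighted by the survival factor (1 - own risk).
    """
    own = 0.0 if invested else p
    contagion = 0.0 if other_invested else (1.0 - own) * q
    return own + contagion

# Illustrative values (assumed): p = .2, q = .1
p, q = 0.2, 0.1
print(round(loss_probability(True, True, p, q), 4))    # 0.0  (both screen)
print(round(loss_probability(True, False, p, q), 4))   # 0.1  (contagion risk only)
print(round(loss_probability(False, False, p, q), 4))  # 0.28 (p + (1 - p)q)
```

The last value reproduces the expected-loss weighting pL + (1 − p)qL that appears in the payoff table below: the contagion risk matters only when the airline does not first suffer damage originating at home.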
Note that the probability per trip that a bag placed on an airline without a security system will explode in the air is p + q. We assume throughout that the damages resulting from multiple security failures are no more severe than those resulting from a single failure; in other words, damages are not additive. In the airline baggage scenario, this amounts to assuming that one act of terrorism is as serious as several. In reality, having two bombs explode on a plane is
no more damaging than just one. The key issue is whether or not there is a failure, not how many failures there are. Indeed, as the probabilities are so low, single occurrences are all that one can reasonably consider. One could define a catastrophe as an event so serious that it is difficult to imagine an alternative event with greater consequences.

We focus first on the case of two airlines, each of which is denoted as an agent. This example presents the basic intuitions in a simple framework. We then turn to the multi-agent case.

To illustrate the framework in the context of a real-world event, consider the destruction of Pan Am flight 103 in 1988. In Malta, terrorists checked a bag containing a bomb on Air Malta, which had minimal security procedures. The bag was transferred at Frankfurt to a Pan Am feeder line and then loaded onto Pan Am 103 at London's Heathrow Airport. The transferred piece of luggage was not inspected at either Frankfurt or London, the assumption in each airport being that it had been inspected at the point of origin. The bomb was designed to explode above 28,000 feet, a height normally first attained on this route over the Atlantic Ocean. Failures in a peripheral part of the airline network, Malta, compromised the security of a flight leaving from a core hub, London.

Assume that each airline has two choices: to invest in baggage screening, S, or not to do so, N. Table H.1 shows the payoffs to the agents for the four possible outcomes. Here Y is the income of each airline before any expenditure on security or any losses from the risks faced. The cost of investing in security is c. The rationale for these payoffs is straightforward. If both airlines invest in security, then each incurs a cost of c and faces no losses from damage, so that their net incomes are Y − c.
If A1 invests and A2 does not (top right entry), then A1 incurs an investment cost of c and also runs the risk of a loss from damage emanating from A2. The probability of A2 contaminating A1 is q, so A1's expected loss from damage originating elsewhere is qL. This cost represents the negative externality imposed by A2 on A1. In this case A2 incurs no investment costs and faces no risk of contagion but does face the risk of damage originating at home, pL. The lower left payoffs are just the mirror image of these. If neither airline invests, then both have an expected payoff of Y − pL − (1 − p)qL. The term pL here reflects the risk of damage originating at one's own airline. The term qL, showing the expected loss from damage originating at the other airline, is multiplied by (1 − p) to reflect the assumption that the damage can occur only once. So the risk of contagion matters to an airline only when that airline does not suffer damage originating at home.

TABLE H.1 Expected Costs Associated with Investing and Not Investing in Airline Security

                               Airline 2 (A2)
                        S                        N
Airline 1 (A1)   S   Y − c, Y − c            Y − c − qL, Y − pL
                 N   Y − pL, Y − c − qL      Y − [pL + (1 − p)qL], Y − [pL + (1 − p)qL]

NOTE: S, screening of baggage; N, no screening.

The conditions for investing in security to be a dominant strategy are c < pL and c < p(1 − q)L. The first constraint is exactly what one would expect if there were only a single airline: the cost of investing in security must be less than the expected loss. Adding a second airline tightens the constraint to reflect the possibility of contagion. This possibility reduces the incentive to invest in security. Why? Because in isolation, investment in security buys the airline complete freedom from risk. With the possibility of contagion it does not: even after investment there remains a risk of damage emanating from the other airline. Investing in security buys you less when there is the possibility of contagion from others.
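As a check on this logic, the payoff entries of Table H.1 and the dominant-strategy conditions can be encoded directly. This sketch is ours, not the authors'; income Y is normalized to zero, so the entries are expected net costs.

```python
def payoffs(a1_invests: bool, a2_invests: bool, p: float, q: float, L: float, c: float):
    """Expected payoffs (A1, A2) following Table H.1, with Y normalized to 0."""
    def one(me: bool, other: bool) -> float:
        cost = c if me else 0.0
        own = 0.0 if me else p * L            # expected loss from own checked bags
        survival = 1.0 if me else (1.0 - p)   # damage can occur only once
        contagion = 0.0 if other else survival * q * L
        return -(cost + own + contagion)
    return one(a1_invests, a2_invests), one(a2_invests, a1_invests)

def investing_is_dominant(p: float, q: float, L: float, c: float) -> bool:
    """The text's two conditions: c < pL and c < p(1 - q)L."""
    return c < p * L and c < p * (1.0 - q) * L

def nash_equilibria(p: float, q: float, L: float, c: float):
    """Enumerate pure-strategy Nash equilibria of the 2x2 game by best-response checks."""
    eq = []
    for s1 in (True, False):
        for s2 in (True, False):
            u1, u2 = payoffs(s1, s2, p, q, L, c)
            if (u1 >= max(payoffs(d, s2, p, q, L, c)[0] for d in (True, False))
                    and u2 >= max(payoffs(s1, d, p, q, L, c)[1] for d in (True, False))):
                eq.append(("S" if s1 else "N", "S" if s2 else "N"))
    return eq

# With the text's illustrative numbers (p = .2, q = .1, L = 1000, c = 185):
print(investing_is_dominant(0.2, 0.1, 1000.0, 185.0))  # False: 185 exceeds pL(1 - q) = 180
print(nash_equilibria(0.2, 0.1, 1000.0, 185.0))        # [('S', 'S'), ('N', 'N')]
```

The enumeration confirms the claim made next: with these parameters investing is not dominant, and both airlines investing and neither investing are the two equilibria.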
This solution concept is illustrated with a numerical example. Suppose that p = .2, q = .1, L = 1000, and c = 185. The matrix in Table H.1 is then represented as Table H.2. One can see that if A2 has protection (S), then it is worthwhile for A1 also to invest in security, since its expected losses will be reduced by pL = 200 and it will only have to spend 185 on the security measure. However, if A2 does not invest in security (N), then there is still a chance that A1 will incur a loss even if it invests. Hence the benefit of security to A1 is only pL(1 − q) = 180, which is less than the cost of the protective measure, so A1 will not want to invest in protection. In other words, either both airlines invest in security or neither of them does so. These are the two Nash equilibria for this problem.

TABLE H.2 Expected Costs Associated with Investing and Not Investing in Airline Security: Illustrative Example

                               Airline 2 (A2)
                        S                   N
Airline 1 (A1)   S   Y − 185, Y − 185    Y − 285, Y − 200
                 N   Y − 200, Y − 285    Y − 280, Y − 280

NOTE: S, screening of baggage; N, no screening.

THE MULTI-AGENT IDS CASE

The results for the two-agent case carry over to more general settings with some increase in complexity. In this section we briefly review the main features of the general case, without providing detailed proofs of the results. These can be found in Kunreuther and Heal (2003). Two key points emerge from the discussion of the general case with respect to the IDS problem. One is
that the main feature of the two-agent case carries over to n agents: the incentive that any agent faces to invest in security depends on how many other agents there are and on whether or not they are investing. Other agents who do not invest reduce the expected benefits of one's own protective actions and hence reduce an agent's incentive to invest. Second, a new possibility emerges in the multi-agent case: a tipping phenomenon.1 In some cases there may be one firm occupying such a strategic position that if it changes from not investing to investing in protection, then all others will find it in their interests to follow suit. And even if there is no single firm that can exert such leverage, there may be a small group. Heal and Kunreuther (2007) show when this can happen and how to characterize the agents with the greatest leverage. Obviously this point has considerable implications for policy-making: it suggests that there are some key players whom one needs to persuade to manage risks carefully.

EXTENDING THE ANALYSIS

The choice of whether to protect against events where there is interdependence between your actions and those of others raises a number of interesting theoretical and empirical questions. We mention some of them in this section.

Differential Costs and Risks

The nature of the Nash equilibria for the problems considered above, and the types of policy recommendations, may change as one introduces differential costs across the agents who are considering whether or not to invest in security. Consider each airline deciding whether to invest in a baggage security system. In Heal and Kunreuther (2007) we have shown that if there are differential costs and/or risks between companies, we would expect to find some airlines investing in baggage security systems and others not doing so.
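With heterogeneous costs, the tipping phenomenon can be illustrated by a small simulation. The benefit expression below is an assumed n-agent generalization suggested by the two-agent analysis (an agent's benefit from investing is pL times (1 − q/(n − 1)) for each non-investing other agent, which reduces to pL and pL(1 − q) in the two-agent case), and the cost values are invented for illustration.

```python
def benefit(noninvesting_others: int, n: int, p: float, q: float, L: float) -> float:
    """Expected benefit to one agent of investing, given how many of the
    other n - 1 agents are not investing (assumed functional form)."""
    return p * L * (1.0 - q / (n - 1)) ** noninvesting_others

def cascade(seed: set, costs: list, n: int, p: float, q: float, L: float) -> set:
    """Monotone best-response cascade: starting from a coalition committed
    to invest, repeatedly add any agent whose cost is now below its benefit.
    Adding investors only raises everyone else's benefit, so this converges."""
    investing = set(seed)
    while True:
        joiners = {i for i in range(n) if i not in investing
                   and costs[i] < benefit(n - len(investing) - 1, n, p, q, L)}
        if not joiners:
            return investing
        investing |= joiners

# Five agents with made-up heterogeneous costs; p = .2, q = .4, L = 1000.
costs = [135.0, 140.0, 155.0, 175.0, 195.0]
print(cascade(set(), costs, 5, 0.2, 0.4, 1000.0))  # set(): no one invests on their own
print(cascade({0}, costs, 5, 0.2, 0.4, 1000.0))    # {0, 1, 2, 3, 4}: agent 0 tips everyone
```

Starting from no investment, every cost exceeds the benefit and no one moves; committing the single lowest-cost agent raises the others' benefits enough to trigger a full cascade, which is exactly the strategic-position leverage described above.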
Furthermore, as we discussed above, the airline that creates the largest negative externalities for others should be encouraged to invest in protective behavior, not only to reduce these losses but also to make it profitable for other airlines to follow suit, thus inducing tipping behavior.

Multi-Period and Dynamic Models

Deciding whether to invest in security normally involves multi-period considerations, since there is an upfront investment cost that must be compared with the benefits over the life of the protective measure. An airline that invests in a baggage security system knows that this measure promises benefits for a number of years. Hence one needs to discount these positive returns by an appropriate interest rate and specify the relevant time horizon in determining whether or not to invest. There may be uncertainty with respect to both of these parameters. From the point of view of dynamics, the decision to invest depends on how many others have taken similar actions. How does one get the process of investing in security started? Should one subsidize or provide extra benefits to those willing to be innovators, so as to encourage others to take similar actions?

Endogenous Probabilities

The analysis above assumed that the risks faced by the airlines are independent of their own behavior. In reality, if some airlines are known to be more security-conscious than others, they are presumably less likely to be terrorist targets. In this sense the problem of investing in security has similarities to the problem of theft protection: if a house announces that it has installed an alarm, then burglars are likely to turn to other houses as targets. In the case of airline security, terrorists are more likely to focus on targets that are less well protected. This is the phenomenon of displacement or substitution, documented in Sandler (2005).
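The displacement effect can be made concrete with a stylized model of our own (not from the text): a fixed total attack probability is allocated across targets in proportion to their "attractiveness," and investing in protection scales a target's attractiveness down by a factor r. All parameter values here are assumed.

```python
def attack_probability(i: int, invested: list, total: float = 0.3, r: float = 0.2) -> float:
    """Endogenous per-target attack probability: the total attack likelihood
    is split in proportion to attractiveness, and protection multiplies a
    target's attractiveness by r < 1 (a stylized displacement model)."""
    weights = [r if inv else 1.0 for inv in invested]
    return total * weights[i] / sum(weights)

# Four targets; target 0 stays unprotected while the others invest one by one.
# Its attack probability rises, strengthening its own incentive to invest.
for k in range(4):
    others = [True] * k + [False] * (3 - k)
    print(k, attack_probability(0, [False] + others))
```

This is the mechanism behind the claim that follows: when probabilities are endogenous, each additional investor shifts risk onto the remaining unprotected targets, raising their incentive to invest as well.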
Keohane and Zeckhauser (2003) and Bier (2007) also consider the case of endogenous terrorist risks. For the airline security problem, Heal and Kunreuther (2007) show that an airline is more likely to invest in security when probabilities are endogenous than when they are exogenous, because of the increased likelihood of being a target when others invest in protection. In addition, if one makes the reasonable assumption that the total externality imposed on any non-investing firm decreases as the number of investing firms increases, then this should lead more firms to invest in protection. For both these reasons it should also be easier for a coalition to tip the other firms into investing in security than if the probabilities were exogenous. Future research should examine how changes in endogenous probabilities affect IDS solutions and the appropriate strategies for improving individual and social welfare.

Behavioral Considerations

The models discussed above all assumed that individuals make their decisions by comparing their expected benefits with and without protection to the costs of investing in security. This is a rational model of behavior. As pointed out in Chapter 2 of this report, there is a growing literature in behavioral economics suggesting that individuals make choices in ways that differ from the rational model of choice. With respect to protective measures, there is evidence from controlled field studies and laboratory experiments that many individuals are not willing to invest in security for reasons that include myopia, high discount rates, and budget constraints (Kunreuther et al., 1998).

1 See Schelling (1978) for a characterization of a number of tipping problems.

In the models
considered above there were also no internal positive effects associated with protective measures. Many individuals invest in security to relieve anxiety and worry about what they perceive might happen to them or to others, so as to gain peace of mind (Baron et al., 2000). A more realistic model of interdependent security that incorporated these behavioral factors, as well as people's misperceptions of the risk, might suggest a different set of policy recommendations than a rational model of choice.

FUTURE RESEARCH ON RISK MANAGEMENT STRATEGIES FOR IDS PROBLEMS

We conclude by suggesting a set of problems that involve interdependent security and the types of risk management strategies that could be explored for addressing them.

Types of Problems

The common features of IDS problems are the possibility that other agents can contaminate you and your inability to reduce this type of contagion through investing in security. You are thus discouraged from adopting protective measures when you know others have decided not to take this step. Here are some problems that fit into this category, some of which have been discussed in this paper:

- Investing in airline security
- Protecting against bioterrorist attacks
- Protecting against chemical and nuclear reactor accidents
- Making buildings more secure against attacks
- Investing in sprinkler systems to reduce the chance of a fire in one's apartment
- Making computer systems more secure against terrorist attacks
- Investing in protective measures for each part of an interconnected infrastructure system, such as electricity, water, or gas, so that services can be provided to victims of a disaster

In each of these examples there are incentives for individual units or agents not to take protective measures, yet there are large potential losses to the unit making the decision (e.g., an individual, organization, or city) as well as to society.
In the case of bioterrorism, if each unit takes protective action, it creates positive externalities for others in the system and for society. Furthermore, the losses from these events are sufficiently high that they are considered to be non-additive. One can only get a specific disease once (e.g., smallpox, anthrax); an airplane can only be destroyed once; a building can only collapse once. You can only die once!

These IDS problems can be contrasted with others that do not have these features. One, discussed in more detail in Kunreuther and Heal (2003), is theft protection, where your taking protection imposes negative externalities on others: if you install an alarm system that you announce publicly with a sign, the burglar will look for greener pastures to invade.2

Risk Management Strategies

For each IDS problem there is a range of risk management strategies that can be pursued by the private and public sectors to encourage agents to invest in cost-effective protective measures:

- Collecting information on the risk and costs (e.g., constructing a scenario so that one can estimate p, q, L, and c with greater accuracy);
- Developing more accurate catastrophe models for examining the risk of terrorist attacks and other large-scale disasters;3
- Designing incentive systems (e.g., subsidies or taxes) to encourage investment by agents in protective measures;
- Developing insurance programs that encourage investment in protective measures when firms are faced with contagion;
- Structuring the liability system to deal with the contagion effects of IDS;
- Carefully designing standards (e.g., building codes for high-rises to withstand future terrorist attacks) that are well enforced through mechanisms such as third-party inspections;
- Introducing federal reinsurance or state-operated pools to provide protection against future losses from terrorist attacks, supplementing private terrorist insurance.
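As a sketch of the incentive-design strategy listed above, one can compute the smallest subsidy that makes screening a dominant strategy in the two-airline model, i.e., that restores c − s < pL(1 − q). This is our own illustration, using the parameters of the Table H.2 example.

```python
def minimal_subsidy(p: float, q: float, L: float, c: float) -> float:
    """Smallest subsidy s such that c - s falls below p(1 - q)L, the binding
    dominant-strategy condition when the other airline does not invest.
    Returns 0 if investing is already dominant."""
    return max(0.0, c - p * (1.0 - q) * L)

# Table H.2 parameters: p = .2, q = .1, L = 1000, c = 185.
# Here pL(1 - q) = 180, so any subsidy exceeding 5 makes screening dominant.
print(round(minimal_subsidy(0.2, 0.1, 1000.0, 185.0), 6))  # 5.0
```

The same threshold could equivalently be implemented as a tax of the same size on not investing, or as the premium discount in an insurance program; all three instruments shift the cost comparison by the same amount.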
It may be desirable to integrate several of these measures through public-private risk management partnerships. For example, banks and financial institutions could require that firms adopt security measures as a condition for a loan or mortgage. To ensure that these measures are adopted, there may be a need for third-party inspections or audits by the private sector. Firms that reduce their risks can be rewarded through lower insurance premiums. If there are federal or state reinsurance pools at reasonable prices to cover large losses from a future terrorist attack, then private insurers may be able to provide terrorist coverage at affordable premiums.

2 One could make a similar argument with respect to cities taking protective measures against bioterrorism. For example, if certain cities were equipped with sensors to detect biological attacks, the terrorist might focus his or her attention on those urban areas that did not have this form of protection.

3 For more details on the challenges in developing catastrophe models and appropriate strategies for dealing with them, see Grossi and Kunreuther (2005).
REFERENCES

Baron, J., J. Hershey, and H. Kunreuther. 2000. "Determinants of Priority for Risk Reduction: The Role of Worry." Risk Analysis 20(4):413-427.

Bier, V. 2007. "Choosing What to Protect." Risk Analysis 27(3):607-620.

Grossi, P., and H. Kunreuther. 2005. Catastrophe Modeling: A New Approach to Managing Risk. New York: Springer.

Heal, G., and H. Kunreuther. 2006. "You Can Only Die Once: Interdependent Security in an Uncertain World." In The Economic Impacts of Terrorist Attacks, H.W. Richardson, P. Gordon, and J.E. Moore, III (eds.). Northampton, Mass.: Edward Elgar.

Heal, G., and H. Kunreuther. 2007. "Modeling Interdependent Risks." Risk Analysis 27(3):621-633.

Kearns, M. 2005. "Economics, Computer Science and Policy." Issues in Science and Technology, Winter:37-47.

Keohane, N., and R. Zeckhauser. 2003. "The Ecology of Terror Defense." Journal of Risk and Uncertainty, Special Issue on Terrorist Risks 26(2/3):201-229.

Kunreuther, H., and G. Heal. 2003. "Interdependent Security." Journal of Risk and Uncertainty, Special Issue on Terrorist Risks 26(2/3):231-249.

Kunreuther, H., A. Onculer, and P. Slovic. 1998. "Time Insensitivity for Protective Measures." Journal of Risk and Uncertainty 16(3):279-299.

Sandler, T. 2005. "Collective Action and Transnational Terrorism." The World Economy 26(6):779-802.

Schelling, T. 1978. Micromotives and Macrobehavior. New York: Norton.