Challenges to Risk Analysis for Homeland Security
This chapter discusses challenges facing the Department of Homeland Security (DHS) in the two domains of natural hazard and terrorism risk analysis. The analysis of natural hazard risks is reasonably mature and is derived from both historical data and physics-based modeling. The analysis of terrorism risk is less mature, and it lacks both historical validating data and sociological theory on which to base quantitative models. A summary of these challenges is presented in Box 3-2 at the end of the chapter.
COMPARISON OF RISK ASSESSMENT OF NATURAL HAZARDS AND OF TERRORISM
Compared to risk analysis for countering terrorism, the analysis of natural hazard risks is well understood. For natural hazards there typically exist large historical datasets—although climate change, urbanization, evolution in the constructed environment, and so on can undercut the usefulness of those data—and a partial understanding of the physical processes involved. There are standard statistical techniques for these problems and decades of validation. There exists an understanding of model limitations, uncertainties, and the applicability of risk methodology to policy-relevant questions.1 For example, social consequences of natural hazards are poorly understood, but research is under way to close the gap.2 To a large extent, risk analysis methods for natural hazards reflect the principles of good practice embodied in National Research Council (NRC) reports on risk analysis and management in federal agencies (e.g., NRC, 1983).
This is not to say that such risk analyses are always straightforward. For some rare yet highly consequential events, such as compounding cascading hazards, the analysis of natural hazard risk can become very challenging because historical data are inadequate and/or minor changes in assumed conditions lead to orders-of-magnitude differences in risk (see Box 3-1).
Cascading Natural Hazard Risk Can Pose Analysis Problems as Challenging as Those Associated with Terrorist Risk: The Case of the Sacramento-San Joaquin Bay Delta
The Sacramento-San Joaquin Bay Delta lies at the confluence of the Sacramento and San Joaquin rivers east of San Francisco. It comprises a low-lying agricultural district some 100 km by 50 km in extent, through which flows most of the runoff of the western slopes of the Sierra Nevada range. Passing through the delta, the Sierra runoff flows into the brackish San Francisco Bay and finally to the Pacific. The delta hosts some 1,800 km of levees and sloughs (the local term for canals), constructed mostly in the late nineteenth century by immigrant farmers, that are extremely fragile and create a system of below sea-level islands and wetlands.
The peat soils of the delta make the region among the most fertile agricultural areas in the world, contributing billions of dollars annually to the nation’s economy. The delta also hosts the intake forebays for the California Aqueduct, carrying 60 percent of the fresh water supply for the desert-like Los Angeles region. Without the fresh water originating from the delta, Southern California would face a desperate potable water supply situation.
The delta also lies alongside the San Andreas Fault belt and is potentially subject to large peak ground accelerations should earthquakes occur along the eastern side of that belt. This natural event, which is not improbable compared to many natural hazards, would likely breach many kilometers of fragile levees; foster rapid saltwater intrusion into the delta from Suisun Bay, the easternmost extension of San Francisco Bay; and potentially compromise the quality of water entering the aqueduct. If that intrusion were sufficiently saline, a shutdown of the intakes would be necessary.
Were this calamity to happen after the spring melt in the Sierras, it could be nine months before water transfer to Los Angeles resumed. The economic and social impact of this cascading natural event would be unprecedented. Being unprecedented, it is not an event for which there are adequate historical data from which to assess risk. It is a low-probability, high-consequence risk in the natural domain, and analyzing the risk shares many features with risk analysis for counter-terrorism.
In contrast, for risk assessment of terrorism threats, particularly with respect to exceedingly rare or never-observed events, the historical record is essentially nonexistent, and there is poor understanding of the sociological forces from which to develop assessment techniques. Because of the presence of a thinking (intelligent) adversary, there is an inherent dependence among the three terms threat, vulnerability, and consequence (T, V, and C), and threat is difficult to express as a simple probability. An intelligent adversary will exploit opportunities where vulnerabilities and consequences are high; thus, the probabilities of threats change as we take actions to harden targets or protect the public. This substantially complicates the risk analysis. Where threats, vulnerabilities, and consequences are not independent, risk analysis must estimate the joint probability of the correlated T, V, and C terms, which further complicates the estimation of risk and its uncertainty. In this case risk can be evaluated as Risk = f(T,V,C) but cannot be evaluated as the simpler product Risk = T × V × C.
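The effect of this dependence can be illustrated with a small Monte Carlo sketch. All distributions and the coupling between T, V, and C below are illustrative assumptions, not DHS data: when an adaptive adversary preferentially attacks high-vulnerability, high-consequence targets, the expectation of the joint term E[T·V·C] diverges from the product of the marginal expectations E[T]·E[V]·E[C].

```python
import random

random.seed(1)

# Hypothetical illustration: the adversary steers attacks toward soft,
# high-value targets, so T is positively correlated with V and C, and
# E[T*V*C] no longer equals E[T]*E[V]*E[C].

N = 100_000
risk_joint = 0.0
t_sum = v_sum = c_sum = 0.0
for _ in range(N):
    c = random.uniform(0.0, 1.0)   # consequence (normalized)
    v = random.uniform(0.0, 1.0)   # vulnerability (prob. attack succeeds)
    t = 0.1 + 0.8 * v * c          # threat rises with vulnerability and value
    risk_joint += t * v * c
    t_sum += t; v_sum += v; c_sum += c

risk_joint /= N                                         # approx. 0.114
risk_product = (t_sum / N) * (v_sum / N) * (c_sum / N)  # approx. 0.075
print(f"E[T*V*C]       = {risk_joint:.3f}")
print(f"E[T]*E[V]*E[C] = {risk_product:.3f}")
```

Under this assumed coupling, the naive product understates risk by roughly a third; other couplings could make it overstate risk instead. The point is only that the two quantities differ once independence fails.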
Vulnerability studies for natural hazards are, in principle, little different from those used for terrorism risks. In the natural hazards case, vulnerability studies deal with the effects of wind, water, fire, or ground shaking, et cetera, on the built environment. In the terrorism case, the vulnerability studies deal with the effects of blast, vehicle impacts, et cetera. In both cases, the techniques of vulnerability evaluation are well understood.
The approach to risk management for some natural hazards might hold lessons for terrorism risk analysis. For example, our ability to define the threat from earthquakes has largely resisted the best efforts of scientists: we can estimate their likelihoods, based on historic records, but we cannot find signals that allow us to predict when and where one will actually strike. Therefore, risk management has focused more on reducing vulnerabilities and increasing resilience. Improvements to resilience increasingly leverage knowledge from the social sciences and include public communications efforts. As more is learned about methods to increase resilience, dual benefits might accrue.
A notable contrast between risk analysis for natural hazards and for counter-terrorism is that public perception of the consequences is distorted. One telling example is that during the same year as the Oklahoma City Federal Building bombing (1995) in which 168 people perished, approximately 600 people died in a five-day period in Chicago due to unseasonable heat. Many Americans can remember where they were at the time of the Murrah Building bombing, but few even recall the deaths in Chicago.
RISK ANALYSIS FOR COUNTERTERRORISM IS INHERENTLY MORE DIFFICULT THAN RISK ANALYSIS FOR NATURAL HAZARDS
Risk analysis for natural hazards is based on a foundation of data. For terrorism risk analysis, neither threats nor consequences are well characterized by data. Risk analysis for terrorism involves an open rather than a closed system (Turner and Pidgeon, 1997): virtually anyone can be a participant (ranging from intentionally malevolent actors, to bystanders who may respond in ways that make a situation either better or worse), and parts of the system can be used in ways that are radically different from those for which they were designed (e.g., aircraft as weapons, rather than means of transportation). Also, terrorism, unlike natural disasters, involves intentional actors. Not only are many terrorist threats low-likelihood events, but their frequency is evolving rapidly over time, as terrorists observe and respond to defenses and to changing political conditions.
Thus, it will rarely be possible to develop statistically valid estimates of attack frequencies (threat) or success probabilities (vulnerability) based on historical data.3
Data scarcity and reliability are serious issues when attempting to assign probabilities to the threat of foreign terrorism. Despite intense efforts by the intelligence community, threat data can be episodic and too general to eliminate uncertainty. While the risk models reviewed during this study assign probabilities based on attack scenarios, it is rare for intelligence reporting to outline the attacker, the target, the technique, and the timing, so pieces of information that imply something about threats have to be found and spliced together.
Challenges of Modeling Intentional Behavior
While terrorist choices may sometimes be modeled as random processes, terrorist events do not in general occur randomly. Rather, they reflect willful human behavior. Cox (2008) has written about the limitations of viewing threat, vulnerability, and consequences as independent concepts in such circumstances. The community of risk analysts is coming to grips with what this means for risk-analysis methodology (see, for example, the Biological Threat Risk Assessment [BTRA] and the NRC review of the BTRA [NRC, 2008]). Some experts believe that there is a place for traditional risk analysis (e.g., quantitative risk analysis [QRA] or probabilistic risk analysis [PRA]) when used with suitable caveats. One example (von Winterfeldt and O’Sullivan, 2006) involves an attack scenario constrained enough that probabilistic modeling of the threat is reasonable. Yet others believe that such methods are generally inapplicable to intentional threats and need to be replaced by game-theoretic models (e.g., Bier, 2005; Bier et al., 2007; Brown et al., 2005, 2006; Zhuang and Bier, 2007).
This is a controversial area within the technical community. Fully game-theoretic methods are not yet developed for use on problems with realistic levels of complexity, and the assumptions of defender-attacker models can be open to question. However, it is also clear that traditional risk methods fail to capture important aspects of intentional attacks. In particular, risk analyses that do not reflect the ability of terrorists to respond to observed defensive actions tend to overstate the effectiveness of those actions if they ignore the ability of terrorists to switch to different targets and/or attack strategies, or to understate their effectiveness if they ignore the possibility of deterrence.
Therefore, better methods need to be found for incorporating the intentional nature of terrorist attacks into risk analyses, even if done judgmentally or by iterating the results of the analysis to reflect terrorist responses (Dillon et al.,
2009; Paté-Cornell and Guikema, 2002). Further research on game-theoretic and quasi-game-theoretic methods would also be desirable.
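The kind of iteration described above can be sketched with a toy two-target model. All numbers below are hypothetical, and the adversary is reduced to a single best-response rule; the sketch shows only why hardening the top-ranked target tends to shift an attack rather than eliminate it.

```python
# Hypothetical two-target sketch of iterating a risk analysis against an
# adversary best response. Vulnerabilities and consequences are invented
# for illustration.

targets = {"A": {"v": 0.9, "c": 100.0},   # vulnerability, consequence
           "B": {"v": 0.6, "c": 80.0}}

def attacker_choice(ts):
    """Adversary best response: attack the target with max expected damage."""
    return max(ts, key=lambda name: ts[name]["v"] * ts[name]["c"])

first = attacker_choice(targets)   # "A": 0.9 * 100 = 90 beats 0.6 * 80 = 48
targets[first]["v"] = 0.2          # defender hardens target A
second = attacker_choice(targets)  # attack shifts to "B": 48 beats 0.2 * 100 = 20
print(first, "->", second)
```

A static analysis that scored only the first-round choice would credit the hardening of target A with eliminating 90 units of expected damage, when the residual expected damage after the adversary's response is still 48.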
The military has more than 60 years of experience in using such methods to model the decision processes of intelligent adversaries so as to identify reasonable worst-case scenarios that guide defensive preparations. However, determining the actions and reactions of terrorists is harder than the task of military planners facing nation-state actors. Most nation-state adversaries have tactics and techniques that are prescribed in military doctrine, and many have established procedures for attacks and defense that are written down and exercised over long periods of time. It is often feasible to glean the doctrine, tactics, and techniques from open sources and intelligence reporting. By contrast, the doctrines of terrorist groups are more variable and harder to discern. Specific attack scenarios might be selected by a small group of individuals who desire to remain hidden.
The risk analysis discipline is working through differences of opinion about how to model intelligent adversaries, in particular the question of when probabilistic methods are appropriate. Some argue that probabilistic methods can be extended to encompass deliberate decisions by intelligent adversaries (e.g., Garrick et al., 2004). Others make a strong case that probabilistic methods are inappropriate for modeling the decision process of an intelligent adversary choosing from among alternate attack modes (e.g., Golany et al., 2009; Parnell et al., 2010). A recent report from the Department of Defense (DoD) advisory group JASON (2009, p. 7) goes even further, concluding that “it is simply not possible to validate (evaluate) predictive models of rare events that have not occurred, and unvalidated models cannot be relied upon …. Reliable models for ameliorating rare events will need to address smaller, well-defined, testable pieces of the larger problem.”
DHS acknowledges the need to incorporate techniques for modeling adaptive behavior into its models. Some of these techniques were recommended in the NRC’s BTRA review (2008). The Homeland Infrastructure Threat and Risk Analysis Center (HITRAC) program has a Risk Analysis Division tasked with following developments in risk modeling. That division’s Risk Development and Modeling Branch is charged with the integration of new theories, applied research, models, and tools. It directs efforts of the National Infrastructure Simulation and Analysis Center (NISAC), which has some 70 to 80 models used in various simulations. Los Alamos, Sandia, and Argonne national laboratories provide direct support.
Recommendation: DHS should consider alternatives to modeling the decisions of intelligent adversaries with fixed probabilities. Models that incorporate game theory, attacker-defender scenarios, or Bayesian methods to predict threat probabilities that evolve over time in response to observed conditions and monitored behavior provide more appropriate ways of representing the decisions of intelligent adversaries and should be explored.
Basis for Assessing the Threat Component of Terrorism Risk
The data available to support assessments of threat can be grouped into three categories:
Expert opinions derived from intelligence analyses and formalized elicitation methodologies;
Physical, analytical, and engineering simulations; and
Historical data, including statistics on past terrorist events worldwide, social sciences research into terrorists’ behavior, journalist accounts, and terrorists’ own writings about motivation and intent.
Individual judgments based on an assessment of available data by intelligence or related experts, or formal expert elicitation involving structured questions to assess probabilities across multiple experts, are often the best that can be achieved when rapid response is needed to address security threats. The objective should be to provide those making these judgments with as much objective information and decision support as possible, using our best knowledge from studies of human performance for such tasks. Significant research has been conducted on the performance of expert elicitation (Cooke, 1991; Cooke and Goossens, 2000; Cooke et al., 2007; Coppersmith et al., 2006; European Commission, 2000; Garthwaite et al., 2005; Hora, 1992; Keeney and von Winterfeldt, 1991; MacDonald et al., 2008; Morgan and Henrion, 1990; Morgan and Keith, 1995; O’Hagan et al., 2006; Otway and von Winterfeldt, 1992; Zickfeld et al., 2007). Formal methods attempt to counter biases that commonly arise in both lay and expert assessments of probabilities (Cullen and Small, 2004; NRC, 1996).
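When multiple experts are elicited, their probabilities must be aggregated; one widely used approach is a weighted linear opinion pool, in the spirit of Cooke's classical model. The sketch below is illustrative only: the probabilities and weights are invented, whereas in a real elicitation the weights would come from calibration questions or another formal scoring procedure.

```python
# Sketch of a performance-weighted linear opinion pool. Expert probabilities
# and weights are hypothetical assumptions, not elicited values.

experts = [
    {"p": 0.02, "weight": 0.5},   # expert judged better calibrated
    {"p": 0.10, "weight": 0.3},
    {"p": 0.30, "weight": 0.2},
]

total_w = sum(e["weight"] for e in experts)
pooled = sum(e["weight"] * e["p"] for e in experts) / total_w
print(f"pooled probability = {pooled:.3f}")   # 0.5*0.02 + 0.3*0.10 + 0.2*0.30 = 0.100
```

Giving more weight to better-calibrated experts pulls the pooled estimate toward their judgments; an unweighted average of the same three probabilities would be 0.14.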
Expert elicitation has been used in homeland security applications for threat assessment in the Risk Management Solutions (RMS) Probabilistic Terrorism Model (Willis, 2007; Willis et al., 2005), for vulnerability assessments (see discussion of Critical Infrastructure and Key Resources [CIKR] risk analysis in Chapter 2), for consequence analysis (e.g., Barker and Haimes, 2009), and in other applications.4 However, there has in general been a lack of guidance as to when and how formal expert elicitation techniques should be used in DHS assessments. The Environmental Protection Agency (EPA) published a white paper on the uses of expert elicitation for its regulatory assessments, and DHS should determine the extent to which this or a similar effort would be beneficial in its risk assessment guidance (see http://www.epa.gov/osa/spc/expertelicitation/).
Physical, Analytical, and Engineering Simulations
The most rigorous approaches to risk assessment use structured system models to evaluate vulnerability and consequence. These often take the form of event or fault trees (Fovino et al., 2009; Sherali et al., 2008; Shindo et al., 2000), and may include predictive models for structural integrity and response (Davidson et al., 2005; Remennikov, 2003); outdoor and indoor air pollution and exposure (Fennelly et al., 2004; Fitch et al., 2003; Settles, 2006; Wang and Chen, 2008); drinking water distribution systems and detection of contamination events (Lindley and Buchberger, 2002; NRC, 2007b; Ostfeld et al., 2008); infrastructure dependence and interoperability (Haimes et al., 2005, 2008; Robert et al., 2008); and other specific system models, depending on the asset and its modes of vulnerability and consequence. While models of this type are always undergoing improvement, their formulation and parameterization remain highly uncertain, and this uncertainty must be explicitly addressed. The NISAC work discussed in Chapter 2 falls into this category.
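The fault-tree logic mentioned above reduces, in its simplest form, to combining basic-event probabilities through AND and OR gates. The fragment below is a toy sketch with invented probabilities and an assumption of independence among basic events; real trees are far larger and must handle common-cause dependence.

```python
# Toy fault-tree fragment: the top event occurs if either of two independent
# attack paths succeeds (OR gate), where each path requires two steps to
# succeed in sequence (AND gates). All probabilities are illustrative.

def and_gate(*ps):
    """All basic events must occur: product of probabilities."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(*ps):
    """At least one event occurs, assuming independence: 1 - prod(1 - p)."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

path1 = and_gate(0.3, 0.5)    # gain access, then defeat barrier: 0.15
path2 = and_gate(0.2, 0.4)    # alternate path: 0.08
top = or_gate(path1, path2)   # 1 - 0.85 * 0.92 = 0.218
print(f"P(top event) = {top:.3f}")
```

The uncertainty caveat in the text applies directly here: each leaf probability is itself an estimate, so a full analysis would propagate distributions, not point values, through the gates.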
Historical Data
Statistical analysis of observed data is most applicable in cases where extensive historical data are available. Terrorism risk analysis is hampered by datasets that are too sparse and targeted events that are too situation specific. When limited data are available, Bayesian statistical methods provide a means for updating expert beliefs (which provide prior probabilities) with the evidence of observed data to obtain posterior probabilities that combine the information from both sources (Berry and Stangl, 1996; Greenland, 2001, 2006; Guzzetti et al., 2005; Iman and Hora, 1989; Wolfson et al., 1996).
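The prior-to-posterior updating described above can be made concrete with the standard Beta-Binomial conjugate pair. The prior parameters and observed counts below are illustrative assumptions, not elicited or recorded values; they show only how sparse evidence shifts an expert's prior belief.

```python
# Minimal Beta-Binomial sketch: an expert's prior belief about a success
# probability, Beta(a, b), is updated with sparse observed data. The prior
# parameters and the data are hypothetical.

a, b = 2.0, 18.0           # prior: mean 0.10, weight equivalent to 20 observations
successes, trials = 1, 3   # sparse observed data

a_post = a + successes
b_post = b + (trials - successes)

prior_mean = a / (a + b)                 # 2/20 = 0.100
post_mean = a_post / (a_post + b_post)   # 3/23 ~= 0.130
print(f"prior mean {prior_mean:.3f} -> posterior mean {post_mean:.3f}")
```

Because the prior carries the weight of 20 pseudo-observations, three real observations move the estimate only modestly; a weaker prior (smaller a + b) would let the same data move it much further, which is exactly the judgment an analyst must make explicit.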
Challenges Facing Vulnerability Analysis
Vulnerability analyses for terrorism risk analysis also tend to rely heavily on expert judgments. As such, the general comments above regarding methods for ensuring reliable elicitation apply. The quality of a vulnerability analysis depends in part on the thoroughness with which information is gathered and vetted and on the capabilities of those involved to identify vulnerabilities that might not be caught by a standard process. The committee was told that the process used by the Office of Infrastructure Protection (IP) is heavily oriented toward physical security, and that it will not capture all the relevant vulnerabilities for some assets and sectors. There is also a tendency toward false precision, which is discussed in detail in Chapter 4.
Challenges Facing Consequence Analysis
The fundamental challenge for analyzing the consequences of a terrorist event is how to measure the intangible and secondary effects. DHS’s consequence analyses tend to limit themselves to deaths, physical damage, first-order economic effects, and in some cases, injuries and illness. Other effects, such as interdependencies, business interruptions, and social and psychological ramifications, are not always modeled, yet for terrorism events these could have more impact than those consequences that are currently included. This is discussed in Chapter 4. Even though DHS is not responsible for managing all these aspects of risk—for example, the Department of Health and Human Services has the primary responsibility for managing public health risks—it is appropriate and necessary to consider the full spectrum of consequences when performing risk analyses.
Synopsis of Challenges for Risk Analysis in DHS