4
Evaluation of DHS Risk Analysis

In evaluating the quality of the Department of Homeland Security’s (DHS’s) approach to risk analysis—element (a) of this study’s Statement of Task—we must differentiate between DHS’s overall conceptualization of the challenge and its many actual implementations. Within the former category, the department has set up processes that encourage disciplined discussions of threats, vulnerabilities, and consequences, and it has established the beginning of a risk-aware culture. For example, the interim Integrated Risk Management Framework (IRMF), including the risk lexicon and analytical guidelines (primers) being developed to flesh it out, represents a reasonable first step. The National Infrastructure Protection Plan (NIPP) has appropriately stipulated the following four “core criteria” for risk assessments: that they be documented, reproducible, defensible, and complete (DHS-IP, 2009, p. 34). Similarly, the Office of Risk Management and Analysis (RMA) has stated that DHS’s integrated risk management should be flexible, interoperable, and transparent and based on sound analysis.

Some of the tools within DHS’s risk analysis arsenal are adequate in principle, if applied well; thus, in response to element (b) of the Statement of Task, the committee concludes that DHS has some of the basic capabilities in risk analysis for some portions of its mission. The committee also concludes that Risk = A Function of Threat, Vulnerability, and Consequences (Risk = f(T,V,C)) is a philosophically suitable framework for breaking risk into its component elements. Such a conceptual approach to analyzing risks from natural and man-made hazards is not new, and the special case of Risk = T × V × C has been in various stages of development and refinement for many years. However, the committee concludes that Risk = T × V × C is not an adequate calculation tool for estimating risk in the terrorism domain, for which independence of threats, vulnerabilities, and consequences does not typically hold and feedbacks exist. In principle, it is possible to estimate conditional probability distributions for T, V, and C that capture the interdependencies and can still be multiplied to estimate risk, but the feedbacks—the way choices that affect one factor influence the others—cannot be represented so simply.
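The committee’s point can be illustrated with a minimal numerical sketch (all numbers are hypothetical, not drawn from any DHS assessment): when threat, vulnerability, and consequence are dependent across scenarios, multiplying scenario-specific conditional terms and summing gives a valid expected loss, while multiplying scenario-averaged marginal values does not.

```python
# Hypothetical two-scenario example in which T, V, and C are dependent
# (the likelier attack is the less damaging one), so a marginal T x V x C
# product misestimates risk.

scenarios = {
    # P(attack), P(success | attack), E[loss | success] (hypothetical units)
    "scenario_a": {"t": 0.10, "v": 0.8, "c": 50.0},
    "scenario_b": {"t": 0.30, "v": 0.1, "c": 500.0},
}

# Valid: multiply the conditional terms within each scenario, then sum.
risk_conditional = sum(s["t"] * s["v"] * s["c"] for s in scenarios.values())

# Invalid shortcut: multiply the scenario-averaged (marginal) T, V, and C.
n = len(scenarios)
t_bar = sum(s["t"] for s in scenarios.values()) / n
v_bar = sum(s["v"] for s in scenarios.values()) / n
c_bar = sum(s["c"] for s in scenarios.values()) / n
risk_marginal = t_bar * v_bar * c_bar

print(risk_conditional)  # about 19.0
print(risk_marginal)     # about 24.8, overstating risk in this example
```

Even this sketch omits the feedbacks noted above (for example, defenses that lower V may also shift T toward other targets), which no static product of terms can capture.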

Based on the committee’s review of the six methods and additional presentations made by DHS to the committee, there are numerous shortcomings in the implementation of the Risk = f(T,V,C) framework. In its interactions the committee found that many of DHS’s risk analysis models and processes are weak—for example, because of undue complexity that undercuts their transparency and,
hence, their usefulness to risk managers and their amenability to validation—and are not on a trajectory to improve. The core principles for risk assessment cited above have not been achieved in most cases, especially with regard to the goals that they be documented, reproducible, transparent, and defensible.

This chapter begins with the committee’s evaluation of the quality of risk analysis in the six illustrative models and methods that it investigated in depth. Then it discusses some general approaches for improving those capabilities.

DETAILED EVALUATION OF THE SIX ILLUSTRATIVE RISK MODELS EXAMINED IN THIS STUDY

Natural Hazards Analysis

There is a solid foundation of data, models, and scholarship to underpin the Federal Emergency Management Agency’s (FEMA’s) risk analyses for earthquakes, flooding, and hurricanes, which use the Risk = T × V × C model. This paradigm has been applied to natural hazards, especially flooding, for more than a century. Perhaps the earliest use of the Risk = T × V × C model—often referred to as “probabilistic risk assessment” in other fields—dates to its use in forecasting flood risks on the Thames in the nineteenth century. In present practice, FEMA’s freely available software application HAZUS™ provides a widely used analytical model for combining threat information on natural hazards (earthquakes, flooding, and hurricanes) with consequences to existing inventories of building stocks and infrastructures as collected in the federal census and other databases (Schneider and Schauer, 2006).

For natural hazards, the term “threat” is represented by the annual exceedance probability distribution of extreme events associated with specific physical processes, such as earthquakes, volcanoes, or floods. The assessment of such threats is often conducted by applying statistical modeling techniques to the record of events that have occurred at the site or at sites similar to that of interest.
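This kind of statistical threat estimation can be sketched as follows. The Gumbel distribution is a common choice for annual maxima, but the method-of-moments fit and the synthetic record below are assumptions of this illustration, not FEMA practice.

```python
# Frequentist annual-exceedance-probability (AEP) sketch: fit a Gumbel
# distribution to ~60 years of synthetic annual-maximum discharges by the
# method of moments, then extrapolate the 1-in-100 and 1-in-1,000 events.
import math
import random
import statistics

random.seed(1)
record = [random.gauss(1000.0, 250.0) for _ in range(60)]  # m^3/s, synthetic

mean, sd = statistics.mean(record), statistics.stdev(record)
beta = sd * math.sqrt(6.0) / math.pi        # Gumbel scale parameter
mu = mean - 0.5772 * beta                   # Gumbel location parameter

def discharge_for_return_period(t_years):
    """Discharge whose annual exceedance probability is 1/t_years."""
    p_non_exceed = 1.0 - 1.0 / t_years
    return mu - beta * math.log(-math.log(p_non_exceed))

q100 = discharge_for_return_period(100)
q1000 = discharge_for_return_period(1000)
print(q100, q1000)
```

The 1,000-year figure extrapolates far beyond the 60-year record, so its sampling uncertainty is very large; this is exactly the limitation described below, and paleoflood evidence and Bayesian priors are ways of tightening it.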
Typically a frequentist approach is employed, where the direct statistical experience of occurrences at the site is used to estimate event frequency. In many cases, evidence of extreme natural events that precede the period of systematic monitoring can be used to greatly extend the period of historical observation. Sometimes regional information from adjacent or remote sites can be used to help define the annual exceedance probability (AEP) of events throughout a region. For example, in estimating flood frequencies in a particular river, the historical period of recorded flows may be only 50 to 100 years. Clearly, that record cannot provide the foundation for statistical estimates of 1,000-year events except with very large uncertainty, nor can it represent with certainty probabilities that might be affected by overarching systemic change, such as climate change. To supplement the instrumental record, the frequency of
paleoflows is inferred from regional geomorphologic evidence and is increasingly being used. These paleoflow frequencies are often incorporated in the statistical record using Bayesian methods, in which prior, nonstatistical information can be used to enhance the statistical estimates. Other prior information may arise from physical modeling, expert opinions, or similar nonhistorical data.

The vulnerability term in the Risk = T × V × C model is the conditional probability of protective systems or infrastructure failing to contain a particular hazardous event. For example, the hurricane protection system (HPS) of New Orleans, consisting of levees, flood walls, gates, and pumping stations, was constructed to protect the city from storm surges caused by hurricanes of a chosen severity (the design storm). In the event of Hurricane Katrina, the HPS failed to protect the city. In some areas the storm surge overtopped the levee system (i.e., the surge was higher than that for which the system was designed), and in some areas the system collapsed at lower water levels than those for which the HPS was designed because the foundation soils were weaker than anticipated. The chance that the protective system fails under the loads imposed by the threat is the vulnerability.

For most natural hazard risk assessments, such as that performed for New Orleans (Interagency Performance Task Force, 2009), the vulnerability assessment is based on engineering or other physics-based modeling. For some problems, such as storm surge protection, these vulnerability studies can be complicated and expensive, involving multiple experts, high-performance computer modeling, and detailed statistical analysis. For other problems, such as riverine flooding in the absence of structural protection, the vulnerability assessment requires little more than ascertaining whether flood waters rise to the level of a building.
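The conditional-probability character of the vulnerability term can be made concrete with a fragility curve. The lognormal form is a common engineering convention, but the parameters below are illustrative, not values from any DHS or New Orleans study.

```python
# Hypothetical levee fragility curve: P(failure | water stage), modeled as a
# lognormal CDF with a median capacity of 6 m and logarithmic dispersion 0.3.
import math

def levee_failure_prob(stage_m, median_capacity_m=6.0, dispersion=0.3):
    """Vulnerability term: probability the levee fails at a given stage."""
    if stage_m <= 0.0:
        return 0.0
    z = math.log(stage_m / median_capacity_m) / dispersion
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

for stage in (4.0, 6.0, 8.0):
    print(stage, round(levee_failure_prob(stage), 3))
```

Weaker-than-anticipated foundation soils, as at New Orleans, correspond to a lower median capacity: the entire curve shifts so that failure becomes likely well below the design stage.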
Assessing consequences of extreme natural hazard events has typically focused on loss of lives, injuries, and resulting economic losses. Such assessment provides valuable knowledge about a number of the principal effects of natural disasters and other events, and the assessment of natural disaster risks is quite advanced on these dimensions of consequences. These consequences can be estimated based on an understanding of the physical phenomena, such as ground acceleration in an earthquake or the extent and depth of inundation associated with a flood. Statistical models based on the historical record of consequences have become commonly available for many hazards. Statistical models, however, especially for loss of life, usually suffer from limited historical data. For example, according to information from the U.S. Geological Survey (USGS; http://ks.water.usgs.gov/pubs/fact-sheets/fs.024-00.html), only about 20 riverine floods in the United States since 1900 have involved 10 or more deaths. This complicates the validation of models predicting the number of fatalities in a future flood. As a result, increasing effort is being invested in developing predictive models based on geospatial databases (e.g., census or real property data) and simulation or agent-based methods. These techniques are maturing and appear capable of representing at least the economic and loss-of-life consequences for natural disasters. The National Infrastructure Simulation and
Analysis Center (NISAC) is advancing the state of the art of natural hazard consequence analysis by studying the resiliency and interrelated failure modes of critical infrastructure.

Still, the full range of consequences of natural hazard events includes effects that lie outside current evaluations, such as loss of potable water, housing, and other basic services; diverse effects on communities; impacts on social trust; psychological effects of disasters; distributional inequities; and differential social vulnerability.1 In addition, social and behavioral issues enter into response systems (e.g., community preparedness and human behavior in response to warnings during emergencies). Indeed, the second national assessment of natural hazards (Mileti, 1999) listed as one of its major recommendations the need to “take a broader, more generous view of social forces and their role in hazards and disasters.” As risk assessment of natural hazards moves forward over the longer term, incorporating social dimensions into risk assessment and risk management will have to be a major priority in building a more complete and robust base of knowledge to inform decisions.

While models are constantly being developed and improved, risk analysis associated with natural hazards is a mature activity in which analytical techniques are subject to adequate quality assurance and quality control, and verification and validation procedures are commonly used. Quality control practices are actions taken by modelers or contractors to eliminate mistakes and errors. Quality assurance is the process used by an agency or user to ensure that good quality control procedures have in fact been employed. Verification means that the mathematical representations or software applications used to model risk actually do the calculations and return the results that are intended. Validation means that the risk models produce results that can be replicated in the real world.
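The verification/validation distinction can be illustrated with a toy check. The predicted probability, record length, and exceedance count below are all invented for the example.

```python
# Sketch of verification vs. validation for a risk model (synthetic numbers,
# not a DHS procedure). Verification checks the computation itself;
# validation retrospectively compares predictions with an observed record.
import math

# A toy "model": predicted annual exceedance probability of a damaging flood.
P_PREDICTED = 0.02

# Verification: the implied return period must equal 1/p by definition.
assert abs((1.0 / P_PREDICTED) - 50.0) < 1e-12

# Validation: an entirely synthetic 80-year observation record in which the
# damaging level was exceeded 3 times.
years, exceedances = 80, 3
observed_rate = exceedances / years

# Crude consistency check: is the observed count within roughly two standard
# deviations of the binomial expectation under the model?
expected = years * P_PREDICTED
sd = math.sqrt(years * P_PREDICTED * (1.0 - P_PREDICTED))
consistent = abs(exceedances - expected) <= 2.0 * sd
print(observed_rate, consistent)
```

A real retrospective validation would use longer records, pooled sites, and formal statistical tests, but the logic of comparing prediction with observation is the same.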
To validate models, studies are frequently conducted retrospectively to compare predictions with actual observed outcomes.

A second indicator that risk analyses for natural hazards are fairly reliable is that the limitations of the constituent models are well known and adequately documented. For example, in seismic risk assessment the current standard model is attributed to Cornell (1968). This model identifies discrete seismic source zones; assigns seismicity rates (events per time) and intensities (probability distributions of the size of the events) to each zone; simulates the occurrence of seismic events; and, for each simulated event, mathematically attenuates peak ground accelerations to the site in question according to one of a number of attenuation models. Each component of Cornell’s probabilistic seismic hazard model is based on either statistics or physics. The assumptions of each component are identified clearly, the parameters are based on historical data or well-documented expert elicitations using standard protocols, and the major limitations (e.g., assuming log-linearity of a relationship when data suggest some nonlinearity) have been identified and studied.

It is important to note, however, that there are aspects of natural hazard disasters that are less easily quantifiable. With regard to the “vulnerability” component, a well-established base of empirical research reveals that specific population segments are more likely to experience loss of life, threatened livelihoods, and mental distress in disasters. Major factors that influence this social vulnerability include lack of access to resources, limited access to political influence and representation, social capital (including social networks and connections), certain beliefs and customs, frail health and physical limitations, the quality and age of building stock, and the type and density of infrastructure and lifelines (NRC, 2006). Natural disasters can produce a range of social and economic consequences. For example, major floods disrupt transportation networks and public health services, and they interfere with business activities, creating indirect economic impacts. The eastern Canadian ice storm of 1998 and the resulting power blackout, while having moderate direct economic impact, led to catastrophic indirect economic and social impacts: transportation, electricity, and water utilities were adversely affected or shut down for weeks (Mileti, 1999). An indirect impact seldom recognized in planning is that among small businesses shut down by floods, fires, earthquakes, tornadoes, or major storms, a large fraction never reopen.2

1 See Heinz Center (2000) for examples of recent research.

An important consideration in judging the reliability of risk analysis procedures and models is that the attendant uncertainties in their results be identifiable and quantifiable. For example, in flood risk assessments the analytical process may be divided into four steps, along the Risk = T × V × C model (Table 4-1). The discharge (water volume per time) of the river is the threat. The frequency of various river discharges is estimated from historical instrumental records.
This estimates the inherent randomness in natural river flows (i.e., the randomness of nature), which is called aleatory uncertainty. Because the historical record is limited to perhaps several decades, and is usually shorter than a century, there is statistical error in the estimates of aleatory frequencies. There is also uncertainty in the model itself (i.e., uncertainty due to limited data), which is called epistemic uncertainty. Each of the terms in the TVC model has both aleatory and epistemic uncertainties. In making risk assessments, good practice is either to state the final aleatory frequency with confidence bounds representing the epistemic uncertainty (typical of the relative frequency approach) or to integrate aleatory and epistemic uncertainty together into a single probability distribution (typical of the Bayesian approach).

TABLE 4-1 Risk = T × V × C: Components of Natural Hazard Risk Methodologies for Flood Risk

T (threat): Flood frequency. Aleatory frequency prediction: annual exceedance probability of river discharge (flux). Epistemic uncertainty routinely addressed: statistical parameters of the regression equation for stream flow. Deeper uncertainties: nonstationarity introduced by watershed development and climate change.

T (threat): River hydraulics. Aleatory prediction: probability distribution of river stage (height) for a given discharge. Epistemic uncertainty: model parameters of the stage-discharge relationship. Deeper uncertainties: channel realignment during extreme floods.

V (vulnerability): Levee performance (does the levee withstand river water of a given stage?). Aleatory prediction: probability that the levee withstands the load of a given stage. Epistemic uncertainty: geotechnical uncertainties about soil foundations. Deeper uncertainties: future levee maintenance uncertain.

C (consequences): Direct economic losses to structures in the flood plain. Aleatory prediction: probability distribution of property losses for a given extent of flooding. Epistemic uncertainty: statistical imprecision in depth-damage relations from historical data. Deeper uncertainties: enhanced protection induces flood-plain development and increased damages.

An important characteristic of mature risk assessment methods is that the (epistemic) uncertainty is identifiable and quantifiable. This is the case for most natural hazards risk assessments. To say, however, that the uncertainties are identifiable and quantifiable is not to say that they are necessarily small. A range of uncertainty spanning a factor of 3 to 10 or even more is not uncommon in mature risk assessments, not only of natural hazards but of industrial hazards as well (e.g., Bedford and Cooke, 2001). These uncertainty bounds are not a result of the risk assessment itself: the uncertainties reside in our historical and physical understandings of the natural and social processes involved. They are present in deterministic design studies as well as in risk assessments, although in the former they are masked by factors of safety and other traditional means of risk reduction.

2 A report from the Insurance Council of Australia, Non-insurance and Under-insurance Survey 2002, estimates that 70 percent of small businesses that are underinsured or uninsured do not survive a major disaster such as a storm or fire.

Conclusion: DHS’s risk analysis models for natural hazards are near the state of the art. These models—which are applied mostly to earthquake, flood, and hurricane hazards—are based on extensive data, have been validated empirically, and appear well suited to near-term decision needs.
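The four analytical steps of Table 4-1 and the two uncertainty layers can be sketched end to end in a small Monte Carlo model. Every distribution, rating-curve coefficient, and dollar figure below is invented for illustration; none comes from an actual flood study.

```python
# Toy flood-risk chain: discharge (T) -> stage (T) -> levee performance (V)
# -> damages (C), with an outer epistemic loop over uncertain parameters.
import math
import random

random.seed(11)

def annual_loss(log_q_median, crest_m):
    """One aleatory year through the four Table 4-1 steps (toy models)."""
    discharge = random.lognormvariate(log_q_median, 0.4)            # flood frequency
    stage = 1.2 * math.log(discharge) - 4.0 + random.gauss(0, 0.2)  # hydraulics
    if stage < crest_m:
        return 0.0
    if random.random() >= min(1.0, (stage - crest_m) / 2.0):        # levee fragility
        return 0.0
    return 1e6 * min(1.0, 0.3 * (stage - crest_m))                  # depth-damage ($)

estimates = []
for _ in range(200):  # epistemic loop: uncertain flood regime and levee crest
    log_q_median = random.gauss(math.log(800.0), 0.1)
    crest_m = random.gauss(4.0, 0.15)
    years = [annual_loss(log_q_median, crest_m) for _ in range(2000)]
    estimates.append(sum(years) / len(years))   # aleatory expected annual loss

estimates.sort()
central = estimates[100]                        # median epistemic estimate
lo, hi = estimates[10], estimates[189]          # roughly 5th-95th percentiles
integrated = sum(estimates) / len(estimates)    # single integrated figure
print(central, (lo, hi), integrated)
```

Reporting `central` with `(lo, hi)` corresponds to the relative-frequency practice of stating an aleatory frequency with epistemic confidence bounds; reporting `integrated` corresponds to folding both uncertainties into one probability distribution, as in the Bayesian approach described above.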

Recommendation: DHS’s current natural hazard risk analysis models, while adequate for near-term decisions, should evolve to support longer-term risk management and policy decisions. Improvements should be made to take into account the consequences of social disruption caused by natural hazards; address long-term systemic uncertainties, such as those arising from effects of climate change; incorporate diverse perceptions of risk impacts; support decision making at local and regional levels; and address the effects of cascading impacts across infrastructure sectors.

Analyses of Critical Infrastructure and Key Resources (CIKR)

DHS has processes in place for eliciting threat and vulnerability information, for developing consequence assessments, and for integrating threat, vulnerability, and consequence information into risk information briefings to support decision making. Because CIKR analyses are divided into three component analyses (threat, vulnerability, and consequence), the committee reviewed and evaluated each of these component elements.

Threat Analyses

Based on the committee’s discussions and briefings, DHS does strive to get the best and most relevant terrorism experts to assess threats. However, regular, consistent access to terrorism experts is very difficult. Because of competing priorities at other agencies, participation is in reality a function of who is available. This turnover of expertise can help prevent bias, but it puts a premium on ensuring that the training is adequate. Rotation of subject matter experts (SMEs) also puts a premium on documenting, testing, and validating assumptions, as discussed further below. Importantly, a disciplined and structured process for conducting the threat analysis is needed, but the current process as described to the committee was ad hoc and based on experts’ availability.
The Homeland Infrastructure Threat and Risk Analysis Center (HITRAC) has made efforts to inject imagination into risk assessments through processes such as red-team exercises. As a next step, there needs to be a systematic and defensible process by which ideas generated by red teams and through alternative analysis sessions are incorporated into the appropriate models and into the development of new models.

Another concern is how the assumptions used by the SMEs are made visible and can be calibrated over time. The assumptions about intent and capabilities that the SMEs make when assigning probabilities need to be documented. Attempts must be made to test the validity of these assumptions, as well as to track and report on their reliability or correctness over time. Equally important is bringing to light and documenting dissenting views of experts, explaining how
they differ (in terms of gaps in information or in assumptions), and applying those results to inform future expert elicitation training and elicitation processes, as well as to modify and update the attack scenarios.

Many DHS investments, such as those made through the Homeland Security Grant Programs (HSGPs) in FEMA and CIKR protection programs, are meant to reduce vulnerabilities or increase resilience for the long term. DHS recognizes that the use of generic attack scenarios (threats) based on today’s knowledge can leave risk analyses vulnerable to the unanticipated “never-before-seen” attack scenario (the black swan) or to being behind the curve in emerging terrorist tactics and techniques.3 (For that reason, FEMA reduced the weighting of threat from 0.20 to 0.10 for some of the HSGPs so that the resulting risk prioritizations are less dependent on the assumptions about threat.) Attacks that differ from those DHS is currently defending against might have greater consequences or a higher chance of success. However, it is difficult to design a model to account for new tactics, techniques, or weapons. Asking experts to “judge” what they have not yet observed (or perhaps even conceived, until the question is posed) is fraught with more subjective assumptions and issues. Also, introducing too many speculative threats adds to the assumptions and increases the uncertainties in the models; lengthy lists of speculative attack scenarios could be generated, but the inherent uncertainty can be disruptive, rather than helpful, to planning and decision making. There is a large, unsolved question of how to build a model that can capture emerging or new threats, and how to develop the best investment decisions for threats that might not appear for years.
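The effect of that weighting change can be seen in a toy weighted-sum prioritization score. The actual HSGP formula is not reproduced here; the linear form and all numbers below are illustrative assumptions. Halving the threat weight halves the score shift caused by any later revision of a threat estimate.

```python
# Illustrative weighted risk score: w * threat + (1 - w) * composite of
# vulnerability and consequence, each input scaled to [0, 1].
# This is NOT the actual HSGP formula; it only shows the sensitivity effect.

def risk_score(threat, vuln_conseq, w_threat):
    return w_threat * threat + (1.0 - w_threat) * vuln_conseq

# Suppose a jurisdiction's threat estimate is later revised from 0.9 to 0.4.
for w in (0.20, 0.10):
    shift = risk_score(0.9, 0.6, w) - risk_score(0.4, 0.6, w)
    print(w, shift)  # the score moves by w * 0.5, so a lower w damps the swing
```

Lowering the threat weight thus trades responsiveness to intelligence for stability of the resulting prioritizations, which is exactly the trade-off FEMA’s change reflects.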
3 The Black Swan Theory refers to high-impact, hard-to-predict, and rare events beyond the realm of normal expectations. Unlike the philosophical “black swan problem,” the “Black Swan Theory” (capitalized) refers only to events of large magnitude and consequence and their dominant role in history. Black Swan events are considered extreme outliers (Taleb, 2007).

To provide the best possible analyses of terrorism threats, DHS has a goal of incorporating more state and local threat information into its risk assessments and has started numerous outreach programs. I&A, for example, conducts a weekly conference call and an assortment of conferences with state and local partners such as the regional fusion centers, at which threat information is shared or “fused.” I&A plans to use the fusion centers as one of its primary means of disseminating information to the local level and collecting information from the local level. DHS has made increasing the number and resources of the regional fusion centers a priority. There are currently 72 fusion centers around the country, and I&A plans to deploy additional intelligence analysts to these centers. Between 2004 and the present, DHS has provided more than $320 million to state and local governments to support the establishment and maturation of fusion centers. In 2007 testimony to the House Committee on Homeland Security Subcommittee on Intelligence, the HITRAC Director said that as part of this outreach plan, “we are regularly meeting with Homeland Security Advisors and their staffs to integrate State information and their analysis into the creation of
state critical infrastructure threat assessments. By doing this we hope to gain a more comprehensive appreciation for the threats in the states” (Smislova, 2007).

Despite these efforts, information sharing between the national and local levels and among state and local governments still faces many hurdles. The most significant challenges are security policies and clearances, common standards for reporting and data tagging, the numbers and skill levels of analysts at the state and local levels, and the resources needed to mature the information technology (IT) architecture. The committee cannot assess the impact of all these efforts to increase the information flow and dialogue between national and local levels with respect to DHS risk analysis, and specifically with regard to threat assessments and probabilities. The majority of information gathered by the fusion centers concerns criminal activities, not foreign terrorism. In a 2007 report by the Congressional Research Service, a DHS official was quoted as saying that local threat data still were not being incorporated into threat assessments at the federal level in any systematic or meaningful manner (Masse et al., 2007, p. 13). The committee assumes that the fusion centers and processes need to mature before any significant impact can be observed or measured.

It is important that insights gained during expert elicitation processes about threats, attack scenarios, and data gaps be translated into requests that influence intelligence collection. DHS’s I&A Directorate and HITRAC program have processes that generate collection requirements. Because of security constraints, the committee did not receive a full understanding of this process or its adequacy, but such a process should be reviewed by a group of fully cleared outside experts to offer recommendations.
Recommendation: The intelligence data gathering and vetting process used by I&A and HITRAC should be fully documented and reviewed by an external group of cleared experts from the broader intelligence community. Such a step would strengthen DHS’s intelligence gathering and usage, improve the process over time, and contribute to linkages among the relevant intelligence organizations.

Threat analyses can also be improved through more exploration of attack scenarios that are not considered in the generic attack scenarios presented to SMEs. DHS has tackled this problem by creating processes designed for imagining the future and trying to emulate terrorist thinking about new tactics and techniques. An example is HITRAC’s analytic red teaming, which contributes new ideas on terrorist tactics and techniques. DHS has even engaged in brainstorming sessions where terrorism experts, infrastructure specialists, technology experts, and others work to generate possibilities. These efforts to inject imagination into risk assessments are necessary.

Recommendation: DHS should develop a systematic and defensible process by which ideas generated through red teaming and alternative analysis sessions get incorporated into the appropriate models and the development of new models. DHS needs to regularly assess what leaps could be taken by terrorist groups and be poised to move new scenarios into the models when their likelihood increases, whether because of a change inferred about a terrorist group’s intent or capabilities, the discovery or creation of new vulnerabilities as new technologies are introduced, or an increase in the consequences of such an attack. These thresholds need to be established and documented, and a repeatable process must be set up.

The committee’s interactions with I&A staff during a committee meeting and a site visit to the Office of Infrastructure Protection (IP) did not reveal any formal processes or decision criteria for updating scenarios or threat analyses.

The real payoff in linking DHS risk assessment processes and intelligence community collection operations and analysis lies in developing a shared understanding for assessing and discussing risk.4 I&A and HITRAC have created many opportunities for DHS analysts to interact with members from agencies in the intelligence community through conferences, daily dialogue among analysts, analytic review processes, and personnel rotations, and all of these efforts are to be applauded. However, there also need to be specific, consistent, repeatable exchanges and other actions focused specifically on risk modeling. These interactions with experts from the broader intelligence community—including those responsible for collection operations—should be focused on building a common terminology and understanding of the goals, limitations, and data needs specific to DHS risk assessments.

To forge this link in cultures around the threat basis of risk, even risk-focused exchanges might not be enough. There needs to be some common training and perhaps even joint development of the next generation of national security-related risk models.
Vulnerability Analyses

DHS’s work in support of critical infrastructure protection has surely instigated and enabled more widespread examination of vulnerabilities, and this is a positive move for homeland security. IP’s process for conducting vulnerability analyses appears quite thorough within the constraints of how it has defined “vulnerability,” as evidenced, for example, in its establishment of coordinating groups (Government Coordinating Councils and Sector Coordinating Councils) for each CIKR sector, to effect collaboration across levels of government and between government and the private sector. These councils encourage owners and operators (85 percent of whom are in the private sector) to conduct risk assessments and help establish expectations for the scope of those assessments (e.g., what range of threats to consider and how to assess vulnerabilities beyond simply physical security).5 Vulnerability assessment tools have been created for the various CIKR sectors. IP has worked to create some consistency among these, but it appears to be flexible in considering sector-recommended changes. To date, it seems that vulnerability is heavily weighted toward site-based physical security considerations.

IP has also established the Protective Security Advisors (PSA) program, which places vulnerability specialists in the field to help local communities carry out site-based vulnerability assessments. The PSAs collect data for HITRAC use while working with CIKR site owners or operators on a structured assessment of facility protections, during which PSAs also provide advice and recommendations on how to improve site security. The committee was told by IP that the average time spent on such an assessment is 40 hours, so a significant amount of data is collected for each site and a detailed risk assessment is developed jointly by a PSA and the site owner-operator. Sites identified by HITRAC as high risk are visited at least yearly by a PSA. HITRAC is currently working on a project called the Infrastructure Vulnerability Assessment (IVA) that will integrate this site-specific vulnerability information with vulnerability assessments from Terrorism Risk Assessment and Measurement (TRAM), Transportation Security Administration (TSA), MSRAM, and elsewhere to create a more integrated picture of vulnerabilities to guide HITRAC risk assessment and management efforts.

However, vulnerability is much more than physical security; it is a complete systems process consisting at least of exposure, coping capability, and longer-term accommodation or adaptation. Exposure used to be the only dimension considered in a vulnerability analysis; now there is consensus that at least these three dimensions have to be considered.

4 While DHS is a part of the intelligence community, this subsection is focused on building ties and common understanding between DHS and the complete intelligence community.
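The three-dimensional view of vulnerability described above can be made concrete with a small sketch. The following Python fragment is purely illustrative: the class, field names, scales, and scores are hypothetical constructs for this discussion, not part of any DHS tool. The design point is that the dimensions should be reported side by side rather than collapsed prematurely into one number, since weighting them is itself a policy judgment.

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityProfile:
    """Illustrative three-dimensional vulnerability profile.

    All fields are on a hypothetical 0-1 scale, where higher means
    more vulnerable; the names and scales are invented for illustration.
    """
    exposure: float          # susceptibility of people/assets to the hazard
    coping_gap: float        # shortfall in short-term coping capability
    adaptation_gap: float    # shortfall in longer-term accommodation/adaptation

    def summary(self) -> str:
        # Report the dimensions separately instead of collapsing them
        # into a single score.
        return (f"exposure={self.exposure:.2f}, "
                f"coping gap={self.coping_gap:.2f}, "
                f"adaptation gap={self.adaptation_gap:.2f}")

# Two hypothetical facilities: identical exposure, very different coping capability.
site_a = VulnerabilityProfile(exposure=0.8, coping_gap=0.2, adaptation_gap=0.3)
site_b = VulnerabilityProfile(exposure=0.8, coping_gap=0.7, adaptation_gap=0.6)
print(site_a.summary())
print(site_b.summary())
```

An exposure-only analysis would rate these two sites as equally vulnerable; the fuller profile shows they are not.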
The committee did not hear these sorts of issues being raised within DHS, and DHS staff members do not seem to be drawing from recent books on this subject.6

IP is also working toward combining vulnerability information to create what are called “Regional Resiliency Assessment Projects (RRAPs),” in an attempt to measure improvements in security infrastructure by site, by industry sector, and by cluster. RRAPs are analyses of groups of CIKR assets and perhaps their surrounding areas (what is known as the buffer zone, which is eligible for certain FEMA grants). The RRAP process uses a weighted scoring method to estimate a “regional index vulnerability score” for a cluster of CIKR assets, so that their risks can be managed jointly. Examples of such clusters are New York City bridges, facilities surrounding Exit 14 on the New Jersey Turnpike, the Chicago Financial District, the Raleigh-Durham Research Triangle, and the Tennessee Valley Authority. However, the complexity of the RRAP methodology seems incommensurate

5 Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve, GAO-07-706R (July 10, 2007), evaluated the capabilities of a sample of these councils and found mixed results.
6 See, for example, Ayyub et al., 2003; Bier and Azaiez, 2009; Bier et al., 2008; Haimes, 2008, 2009; McGill et al., 2007; and Zhuang and Bier, 2007.
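For readers unfamiliar with weighted scoring, the following is a minimal sketch of the kind of calculation a weighted “regional index vulnerability score” implies. The components, weights, and scores below are invented for illustration; DHS’s actual RRAP method was not available to the committee in this form. Note that such a weighted sum quietly assumes the component scores share a common ratio scale, an assumption examined critically later in this chapter.

```python
# Hypothetical component weights for a regional cluster of CIKR assets.
# Components, weights, and scores are invented for illustration only.
weights = {
    "physical security": 0.40,
    "dependency on shared utilities": 0.35,
    "proximity of assets": 0.25,
}

cluster_scores = {   # per-component vulnerability scores on a 0-100 scale
    "physical security": 40,
    "dependency on shared utilities": 70,
    "proximity of assets": 55,
}

# Weighted sum: only meaningful if every component score is on a ratio scale
# and the weights reflect a defensible value judgment.
regional_index = sum(weights[k] * cluster_scores[k] for k in weights)
print(round(regional_index, 2))
```

The arithmetic is trivial; the methodological burden lies entirely in justifying the weights and the commensurability of the component scales.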

The Biological Threat Risk Assessment (BTRA), which has been used to produce biennial assessments of bioterrorism risks since 2006, was thoroughly reviewed by a 2008 NRC report. The primary recommendation of that report reads as follows (NRC, 2008, p. 5):

The BTRA should not be used as a basis for decision making until the deficiencies noted in this report have been addressed and corrected. DHS should engage an independent, senior technical advisory panel to oversee this task. In its current form, the BTRA should not be used to assess the risk of biological, chemical, or radioactive threats.

The complexity of this model precludes transparency, and the present committee does not know how it could be validated. The lack of transparency potentially obscures large degrees of disagreement about and even ignorance of anticipated events. Even sensitivity analysis is difficult with such a complex model. Finally, the model’s complexity means it can only be run by its developers at Battelle Memorial Institute, not by those responsible for bioterrorism risk management.

While DHS reports that it is responding to most of the recommendations in the 2008 NRC report, the response is incremental, and a much deeper change is necessary. The proposed responses will do little to reduce the great complexity of the BTRA model. That complexity requires many more SME estimates than can be justified by the small pool of relevant experts and their base of existing knowledge. The proposed response does not move away from the modeling of intelligent adversaries with estimated probabilities. Whether meant as a static assessment of risks or a tool for evaluating risk management options, these shortcomings undermine the method’s reliability.
Finally, the proposed response does not simplify the model and its instantiation in software in a way that would enable it to be of greater use to decision makers and risk managers (e.g., by allowing near-real-time exploration of what-if scenarios by stakeholders). Therefore, the committee has serious doubts about the usefulness and reliability of BTRA.

The committee’s concerns about BTRA are echoed by the Department of Health and Human Services (DHHS), which declined to rely on the results of the 2006 or 2008 BTRA assessments, a Chemical Terrorism Risk Assessment (CTRA), or the 2008 integrated Chemical, Biological, Radiological, and Nuclear (CBRN) assessment that builds on BTRA and CTRA.18 DHHS is responsible for the nation’s preparedness to withstand and respond to a bioterror attack, and in order to learn more about how DHS coordinates with another federal agency in managing a homeland security risk, a delegation of committee members made a site visit to DHHS’s Biomedical Advanced Research and Development Authority (BARDA), which is within the office of the Assistant Secretary for Preparedness and Response. According to BARDA’s web site, it “provides an integrated, systematic approach to the development and purchase of the necessary vaccines, drugs, therapies, and diagnostic tools for public health medical emergencies. BARDA manages Project BioShield, which includes the procurement and advanced development of medical countermeasures for chemical, biological, radiological, and nuclear agents, as well as the advanced development and procurement of medical countermeasures for pandemic influenza and other emerging infectious diseases that fall outside the auspices of Project BioShield.” BARDA provides subject matter input to DHS risk analyses and relies on DHS for threat analyses and risk assessments.

One fundamental reason that BARDA declined to rely on these DHS products is that it received only a document, without software or the primary data inputs, and thus could not conduct its own “what-if” analyses that could guide risk mitigation decision making. For example, DHHS would like to group those outcomes that require respiratory care, because then certain steps could provide preparedness against a range of agents. Neither BTRA nor CTRA allows a risk manager to adjust the output. More generally, BARDA staff expressed “frustration” because DHS provides such a limited set of information in comparison to the complex strategic decisions that DHHS must make. Staff gave the following examples:

• The amount of agent included in an aerosol release was not stated in the CTRA report. This lack of transparency about basic assumptions undermines the credibility of the information coming out of the models that bear on consequence management.
• BARDA’s SMEs believe that DHS was asking the wrong questions for an upcoming CTRA, but DHS was not willing to change the questions.
• The consequence modeling in CTRA and BTRA is getting increasingly complicated.

18 Staff from Biomedical Advanced Research and Development Authority (BARDA), Department of Health and Human Services, at the committee members’ site visit to DHHS, May 6, 2009, Washington, D.C.
When DHHS pointed this out, it was told that simplifications are being slated for upgrades several years in the future.

One of the BARDA staff members said that if BARDA had DHS’s database and intermediate results, DHHS would be able to make “much better decisions” than it can seeing just the end results of an analysis. Under those preferred conditions, BARDA could vary the inputs (i.e., conduct a sensitivity analysis) to make better risk management decisions. For example, it could see where investments would make the most difference, that is, which countermeasures provide the best return on investment. More generally, BARDA staff would like to see a process with more interaction at the strategic level, allowing DHS and DHHS staff to jointly identify risks in a more qualitative manner. This should include a better definition of the threat space, with DHS defining scenarios and representing the threats for which DHHS and other stakeholders must prepare. From BARDA’s perspective, the risk modeling is less important than getting key people together and red-teaming a particular threat.

BARDA clearly has a compelling need for reliable assessments of bioterrorism risk, and it is a primary customer for the BTRA. At the committee’s site visit to BARDA, a DHS representative noted that the Food and Drug Administration, the Department of Defense, and the White House Homeland Security Council are also customers of BTRA and that they are satisfied with the product.19 However, the committee does not believe that that is a reason to disregard the valid concerns of DHHS.

Integrated Risk Management Framework

The committee can develop only a preliminary impression of DHS’s adoption of the Integrated Risk Management Framework because, as a developing process rather than a model, it is not yet in its final state. That is normal: instantiations of Enterprise Risk Management (ERM) in the private sector may take several years of work, and the results might be difficult to judge until even later. ERM is still an evolving field of management science. Companies from regulated sectors (finance, insurance, utilities) are the leaders in ERM sophistication, but those in nonregulated sectors (e.g., General Motors, Reynolds Corp., Delta Airlines, Home Depot, Wal-Mart, Bristol-Myers Squibb) are also practicing elements of ERM.20

Other federal agencies have also begun exploring the applicability of ERM to their own internal management challenges.21 It is an appealing construct because of its potential for breaking down the stovepipes that afflict many agencies. However, absent the profit-making motive of private industry (which gives companies a motivation for judiciously taking on some known risks), ERM does not map directly onto the management of a public entity. Government agencies recognize that they cannot simply adopt best practices in risk management from industry.
In addition to the less-obvious “upside” of risk, government agencies might have more heterogeneous missions than a typical private sector corporation, and they might have responsibility to plan for rare events for which few data exist. In addition, societal expectations are vastly different for a government agency compared to a private firm, and many more stakeholder perspectives must be taken into account by government agencies when managing their risks.

Successful implementations of ERM in the private sector employ processes that reveal risks and emerging challenges early and then manage them proactively. These processes are shared across units where possible, both to minimize the resources required and to enable comparison of risks and sharing of mitigation steps. Such ERM programs need not be large, and their resource requirements can be minimal because they leverage existing activities. However, it is often the case that the group implementing ERM must have clear “marching orders” from top management. Many corporations and businesses have identified a senior executive (e.g., chief financial officer, chief executive officer, or chief risk officer) and provided that person with explicit responsibility for overseeing the management of all risks across the enterprise.

At present, DHS’s ERM efforts within the Office of Risk Management and Analysis appear to be on the right track.22 RMA has established a Risk Steering Committee (RSC) for governance, it has inventoried current practices in risk analysis and risk management, it has begun working on coordination and communication, and it is developing longer-term plans. RMA is a modest-size office, and it “owns” very few of the risks within DHS’s purview. In terms of identifying and managing risks that cut across DHS directorates, formation and management of the RSC is a key enabler. In practice so far, most committee meetings seem to involve a Tier 3 RSC that consists of lower-level staff who have been delegated responsibilities for their components. Agendas for two recent meetings of the Tier 3 RSC suggest that those two-hour meetings were focused on updates and information sharing.

19 Statement from Steven Bennett, DHS-RMA, at site visit to DHHS, May 6, 2009, Washington, D.C.
20 For additional background, see, for example, United Kingdom Treasury, 2004; Office of Government Commerce, United Kingdom Cabinet Office, 2002.
21 For example, a Government Accountability Office summit on ERM was held on October 21, 2009.
A NUMBER OF ASPECTS OF DHS RISK ANALYSIS NEED ATTENTION

Based on its examination of these six illustrative risk analysis models and processes, the committee came to the following primary conclusion, which addresses element (a) of the Statement of Task:

Conclusion: DHS has established a conceptual framework for risk analysis (risk is a function of threat (T), vulnerability (V), and consequence (C), or R = f(T,V,C)) that, generally speaking, appears appropriate for decomposing risk and organizing information, and it has built models, data streams, and processes for executing risk analyses for some of its various missions. However, with the exception of risk analysis for natural disaster preparedness, the committee did not find any DHS risk analysis capabilities and methods that are yet adequate for supporting DHS decision making, because their validity and reliability are untested. Moreover, it is not yet clear that DHS is on a trajectory for development of methods and capability that is sufficient to ensure reliable risk analyses other than for natural disasters.

Recommendation: To develop an understanding of the uncertainties in its terrorism-related risk analyses (knowledge that will drive future improvements), DHS should strengthen its scientific practices, such as documentation, validation, and peer review by technical experts external to DHS. This strengthening of its practices will also contribute greatly to the transparency of DHS’s risk modeling and analysis. DHS should also bolster its internal capabilities in risk analysis as part of its upgrading of scientific practices.

The steps implied by this primary conclusion are laid out in the next chapter. The focus on characterizing uncertainties is of obvious importance to decision makers and to improving the reliability of risk models and analysis. The treatment of uncertainty is recognized as a critical component of any risk assessment activity (Cullen and Small, 2004; NRC, 1983, 1994, 1996, 2008; an overview is presented in Appendix A). Uncertainty is always present in our ability to predict what might occur in the future, and it is present as well in our ability to reconstruct and understand what has happened in the past. This uncertainty arises from missing or incomplete observations and data; imperfect understanding of the physical and behavioral processes that determine the response of natural and built environments and the people within them; subjectivity embedded within analyses of threat and vulnerability and in the judgments of what to measure among consequences; and our inability to synthesize data and knowledge into working models able to provide predictions where and when we need them. Proper recognition and characterization of both variability and uncertainty are important in all elements of a risk assessment, including effective interpretation of vulnerability, consequence, intelligence, and event occurrence data as they are collected over time.

22 The committee was informed at its meeting of November 2008 that the U.S. Coast Guard and Immigration and Customs Enforcement are also developing ERM processes that span just those component agencies, but the committee did not examine those processes.
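The kind of uncertainty characterization discussed above can be illustrated with a small Monte Carlo sketch. All distributions and parameters below are invented for illustration and carry no DHS meaning; the two points being demonstrated are that dependence among T, V, and C can be represented by conditional sampling rather than independent draws, and that reporting an interval conveys far more to a decision maker than a single point estimate.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def sample_risk() -> float:
    """One Monte Carlo draw of risk = T * V * C with dependent components.

    All distributions and parameters are hypothetical.
    """
    t = random.betavariate(2, 8)             # annual probability of attempted attack
    # Vulnerability is sampled conditionally on the threat draw, a stand-in
    # for dependence between adversary capability and defensive effectiveness.
    v = min(1.0, random.betavariate(2, 5) + 0.3 * t)
    c = random.lognormvariate(0, 1)          # consequence, arbitrary units
    return t * v * c

draws = sorted(sample_risk() for _ in range(10_000))
mean = sum(draws) / len(draws)
p05, p95 = draws[500], draws[9500]           # approximate 5th/95th percentiles

# An interval (here the 5th-95th percentile range) communicates the spread
# that a single point estimate of risk would hide.
print(f"mean={mean:.3f}, 90% interval=({p05:.4f}, {p95:.3f})")
```

Because the consequence distribution is heavy-tailed, the mean sits well above the median of the draws, which is exactly the sort of feature that is lost when only one number is reported.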
Some DHS risk work reflects an understanding of uncertainties; for example, the uncertainty in the FEMA floodplain maps is well characterized, and the committee was told that TRAM can produce output with an indication of uncertainty (though this is usually suppressed in accordance with the perceived wishes of the decision makers and was not shown to the committee). However, DHS risk analysts rarely mentioned uncertainty to the committee, and DHS appears to be in a very immature state with respect to characterizing uncertainty and considering its implications for ongoing data collection and prioritization of efforts to improve its methods and models.

Closely tied with the topic of uncertainty is that of properly reflecting the precision of risk analyses. The committee saw a pattern of DHS personnel and contractors putting too much effort into quantification and trusting numbers that are highly uncertain. Similarly, the committee observed a tendency to make risk analyses more complex than needed or justified. Examples were given earlier in this chapter with respect to TRAM and the Regional Resiliency Assessment Project in IP. Another example arose during the committee’s site visit to IP, wherein a briefing provided examples of protective measures assigned by SMEs to different security features. For example, a high metal fence with barbed wire received a Protective Measure Index (PMI) of 71, while a 6-foot wooden fence was given a PMI of 13. None of the DHS personnel could say whether the ratio 71/13 had any meaning with respect to vulnerability, yet the presentation continued with several graphics comparing the PMIs of various types of physical security measures as examples of the analyses provided by DHS. Another example comes from a presentation at the committee’s first meeting on the National Maritime Strategic Risk Assessment, which included a Risk Index Number with four significant figures. These numbers are apparently used to compare different types of risk. There was no indication, however, that the National Maritime Strategic Risk Assessment relied on that false precision.

Uncertainty characterization cannot be addressed without clearer understanding (such as that obtained through documentation and peer review by technical experts external to DHS), strengthening of the internal skill base, and adherence to good scientific practices. Those topics are taken up in Chapter 5. The remainder of this chapter addresses other cross-cutting needs that are evident from the risk models and processes discussed above.

Comparing Risks Across DHS Missions

DHS is working toward risk analyses that are more and more comprehensive, in an attempt to enable the comparison of diverse risks faced by the department. For example, HSPD-18 directed DHS to develop an integrated risk assessment that covered terrorism with biological, chemical, radiological, and nuclear (CBRN) weapons.
The next generation of TRAM is being developed to include the ability to represent a range of hazards (human-initiated, technological, and natural) and measure and compare risks using a common scale.23 More generally, RMA created the Integrated Risk Management Framework in order to support disparate types of risk assessment and management within DHS and eventually across the homeland security enterprise.24 The DHS Risk Steering Committee’s vision for integrated risk management is as follows: “…to enable individual elements, groups of elements, or the entire homeland security enterprise to simultaneously and effectively assess, analyze, and manage risk from multiple perspectives across the homeland security mission space” (DHS-RSC, 2009).

The concept calls for risk to be assessed and managed in a consistent manner from multiple perspectives, specifically: (a) managed across missions within a single DHS component (e.g., within the Immigration and Customs Enforcement agency); (b) assessed by hazard type (e.g., bioterrorism or chemical terrorism); (c) managed by homeland security functions (e.g., physical and information-based screening of travelers and cargoes); and (d) managed by security domain (e.g., aviation security strategies) (DHS-RSC, 2009). If an approach to integrated risk management can be successfully developed and implemented, the opportunities for improving the quality and utility of risk analyses carried out by many components of DHS and by many partners should be extensive.

In the committee’s view, this emphasis on integrated risk analyses is unwise given DHS’s current capabilities in risk analysis and the state of the science. Integrated risk analysis collects analyses for all potential risks facing an entity, here DHS, and combines those risks into one complete analysis using a common metric. This is contrasted with comparative risk analysis, which omits the last step. In comparative risk analysis, potential risks to the entity from many different sources are analyzed and the risks then compared (or contrasted), but no attempt is made to put them into a common metric. As previously noted, there are major differences in carrying out risk analyses for (1) natural disasters, which may rest on the availability of considerable amounts of historical data to help determine the threat (e.g., flood data derived from years of historical records produced from a vast network of stream gages, earthquake-related data concerning locations and frequency of occurrence of seismic disturbances), and (2) terrorist attacks, which may have no precedents and are carried out by intelligent adversaries, resulting in a threat that is difficult to predict or even to conceptualize (e.g., biological attacks). Whereas natural disasters can be modeled with relative effectiveness, terrorist disasters cannot. A recent report by a group of experts concludes that “it is simply not possible to validate (evaluate) predictive models of rare events that have not occurred, and unvalidated models cannot be relied upon” (JASON, 2009, p. 7).

The balancing of priorities between natural hazards and terrorism is far more than an analytic comparison of effects such as health outcomes.

23 Chel Stromgren (SAIC) presentation to the committee, November 24-25, 2008, Washington, D.C.
24 Tina Gabbrielli (RMA) presentation to the committee, November 24-25, 2008, Washington, D.C.
Political factors are major in such balancing, and these are affected by public and government (over)reaction to terrorism.

Even though many DHS components are using quantitative, probabilistic risk assessment models based to some extent on Risk = f(T,V,C), the details of the modeling, and the embedded assumptions, vary significantly from application to application. Aggregation of the models into ones that purport to provide an integrated view is of dubious value. For example, NISAC sometimes manually integrates elements from the output of various models. There is large room for error when the outputs of one model serve as inputs for another. It might be wiser to build and improve CIKR interdependent systems simulation models by designing modules and integrating elements over time, rather than taking the current collection of models and “jamming them together.” Decision support systems that provide risk analysis must be designed with particular decisions in mind, and it is sometimes easier to build new integrated models than to try to patch together a collection of risk models developed over time and for various purposes. A decision support system designed from the beginning to be integrated minimizes the chance of conflicting assumptions and even mechanical errors that can accrue when outputs are manually merged.

While corporations that practice ERM do integrate some risk models from across the enterprise and have developed disciplined approaches for managing portfolios of risk, their risks are relatively homogeneous in character. They are not dealing with risks as disparate as the ones within the purview of DHS.

It is not clear that DHS recognizes the fundamental limitations to true integration. A working paper from DHS-RMA, “Terrorism Risk and Natural Hazard Risk: Cross-Domain Risk Comparisons” (March 25, 2009; updated July 14, 2009), concludes that “it is possible to compare terrorism risk with natural hazard risk” (p. 6). The working paper goes on to say that “the same scale, however, may be an issue” (p. 6). The committee agrees; however, it does not agree with the implication in the RMA paper (which is based on an informal polling of self-selected risk analysts) that such a comparison should be done. It is clear from the inputs of the polled risk analysts that this is a research frontier, not an established capability. There does not exist a general method that will allow DHS to compare risks across domains (e.g., weighing investments to counter terrorism risk versus those to prepare for natural disasters or reduce the risk of illegal immigration).

Even if one moves away from quantitative risk assessment, the problem does not disappear. There are methods in qualitative risk analysis for formally eliciting advice for decision making, such as Delphi analysis, scoring methods, and expert judgment, that can be used to compare risks of very different types. There is a well-established literature on comparative risk analysis that can be used to apply the Risk = f(T,V,C) approach to different risk types (Davies, 1996; EPA, 1987; Finkel and Golding, 1995). However, the results are likely to involve substantially different metrics that cannot be compared directly. Nevertheless, the scope and diversity in the metrics can themselves be very informative for decision making.
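A comparative presentation of the sort the committee encourages, in which each risk retains its own native metric and no aggregation step is applied, could be as simple as the following sketch. All hazards, metric names, and figures here are hypothetical.

```python
# Hypothetical comparative-risk summary: each hazard keeps its own metric,
# and there is deliberately no step that merges them onto a common scale.
comparative_summary = [
    {"hazard": "riverine flooding",
     "metric": "expected annual damage (USD millions)", "value": 120.0},
    {"hazard": "chemical release",
     "metric": "population within exposure zone", "value": 45_000},
    {"hazard": "bioterrorism",
     "metric": "qualitative likelihood (SME consensus)",
     "value": "low, with high uncertainty"},
]

def render(rows) -> str:
    """Lay the risks side by side for a decision maker; no aggregation."""
    width = max(len(r["hazard"]) for r in rows)
    return "\n".join(
        f"{r['hazard']:<{width}}  {r['metric']}: {r['value']}" for r in rows
    )

print(render(comparative_summary))
```

The value of such a table lies precisely in what it refuses to do: it presents the diversity of the metrics rather than hiding it behind a single synthetic score.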
Therefore, in response to element (d) of the Statement of Task, the committee makes the following recommendation:

Recommendation: The risks presented by terrorist attack and natural disasters cannot be combined in one meaningful indicator of risk, so an all-hazards risk assessment is not practical. DHS should not attempt an integrated risk assessment across its entire portfolio of activities at this time because of the heterogeneity and complexity of the risks within its mission.

The risks faced by DHS are too disparate to be amenable to quantitative comparative analysis. The uncertainty in the threat is one reason, the difficulty of analyzing the social consequences of terrorism is a second, and the difference in assessment methods is yet another. One key distinguishing characteristic is that in a terrorist event, there is an intelligent adversary intending to do harm or achieve other goals. In comparison, “Mother Nature” does not cause natural hazard events to occur in order to achieve some desired goal. Further, the intelligent adversary can adapt as information becomes available or as goals change; thus the likelihood of a successful terrorism event (T × V) changes over time. Even comparing risks of, say, earthquakes and floods on a single tract of land raises difficult questions. As a general principle, a fully integrated analysis that aggregates widely disparate risks by use of a common metric is not a practical goal and in fact is likely to be inaccurate or misleading given the current state of knowledge of methods used in quantitative risk analysis. The science of risk analysis does not yet support the kind of reductions in diverse metrics that such a purely quantitative analysis would require.

The committee is more optimistic about using an integrated approach if the subject of the analysis is a set of alternative risk management options, for example, in an analysis of investments to improve resilience. The same risk management option might have positive benefits across different threats, for example, for both a biological attack and an influenza pandemic, or for both an attack on a chemical facility and an accidental chemical release. In these cases, the same risk management option might have the ability to reduce risks from a number of sources such as natural hazards and terrorism. The alternative risk management options that could mitigate risks to a set of activities or assets could be analyzed in a single quantitative model, in much the same way that cost-effectiveness analysis can be used to select the least-cost investment in situations in which benefits are generally incommensurate. An example might be an analysis of emergency response requirements to reduce the consequences of several disparate risks.

Leading thinkers in public health and medicine have argued that preparedness and response systems for bioterrorism have the dual benefit of managing the consequences of new and emerging infections and food-borne outbreaks. Essential to management of both biological attacks and naturally occurring outbreaks is a robust public health infrastructure (Henderson, 1999; IOM, 2003).
Among the shared requirements for outbreak response (whether the event is intentional or natural in origin) are a public health workforce schooled in the detection, surveillance, and management of epidemic disease; a solid network of laboratory professionals and diagnostic equipment; physicians and nurses trained in early detection of novel disease who are poised to communicate with health authorities; and a communication system able to alert the public to any danger and describe self-protective actions (Henderson, 1999; IOM, 2003).

Recent evidence supports this dual-use argument. State and local public health practitioners who have received federal grants for emergency preparedness and response over the past decade exhibit an enhanced ability to handle “no-notice” health events, regardless of cause (CDC, 2008; TFAH, 2008). Arguably, the additional epidemiologists and other public health professionals hired through the preparedness grants, the number of which doubled from 2001 to 2006, have improved the overall functioning of health departments, not simply their emergency capabilities (CDC, 2008). Hospitals, too, that have received federal preparedness grants originally targeted to the low-probability, high-consequence bioterrorist threat report an enhanced state of resilience and increased capacity to respond to “common medical disasters” (Toner et al., 2009). Among the gains catalyzed by the federally funded hospital preparedness program are the elaboration of emergency operation plans, implementation of communication systems, adoption of hospital incident command system concepts, and development of memoranda of understanding between facilities for sharing resources and staff during disasters (Toner et al., 2009).

A parallel example can be found in managing the risks of hazardous chemicals. Improved emergency preparedness, emergency response, and disaster recovery can help contain the consequences of a chemical release, thus lessening the appeal of the chemical infrastructure as a target for terrorists. At the same time, such readiness, response, and recovery capabilities can better position a community to mitigate the effects of a chemical accident (NRC, 2006).

An NRC report (2006) that evaluated the current state of social science research into hazards and disasters noted that there has been no systematic scientific assessment of how natural, technological, and willful hazard agents vary in their threats and characteristics, thus “requiring different pre-impact interventions and post-impact responses by households, businesses, and community hazard management organizations” (p. 75). That report continued (NRC, 2006, pp. 75-76):

In the absence of systematic scientific hazard characterization, it is difficult to determine whether—at one extreme—natural, technological, and willful hazards agents impose essentially identical disaster demands on stricken communities—or at the other extreme—each hazard is unique. Thorough examination of the similarities and differences among hazard agents would have significant implications for guiding the societal management of these hazards.

Recommendation: In light of the critical importance of knowledge about societal responses at various levels to risk management decisions, the committee recommends that DHS include within its research portfolio studies of how population protection and incident management compare across a spectrum of hazards.
It is possible to compare two disparate risks using different metrics, and that might be the direction in which DHS can head, but doing so requires great care in presentation, and there is a high risk that the results will be misunderstood. The metrics appropriate to one risk might be completely different from those for another, and any attempt to squeeze them onto the same plot is likely to introduce too much distortion. Rather, the committee encourages comparative risk analysis (see Davies, 1996; EPA, 1987; Finkel and Golding, eds., 1995), which is distinct from integrated or enterprise risk analysis (see, for example, Committee of the Sponsoring Organizations of the Treadway Commission, 2004; Doherty, 2000), structured within a decision framework but without trying to force the risks onto the same scale.

One of the key assumptions in integrated or enterprise risk management (particularly for financial services firms) is that there is a single aggregate risk measure, such as economic capital (Bank for International Settlements, 2006).

Economic capital is the estimated amount of money that a firm must have available to cover ongoing operations, deal with worst-case outcomes, and survive. While many of the concepts of integrated or enterprise risk management (ERM) can also be applied to DHS (particularly governance, process, and culture), there is currently no single measure of risk analogous to economic capital that is appropriate for DHS use. Thus, DHS must use comparative risk management, specifically multiple metrics, to understand and evaluate risks. It is worth noting that most nonfinancial services firms implementing ERM adopt the philosophical concepts but maintain several metrics for comparing risks across operations. For example, nonfinancial services firms evaluate risks using comparative metrics such as time to recover operations, service-level impact over time, potential economic loss of product or service, the number of additional temporary staff (reallocated resources) needed to restore operations to normal levels, and other factors. These metrics are much more in line with DHS's need to focus on response and recovery. Comparative analysis works because the conceptual breakdown of risk into threat, vulnerability, and consequence can be applied to any risk.

Rather than seeking an optimal balance of investments, and certainly rather than trying to do so through one complex quantitative model, DHS should instead use analytical methods to identify options that are adequate to various stakeholders and then choose among them based on the best judgment of leadership. It seems feasible for DHS to consider a broad collection of hazards, sketch out mitigation options, examine co-benefits, and develop a set of actions to reduce vulnerability and increase resilience. A good collection of risk experts could help execute such a plan.
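The comparative, multiple-metric screening described above, identifying options that are adequate on every metric rather than collapsing the metrics into one aggregate score, can be sketched as follows. All metric names, thresholds, and option scores here are invented for illustration; they echo the kinds of metrics listed above (time to recover, service-level impact, economic loss) but are not DHS figures.

```python
# Hypothetical sketch of comparative risk screening across incommensurate
# metrics. Each candidate option is scored separately on each metric, and
# the analysis reports which options are "adequate" on all of them; the
# final choice among adequate options is left to leadership judgment.
# All numbers and thresholds are invented for illustration.

# metric -> maximum acceptable value (lower is better for all three here)
THRESHOLDS = {
    "time_to_recover_hours": 72,
    "service_level_impact_pct": 30,
    "economic_loss_musd": 100,
}

OPTIONS = {
    "option A": {"time_to_recover_hours": 48, "service_level_impact_pct": 25, "economic_loss_musd": 90},
    "option B": {"time_to_recover_hours": 96, "service_level_impact_pct": 10, "economic_loss_musd": 40},
    "option C": {"time_to_recover_hours": 60, "service_level_impact_pct": 28, "economic_loss_musd": 120},
}

def adequate_options(options, thresholds):
    """Return options meeting every metric's threshold; no aggregation is done."""
    return [name for name, scores in options.items()
            if all(scores[metric] <= limit for metric, limit in thresholds.items())]

print(adequate_options(OPTIONS, THRESHOLDS))  # -> ['option A']
```

Note the design choice: option B's excellent service-level score cannot buy back its slow recovery time, because the metrics are never traded off against each other on a common scale.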