2: Indicators and Triggers

On a spring evening, a paramedic witnesses a tornado touch down in town. Debris is flying. The tornado seems to be a perfect indicator (providing discrete information that is certain, and can be easily acted on) to trigger emergency medical services (EMS) and health care organization disaster plan activation. This may be true in a small community. In a large community, additional information is required before making this decision. How big was the tornado? Where did the tornado touch down? Did it primarily affect an industrial park on a Saturday, or a school on a weekday?

The storm system that generated the massive tornado that struck Joplin, Missouri, in 2011 (which appropriately and immediately triggered contingency and crisis responses in the community) also spawned a tornado that struck a neighborhood in Minneapolis, Minnesota. No EMS agencies or hospitals activated their disaster plans as news footage from the scene and early EMS reports indicated mostly minor injuries, all within the scope of conventional operations. Thus, even seemingly ideal indicators may require some processing to determine if a “trigger” threshold has been reached, and these decisions may be directly tied to the resources available in the community. This is why the agency and stakeholder discussions of indicators and triggers outlined in this paper are critical to help understand how indicators can be used to support operational decision making, and when triggers can be automatically activated (scripted), versus those that may require expert analysis prior to a decision (non-scripted).

This chapter examines important concepts and considerations related to indicators and triggers. The material in this chapter will help provide background to the toolkit discussions. The chapter begins by providing definitions and examples of indicators and triggers. Next, the chapter discusses how to develop useful and appropriate indicators and triggers. Following this, the chapter presents some limitations and issues related to indicators. Finally, the chapter discusses systems-level considerations and provides several examples of existing data systems.

WHAT ARE INDICATORS AND TRIGGERS?

Key points: Indicators are measures or predictors of changes in demand and/or resource availability; triggers are decision points. Indicators and triggers guide transitions along the continuum of care, from conventional to contingency to crisis and in the return to conventional.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Indicators and triggers represent the information and actions taken at specific thresholds that guide incident recognition, response, and recovery. Box 2-1 provides definitions; the concepts behind the definitions are discussed in greater detail below. Indicator information may be available in many forms. Sample indicators and associated triggers and tactics are listed in Table 2-1. More detailed descriptions are available in the discipline-specific discussion toolkits (Chapters 4-9).

When specific indicators cross a threshold that is recognized by the community to require action, this represents a trigger point, with actions determined by community plans. These include plans for activation of a general disaster plan, which often occurs at the threshold between conventional and contingency care, and activation of crisis standards of care (CSC) plans, which would occur at the threshold between contingency and crisis care.

DEVELOPING USEFUL INDICATORS AND TRIGGERS

Key points: It can be challenging to identify useful indicators and triggers from among the large and varied sources of available data. Specific numeric “bright line” thresholds for indicators and triggers are concrete and attractive because they are easily recognized, but for many situations the community/agency actions are not as clear-cut or may require significant data analysis before action. Rather than creating a laundry list of possible indicators and triggers, it may be helpful to consider four steps: (1) identify key response strategies and actions, (2) identify and examine potential indicators, (3) determine trigger points, and (4) determine tactics.

The amount of information available in health care today is enormous and expanding. It is attractive to look at many metrics and consider their use as indicators. However, multiple factors may make data monitoring less useful than it originally appears, and it can be challenging to detect or characterize an evolving event amid usual variability in large and complex sets of data (see the “Indicators Limitations and Issues” section below). Specific numeric “bright line” thresholds for indicators and triggers are concrete and attractive because they are easily recognized, and for certain situations they are relatively easy to develop (e.g., a single case of anthrax). However, for many situations the community/agency actions are not as clear-cut or may require significant data analysis to determine the point at which a reasonable threshold may be established (e.g., multiple cases of diarrheal illness in a community).

The accompanying toolkits provide discipline-specific tables and materials to discuss potential indicators and triggers that guide CSC implementation. This section presents key concepts that will help inform the development of these discipline-, agency-, and organization-specific indicators and triggers. Rather than creating a laundry list of possible indicators and triggers, it may be helpful to consider the following four steps. These steps should be considered at the threshold from conventional to contingency care, from contingency to crisis care, and in the return to conventional care. They should also be considered for both slow-onset and no-notice incidents. Subsequent discussion below expands on these steps.

1. Identify key response strategies and actions that the facility or agency would use to respond to an incident. (Examples include disaster declaration, establishment of an emergency operations center [EOC] and multiagency coordination, establishment of alternate care sites, and surge capacity expansion.)

BOX 2-1
Definitions

Indicator: A measurement, event, or other data that is a predictor of change in demand for health care service delivery or availability of resources. This may warrant further monitoring, analysis, information sharing, and/or select implementation of emergency response system actions.

Actionable indicator: An indicator that can be impacted through actions taken within an organization or a component of the emergency response system (e.g., a hospital detecting high patient census).

Predictive indicator: An indicator that cannot be impacted through actions taken within an organization or component of the emergency response system (e.g., a hospital receiving notification that a pandemic virus has been detected).

Certain data: Data that require minimal verification and analysis to initiate a trigger.

Uncertain data: Data that require interpretation to determine appropriate triggers and tactics.

Threshold: “A level, point, or value above which something is true or will take place and below which it is not or will not” (Merriam-Webster Dictionary, 2013). A trigger point may be designed to occur at a threshold recognized by the community or agency to require a specific response. Trigger points and thresholds may be the same in many circumstances, but each threshold does not necessarily have an associated trigger.

Trigger: A decision point based on changes in the availability of resources that requires adaptations to health care services delivery along the care continuum (contingency, crisis, and return toward conventional).

Crisis care trigger: The point at which the scarcity of resources requires a transition from contingency care to crisis care, implemented within and across the emergency response system. This marks the transition point at which resource allocation strategies focus on the community rather than the individual.

Scripted trigger: A predefined decision point that can be initiated immediately upon recognizing an associated indicator. Scripted triggers lead to scripted tactics.

Non-scripted trigger: A decision point that requires analysis and leads to implementation of non-scripted tactics.

Scripted tactic: A tactic that is predetermined (i.e., can be listed on a checklist) and is quickly implemented by frontline personnel with minimal analysis.

Non-scripted tactic: A tactic that varies according to the situation; it is based on analysis, multiple or uncertain indicators, recommendations, and, in certain circumstances, previous experience.
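The distinctions in Box 2-1 lend themselves to direct encoding in a planning or dashboard tool. The sketch below is purely illustrative and is not part of the report's toolkit; all class and field names are assumptions introduced here.

```python
from dataclasses import dataclass
from enum import Enum

class IndicatorType(Enum):
    ACTIONABLE = "actionable"   # can be impacted by the organization's own actions
    PREDICTIVE = "predictive"   # can only be monitored, not impacted

class DataCertainty(Enum):
    CERTAIN = "certain"         # minimal verification needed before triggering
    UNCERTAIN = "uncertain"     # requires interpretation before triggering

@dataclass
class Indicator:
    name: str
    kind: IndicatorType
    certainty: DataCertainty

    def supports_scripted_trigger(self) -> bool:
        # Per the definitions above, a scripted trigger is initiated
        # immediately on recognizing an indicator; that is generally safe
        # only when the indicator is actionable and its data are certain.
        return (self.kind is IndicatorType.ACTIONABLE
                and self.certainty is DataCertainty.CERTAIN)

# Example: high patient census is actionable/certain for the hospital itself.
census = Indicator("high patient census", IndicatorType.ACTIONABLE,
                   DataCertainty.CERTAIN)
# Example: notification of a detected pandemic virus is predictive/uncertain.
pandemic = Indicator("pandemic virus detected", IndicatorType.PREDICTIVE,
                     DataCertainty.UNCERTAIN)
```

A tool built on such a model could route indicators that fail the scripted-trigger test to an expert review queue rather than to automatic actions.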

TABLE 2-1 Sample Indicators, Triggers, and Tactics by Discipline

Emergency management
  Indicator: National Weather Service (NWS) watches/warnings
  Trigger: NWS forecasts Category 4 hurricane landfall in 96 hours
  Tactic: Issue evacuation/shelter orders, determine likely impact, support hospital evacuations with transportation resources, risk communication to public about event impact

Public health
  Indicator: Epidemiology information
  Trigger: Predicted cases exceed epidemic threshold
  Tactic: Risk communication, consideration of need for medical countermeasures/alternate care site planning, establish situational awareness and coordination with EMS/hospitals/long-term care facilities

Emergency medical services (EMS)
  Indicator: 911 call
  Trigger: X casualties
  Tactic: Automatic assignment of X ambulances, supervisor, assignment of incident-specific radio talk group

Inpatient
  Indicator: Emergency department (ED) wait times
  Trigger: ED wait times exceed X hours
  Tactic: Increase staffing, diversion of patients to clinics/urgent care, activate inpatient plans to rapidly accommodate pending admissions

Outpatient
  Indicator: Demand forecasting/epidemiology information
  Trigger: Unable to accommodate number of requests for appointments/service
  Tactic: Expand hours and clinic staffing, prioritize home care service provision, increase phone support

Behavioral health
  Indicator: Crisis hotline call volume
  Trigger: Unable to accommodate call volume
  Tactic: Activate additional mental health hotline resources, “immunization” via risk communication, implement psychological first aid (PFA) techniques and risk assessment screening in affected areas

2. Identify and examine potential indicators that inform the decision to initiate these actions. (Indicators may be comprised of a wide range of data sources, including, for example, bed availability, a 911 call, or witnessing a tornado.)

3. Determine trigger points for taking these actions. Scripted triggers may be derived from certain indicators. If scripted triggers are inappropriate because the indicators require additional assessment and analysis, it will be important to determine the process for arriving at non-scripted triggers (i.e., who is notified/briefed, who provides the assessment and analysis, and who makes the decision to implement the tactic).

4. Determine tactics that could be implemented at these trigger points. Scripted triggers may appropriately lead to scripted tactics and a rapid, predefined response.

Predicting every disaster scenario (and related key response strategies, actions, and tactics) is impossible, but following these steps can help focus on key sources of information that act as indicators, and determine whether or not the information supports decisions taken to implement (trigger) specific tactics. These four steps form the basis of the approach taken in this report and will be expanded on in the toolkit with information and examples for each major component of the emergency response system.
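Steps 2 through 4 amount to matching observed indicator values against predefined trigger thresholds and, when a threshold is reached, returning the associated scripted tactic. The sketch below illustrates this; the thresholds and tactics are hypothetical placeholders, not values recommended by the report.

```python
from typing import Optional

# Hypothetical scripted-trigger table: indicator name -> (threshold, tactic).
TRIGGER_TABLE = {
    "911-reported casualties": (10, "activate EMS mass casualty incident plan"),
    "ED wait time (hours)": (4, "activate inpatient rapid-admission plan"),
}

def evaluate_indicator(name: str, value: float) -> Optional[str]:
    """Return the scripted tactic if the trigger threshold is reached.

    Indicators with no predefined entry are not acted on automatically;
    they would instead be escalated for expert analysis (a non-scripted
    trigger decision).
    """
    entry = TRIGGER_TABLE.get(name)
    if entry is None:
        return None  # no scripted trigger defined; escalate for analysis
    threshold, tactic = entry
    return tactic if value >= threshold else None
```

In practice each agency would maintain its own table, since thresholds depend on local resources, and would pair it with a defined escalation path for indicators that fall outside it.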

Identify Key Response Strategies and Actions

Key point: In planning, organizations and other entities should first determine the response strategies and actions that will be taken in response to an incident.

Rather than jumping straight into enumeration of indicators and triggers, it is valuable to first identify key response strategies and actions, and then consider what indicators and triggers would be most helpful in deciding to implement these response strategies and actions. Key response strategies and actions are determined by community plans:

• Agency/facility triggers into contingency care generally involve activation of facility or agency disaster plans, which produces additional surge capacity that cannot be achieved in conventional response (Barbisch and Koenig, 2006; Hick et al., 2008; Kaji et al., 2006). They are usually agency-/facility-specific due to variability in facility size and resources.

• System-based triggers for coalition, region, or health care system situational awareness, information sharing, and resource management should be established, for example, when more than one coalition facility declares a disaster, when victims are taken to more than three hospitals, or when staff, space, or supply issues are anticipated. There may be significant concordance between regions and coalitions on these triggers, though geographic differences need to be factored in.

• Crisis care triggers tend to be based on exhaustion of specific operational resources that requires that a community, rather than an individual, view be taken in regard to resource allocation strategies. Though the threshold may be crossed at an individual facility, it is critical that a system-based response be initiated whenever this occurs in order to diffuse the resource demands and ensure that as consistent a level of care as possible is provided. Most of these triggers will be consistent between facilities and regions and will revolve around lack of appropriate staff, space, or specific supplies. It is important to appreciate that an institutional/agency goal is to avoid reaching a crisis care trigger whenever possible by proactive incident management (i.e., National Incident Management System [NIMS], Hospital Incident Command System [HICS]) and logistics efforts in the facility and region (EMSA, 2007; FEMA, 2013a).

A community may have many more triggers than those noted here that are incorporated in existing emergency response plans (e.g., criteria for second alarm fire, indications for medical director notification, VIP patient protocols). To avoid confusion, trigger discussions should be clarified within the specific operational context (e.g., “crisis care trigger”). Different communities and facilities will clearly have different thresholds based on their resources, and thus similarity of triggers across communities and facilities cannot be assumed; during an incident it is far more helpful to inquire or share details about the specific needs of the facility rather than simply note that a trigger event has occurred (e.g., a circuit breaker trip does not tell the building supervisor what the problem is, just that there may be a problem). Contextual information is important to help frame the specific issue of concern.
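The system-based examples above (more than one coalition facility declaring a disaster, victims taken to more than three hospitals, anticipated staff/space/supply issues) are essentially counting rules, which makes them easy to express as a scripted check. The sketch below is illustrative only; real thresholds would be set by each coalition and adjusted for geography, and the field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CoalitionStatus:
    """Snapshot of a coalition's situation. Field names are illustrative."""
    facilities_declaring_disaster: set = field(default_factory=set)
    hospitals_receiving_victims: set = field(default_factory=set)
    anticipated_shortfalls: set = field(default_factory=set)  # e.g., "staff"

    def regional_coordination_trigger(self) -> bool:
        # Mirrors the example thresholds in the text: more than one facility
        # declaring a disaster, victims taken to more than three hospitals,
        # or anticipated staff/space/supply issues.
        return (len(self.facilities_declaring_disaster) > 1
                or len(self.hospitals_receiving_victims) > 3
                or bool(self.anticipated_shortfalls))
```

When the trigger returns true, the tactic would be to stand up regional information sharing and resource management, not to prescribe facility-level actions.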

Identify and Examine Potential Indicators

Key points: After an agency or a facility determines what actions or strategies are key to its responsibilities during an incident, it should examine and optimize indicator data sources that inform initiation of these actions. Indicator data may be categorized using two primary distinctions: predictive versus actionable and certain versus uncertain. Predictive indicators cannot be directly impacted by actions taken by the agency/facility; actionable indicators are under the control of the agency/facility. An indicator that is actionable for one agency may be predictive for another. Certain data require less analysis before action; uncertain data require interpretation before action. Understanding these characteristics of indicators helps inform decisions about how best to use them.

Indicators and triggers can lead to decisions to implement response tactics along two primary pathways. These two pathways are illustrated in Figure 2-1. One pathway begins with an actionable indicator based on certain data, which could appropriately lead to a scripted[1] trigger and associated scripted (specific, predetermined) tactics. Examples of this first pathway would be a hospital trauma team activation or a first alarm response to report of a fire in a building. A second pathway begins with a predictive indicator based on uncertain data, which would require additional analysis and assessment to reach a non-scripted trigger decision and employment of non-scripted (variable) tactics. An example of this second pathway would be the pathway leading to the declaration of an influenza pandemic. Regardless of the certainty of the data, each pathway passes through a “filter” process in which information is analyzed, assessed, and validated. This process occurs even in the context of certain data, although the filtering requirements are far less than for uncertain data. The remainder of this section uses the figure as a basis for additional discussion of these concepts.

Indicator data may be categorized using two primary distinctions: predictive versus actionable and certain versus uncertain. Predictive indicators can be monitored, but cannot be directly impacted through actions taken within an organization or component of the emergency response system. Examples include monitoring of weather, epidemiologic data, or other such information. Data monitoring at more than one site generally yields information that is predictive, and data monitoring in aggregate may be of use from a system coordination viewpoint (e.g., epidemiology data that drive treatment decision making, system capacity in a large health care system) rather than at the facility level, where data monitoring is less likely to yield information that is not already evident to the providers.

In contrast, actionable indicators are under the control of an agency or a facility (and usually only actionable at that level; the more these data are aggregated, generally the less specific and actionable they become). Examples of these types of data are staffed hospital bed capacity, emergency department (ED) wait times, and other operational data that may be affected directly by actions such as increasing staffed beds or activating call-back of personnel. An indicator that is actionable for one agency may be predictive for another. For example, prolonged ED wait times at a local hospital are actionable for the hospital itself, but they are predictive for the local public health agency (as the agency cannot directly influence the indicator).

[1] In business and engineering, these are often referred to as programmed/non-programmed triggers. The committee believed that because these terms did not have wide usage in the public and medical preparedness communities, they should be tied to the scripted and non-scripted tactics for consistency and ease of understanding. See Box 1-5 in Chapter 1 for additional discussion about decision making in crises.
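The perspective dependence described above (the same indicator being actionable for one agency and predictive for another) can be captured with a one-line rule. This is an illustrative sketch; the entity names are hypothetical.

```python
def classify_indicator(controlling_entity: str, observer: str) -> str:
    """Classify an indicator from a given observer's perspective.

    An indicator is "actionable" only for the entity that can directly
    influence it; for any other observer it is "predictive".
    """
    return "actionable" if observer == controlling_entity else "predictive"

# Prolonged ED wait times are under the hospital's own control:
print(classify_indicator("Hospital A", "Hospital A"))     # actionable
print(classify_indicator("Hospital A", "Public Health"))  # predictive
```

A shared situational-awareness system could use such a classification to decide which participants see an indicator as a to-do item versus a monitoring feed.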

FIGURE 2-1 Relationships among indicators, triggers, and tactics. [Figure: an actionable indicator (usually certain data) leads to a scripted trigger and scripted tactics; a predictive indicator (usually uncertain data) passes through analysis, assessment, and validation* to a non-scripted trigger and non-scripted tactics; both pathways lead to outcome(s), which are monitored.]

*Interpret indicators, other available data, impact, and resources—this may occur over minutes (e.g., developing an initial response to a fire) or days (e.g., developing a response to the detection of a novel virus).

NOTE: In this figure, an indicator is comprised of either certain data, sufficient to activate a trigger, or uncertain data, which require additional analysis prior to action. It is important to note several characteristics that may be helpful in shaping planning:

• All actions require at least minimal validation of data or processing of data—the triangle at the center of the figure shows the relative amount of processing expertise and time required (i.e., the thicker base of the triangle represents more processing required).

• Indicators that are actionable typically involve certain data that can lead to scripted triggers that staff can initiate without further analysis (e.g., if a mass casualty incident involves >20 victims, the mass casualty incident [MCI] plan is activated).

• Indicators that are predictive (e.g., epidemiology data) typically involve uncertain data that require interpretation prior to “trigger” action.

• The smaller the community or the fewer resources available, the more certain and scripted the triggers can become.

• The larger the community (or state/national level) and the more resources available, the less certain the data become as they do not reflect significant variability in resource availability at the local level—thus, the more expert interpretation is often required prior to action (e.g., state-level data may reveal available beds, but select facilities and/or jurisdictions may be far beyond conventional capacity).

• The larger or more direct the impact, the more certain the data (e.g., when the tornado hits your hospital, there is no question you should trigger your disaster plan and implement contingency or crisis care tactics as required).

• Scripted triggers are quickly implemented by frontline personnel with minimal analysis—the disadvantage is that the scripted tactics may commit too few or too many resources to the incident (e.g., first alarm response to report of a fire in a building).

• Non-scripted triggers are based on expert analysis rather than a specific threshold and allow implementation of tactics that are tailored to the situation (non-scripted tactics). Trigger decisions may be based on expertise, experience, indicator(s) interpretation, etc., and may be made quickly or take significant time based on the information available.

• Ongoing monitoring and additional analysis of indicators will help assess the current situation and the impact of the tactics.

The data on which indicators are based may be certain (requiring less analysis) or uncertain (requiring interpretation prior to action). Most predictive indicators tend to be based on uncertain data, though in some cases enough certain data are provided to make immediate decisions (e.g., tornado directly hits a hospital). Actionable indicators usually are based on certain data. It is important to note that decision making in crises often requires acting on uncertain information. The fact that information is uncertain means that additional assessment and analysis may be required, but this should not impede the ability to plan and act. The utility of the indicator should be considered separately from the utility of the available data; for example, while bed availability may be a useful indicator, the available data in a community may not be useful if they are of poor quality. Indicator and data limitations are discussed further below.
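The two pathways can be summarized as a routing rule: certain data with a predefined scripted response go straight to scripted tactics, while everything else is held for analysis, assessment, and validation. The sketch below is a planning illustration only; the playbook contents and function name are assumptions.

```python
def route_indicator(name: str, data_certain: bool, scripted_playbook: dict):
    """Route an indicator along one of the two pathways described above.

    Certain data with a predefined playbook entry pass through a light
    "filter" to a scripted trigger and scripted tactics; everything else
    is routed through analysis, assessment, and validation toward a
    non-scripted trigger decision.
    """
    if data_certain and name in scripted_playbook:
        return ("scripted", scripted_playbook[name])
    return ("non-scripted", "analyze, assess, and validate before acting")

PLAYBOOK = {"major trauma arrival": "activate trauma team"}  # hypothetical
```

Note that even the scripted path involves some minimal validation in practice; the code simply makes the relative amount of processing explicit in where the decision is made.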
When data are required to make decisions, the following issues may help frame higher-level or interagency discussion. The discipline-specific discussions later in the report provide more specific key questions.

• What are the key agency decisions and actions relative to disaster declarations and entering crisis standards of care?
• What is the rationale for the use of data to inform these decisions and actions?
• When are data needed (prior to the incident, during, or both)?
• Are the data currently available? (If not, how easily are they gathered and reported? If so, from what source, and how timely are the data?)
• Will the data be accurate? (E.g., do data rely on active data entry, or are they passively collected from electronic systems such as electronic medical records? Are they being reported the same way from all entities?)
• How will the data be collected/used/shared/processed/analyzed (including consideration of issues of proprietary information, concerns about the ability of state agencies to “take” reported assets, etc.)?
• How do the data drive actions? If the data do not affect agency/facility actions, they are likely not worth collecting unless they are of greater benefit to public health in aggregate (and the facility will receive feedback on the information provided).

Determine Triggers and Tactics

Key points: After an agency or a facility has determined potential indicators, the facility or agency should identify trigger points and actions that should be taken when the trigger is reached. This includes considering the extent to which the indicators need to be analyzed prior to action and determining whether scripted (predetermined) triggers and tactics are appropriate or whether the triggers should be non-scripted and customized to the situation. It is important to strike a balance between enabling quick action when time is of the essence, but not “overscripting” when time will allow the tactics to be more closely tailored to the situation. It is also important to define who is notified about indicators, who analyzes the indicator data, and who can act on that information.
This section discusses the analysis, assessment, and validation of indicators, and outlines considerations for determining whether there are scripted triggers and tactics that can be employed, or whether the triggers and tactics should be non-scripted and incident-specific.

Analyze, Assess, and Validate

All data require some validation or interpretation, however minimal, prior to activating a trigger based on the data. This may be as simple as understanding the reliability of a data feed, making a phone call to confirm, or asking additional questions of a 911 caller. Some data require significant validation. For example, an indicator of gastroenteritis in a community that achieves a threshold may require significant epidemiological investigation just to determine whether the presence of disease in the community is a valid indicator of a sentinel event, or simply represents a coincidence or normal variant.

For no-notice disaster incidents, the initial indicator is often a 911 call reporting a mass casualty incident, and all that remains is determining a threshold for the dispatcher to trigger the mass casualty plan for the agency. For slow-onset (e.g., pandemic, flood, hurricane) incidents it may not be as simple, and multiple factors may have to be considered when weighing decisions about clinical care, hospital evacuation, etc.

Defining who analyzes and can act on the uncertain data (and how the indicator comes to their attention) is very important. These personnel should have sufficient expertise to consider resources available, time
of day, etc., in making their decision—for example, a hospital physician with authority to activate the facility disaster plan hears that a tornado has touched down somewhere in the community. In a large community with multiple hospitals, no disaster plan activation may be needed on a Tuesday at 3 p.m., for example, but if media reports show major damage and it is Saturday evening, the trigger for the hospital disaster plan should be pulled.

Scripted and Non-Scripted Triggers

Indicators that provide rationale for informed decision making may lead to the ability to set thresholds for analysis or trigger actions. The following questions are useful to consider for each indicator that is considered relevant to agency/facility actions:

• Is there a relevant trigger threshold for this resource/category?
• Is it based on an incident report, or based on resource use/capacity?
• Is it predictable enough to act as a trigger?
• How often will the trigger threshold be reached? (If the trigger threshold is rarely reached, a certain degree of oversensitivity/overresponse is appropriate.)
• What actions are required when the trigger is reached (activation of disaster plan, opening of EOC, triage of resources)?
• Are these actions congruent with other agencies/facilities in the area? (Triggers will not be identical due to differences in facility/agency resources, but the actions taken should be congruent—see further discussion below.)

It is important to strike a balance: triggers should prompt appropriate action, but they should not be “overscripted” when time is not of the essence. An example of this can be seen in the decision taken by the World Health Organization during the 2009 H1N1 pandemic. It chose not to declare H1N1 a pandemic for some time, even though the outbreak met all of the established criteria, withholding the declaration because of the limited severity of the disease (Garrett, 2009; WHO, 2011).
This can create confusion and inconsistencies; thus, a range of response options should be specified when the actions taken require a level of analysis and the impact and data are less certain. Triggers may be scripted or non-scripted; Table 2-2 presents a comparison of the properties of each type of trigger.

Scripted triggers are very helpful when time is of the essence. They are usually based on information that is certain enough for frontline personnel to take action without significant analysis. For example, checklists and standard operating procedures may specify scripted "if/then" actions and tactics such as

• Fire on a hospital unit = evacuate patients to adjacent smoke-free compartment
• Mass casualty incident (MCI) involving more than 10 victims = activate EMS MCI plan
• Health alert involving novel illness = notify emergency management group

The disadvantage of scripted triggers is that they sometimes will not match the resources to the incident well. Scripted triggers should be designed in a conservative fashion so that they are more likely to overcommit, rather than undercommit, resources relative to the scope of the incident. This is acceptable when the activation is rare and when delay has a high potential to have a negative impact on life safety. The more often the trigger is used, the more refinement is required so the scripted tactics better match resources to the historical incident demands. It is important to note that the trigger action may simply be that a line employee provides scripted emergency notifications to a team or an individual that will then determine further actions (rather than the trigger activating the actual response actions). Box 2-2 provides an example of how a medium-sized health care coalition region might approach determining a dispatch-based scripted trigger threshold for activation of disaster plans.

TABLE 2-2 Properties of Scripted and Non-Scripted Triggers

Property                   Scripted                                     Non-Scripted
Indicator                  Actionable (or select predictive             Predictive (rarely actionable, especially
                           indicators, usually in extreme incidents)    when multiple data streams or unclear impact)
Data                       Certain                                      Uncertain
Analysis                   Minimal                                      Significant
Simplicity                 Yes                                          No
Speed                      Yes                                          No(a)
Tactics                    Scripted                                     Non-scripted
Demand/resource matching   No                                           Yes

(a) Though processing time may be brief, depending on the situation.

Non-scripted triggers are more appropriate when at least one of the following is present:

• There is time to make an analytical decision (e.g., usually not no-notice, or at least some processing of information is required);
• Multiple indicators are involved;
• Demand/resource analysis is required;
• A tiered response is possible that can tailor the resources to achieve the desired outcome(s) (demand/resource matching) and does not introduce unacceptable delay; and/or
• Expertise is required to interpret the potential impact of the indicator.

Scripted and Non-Scripted Tactics

Facility-level crisis care triggers should activate resources and plans rather than specific actions (e.g., not automatically implement triage of resources).
For example, though having no available ventilators may be a crisis care trigger, it does not mean that ventilator triage should immediately commence. The trigger action should be that incident command immediately works with subject matter experts, logistics, and supporting agencies to determine

• Time frame for obtaining additional resources;
• Potential to transfer patients to facilities with ventilators;
• Utility of bag-valve ventilation or other potential strategies; and
• Process for triage of resources, if appropriate.
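The scripted "if/then" pattern described in the text can be sketched as a simple lookup from condition to action. This is an illustrative sketch only: the condition keys, action strings, and function name are assumptions invented for the example and are not drawn from any actual dispatch or hospital system.

```python
# Illustrative sketch of scripted "if/then" triggers as a lookup table.
# Condition keys and action strings are hypothetical, paraphrasing the
# examples in the text; they are not part of a real system.

SCRIPTED_TRIGGERS = {
    "fire_on_hospital_unit": "Evacuate patients to adjacent smoke-free compartment",
    "mci_more_than_10_victims": "Activate EMS mass casualty incident (MCI) plan",
    "health_alert_novel_illness": "Notify emergency management group",
}

def trigger_action(condition: str) -> str:
    """Scripted path: certain data, minimal analysis, immediate action.
    Unrecognized conditions fall through to non-scripted expert review."""
    return SCRIPTED_TRIGGERS.get(condition, "Refer for non-scripted expert analysis")

print(trigger_action("mci_more_than_10_victims"))
# -> Activate EMS mass casualty incident (MCI) plan
print(trigger_action("community_gastroenteritis_cluster"))
# -> Refer for non-scripted expert analysis
```

Note that, consistent with the text, the scripted action may simply be a notification to a team or individual who then determines further actions, rather than the response itself.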

BOX 2-2
EMS Example: Dispatch-Based Scripted Trigger Threshold

This table provides an example of how a medium-sized region might approach determining a dispatch-based scripted trigger threshold for activation of disaster plans. It is not all-inclusive and does not reflect the specifics of all jurisdictions. School buses, wheelchair transport, and other vehicles may need to be included. HAZMAT and other complicating factors may change assumptions. Regulatory and other processes may need to be addressed when activating a mass casualty incident (MCI) plan. These calculations are provided as an example only.

                                                   Agency    Region
Emergency medical services (EMS) units staffed         15       200
EMS units unstaffed                                     2        15
Mass casualty incident (MCI) buses                      0         2 (20 patients per bus)
Private basic life support (BLS) units                  0        12

Assumptions:
• Day and night staffing and delay time to staff unstaffed units may have to be factored in
• Unit hour utilization data show 1/3 of units on average are available at a given time = 5 units agency, approximately 60 units regionally
• Other agency units should be able to clear within 45 minutes = 10
• Each ambulance can transport two patients in a disaster
• Round-trip time = 45 minutes per unit

Agency capacity is 44 patients in the first 90 minutes (but initial capacity is 10); the second wave of transports depends on mutual aid to respond or backfill usual calls. Regional capacity is approximately 120 patients in the first 60 minutes (assuming longer response times for mutual aid units). With activation of the disaster plan, add 40 for MCI buses and 24 for private units = 164 patients per 45 minutes after the first 45 minutes (assuming activation time for MCI buses and private units, and an MCI bus turnaround time of 90 minutes due to longer loading/unloading time). Thus, consider >10 significantly injured victims as the trigger for the agency disaster plan and >125 patients as the trigger for the regional plan (numbers that would exceed the ability to respond with a simple mutual aid response).
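The first-wave portion of the Box 2-2 arithmetic can be reproduced in a few lines. This is a hedged sketch: the constants mirror the box's stated assumptions (two patients per ambulance, roughly one-third of staffed units free at any moment), and the function name is illustrative rather than part of any planning tool.

```python
# Sketch of the Box 2-2 first-wave transport capacity calculation.
# Constants mirror the box's assumptions; a real agency would substitute
# its own staffing and unit-hour utilization data.

PATIENTS_PER_UNIT = 2        # each ambulance carries two patients in a disaster
AVAILABLE_FRACTION = 1 / 3   # unit-hour utilization: ~1/3 of staffed units free

def first_wave_capacity(staffed_units: int) -> int:
    """Patients transportable by units that are immediately available."""
    free_units = round(staffed_units * AVAILABLE_FRACTION)
    return free_units * PATIENTS_PER_UNIT

# Agency: 15 staffed units -> 5 free units -> 10 patients, which is why the
# box suggests >10 significantly injured victims as the agency trigger.
# (For the region, the box rounds 200 staffed units down conservatively to
# about 60 free units, i.e., ~120 patients, rather than applying 1/3 exactly.)
print(first_wave_capacity(15))  # -> 10
```

Second-wave capacity would layer on units clearing calls, round-trip times, MCI buses, and private BLS units, as the box's later arithmetic does.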
De Boer defines a "Medical Assistance Chain" from medical rescue to medical transport and hospital care, in which EMS capacity is estimated by N × S / C, where N is the number of injured, S is severity (nonambulatory), and C is transport capacity. This construct may be helpful to frame discussion around transport methods and resources, and it has been refined by Bayram and colleagues; both of these theoretical frameworks include potentially valuable considerations for hospitals as well (Bayram and Zuabi, 2012; Bayram et al., 2012; de Boer, 1999).

In a longer-duration incident, conditions of contingency and crisis are likely to fluctuate across multiple variables, specifically time, disciplines, and resources. For example, EMS agencies during nighttime hours may be operating under contingency or even conventional response conditions, but during daytime peak hours they may be consistently applying crisis care tactics. Another example in hospitals or the outpatient setting

BOX 2-5
Examples and Analysis of Indicators and Triggers in Existing CSC Plans

In a 2012 report on the allocation of scarce resources during mass casualty events, it was noted that few state plans contained "operational frameworks for shifting to crisis standards of care" (Timbie et al., 2012, p. ES-9). The committee searched for and compiled 18 available jurisdictional plans that discussed triggers for crisis care or pandemic influenza.1-18 Six of these discussed laboratory or World Health Organization criteria-based triggers for pandemic influenza and were not relevant to crisis care.6-8,11-12,16 A few states included state declarations of emergency as the trigger for increased information sharing and coordination, but not for triage.1,4 One state referenced "unusual events" rather than triggers, which prompt enhanced information exchange within the system.2 These were defined as events that significantly impact or threaten public health, environmental health, or medical services; are projected to require resources from outside the region; are politically sensitive or of high visibility; or otherwise require enhanced information exchange between partners or the state.

One state approached the "trigger" for crisis care from a process standpoint: if a facility did not have a resource, could not get it, and could not transfer the patient, the situation met preexisting criteria for crisis care and resource allocation.18 These preexisting criteria have been described in prior work by the IOM and the American College of Chest Physicians and should be incorporated in the decision-making process, if not in the trigger.19-20 The advantage of this approach is that it offers an all-inclusive process for resource shortfalls. The disadvantage to be considered is that, due to lack of specificity, it may result in less proactive decision making or anticipation of potential trigger events.
This is a common trade-off with indicators and triggers: the less specific they are, the easier the development, but the less sensitive and specific the response; the more specific the indicators and triggers, the harder the development work, but with potentially improved system performance.

Other states and entities identified factors that were considered "triggers" for resource triage, though these were categorical rather than specific, aside from a specific staffing threshold in two plans (which may be more relevant to certain job classes or facilities of a certain size; no validation of these numbers or references was noted, and because using expert-based indicators and triggers is the current state of the science, a systematic approach to evaluation would be useful).1-2,4-5,10,13-15,17-18

• Equipment shortages—including ventilators, beds, blood products, antivirals, antibiotics, operating room capacity, personal protective equipment (PPE) (including supply chain disruption or recall/contamination), and emergency medical services (EMS) units;
• Staff/personnel triggers—subspecialty staff, security, trauma team, EMS;
• Space triggers—unable to accommodate all patients requiring hospitalization despite maximal surge measures, doubling of patient rooms;
• Infrastructure—including loss of facilities, essential services, or isolation of a facility due to flooding or other access problems;
• Numbers of patients in excess of planned health care facility capacity, or an exceptional surge in number and severity over a short period of time;
• Use of alternate care facilities;
• Marked increase in the proportion of patients who are critically ill and unlikely to survive;
• Abnormally high percentage of hospitals on divert for EMS;
• Increase in influenza hospitalizations and deaths reported, or other surveillance or forecasting data suggesting surge in excess of resources;

• Marked increase in staff or school absenteeism (two plans specifying 20-30 percent or >30 percent thresholds);
• Increased emergency medical dispatch call volumes;
• Increased requests for mutual aid or activation of statewide mutual aid agreements;
• Depletion of state assets;
• Unavailability of assets from other states; and
• Depletion of federal assets.

Of note, one county's pandemic influenza plan specified 30 elements of intensive care patient data gathering. Though the specific dataset elements have not been validated and potentially could be optimized, the gathering of clinical data in real time in order to provide aggregate information about severity of disease and treatment effect is a key gap in current national planning for infectious disease incidents.

Available plans tended to list indicators, for the most part without specific thresholds.2-5,7,9,10,13-14,19 This is consistent with the fact that most of the plans were state level, and thus unlikely to identify indicators of sufficient certainty to establish triggers; they aimed primarily to identify the key resources expected to be in shortage and potential indicators from available systems data or functional thresholds (alternate care site use, etc.) marking the transition to crisis care. This is likely to be as specific as state-level plans can be, though national planning should include guidance for shortages of antivirals, vaccine, or PPE, for which basic assumptions and triggers for policy and clinical guidance should be developed. The types of indicators and triggers may be less specific at higher tiers, but should be linked to the actions that would be taken by each tier. A lack of specificity is acceptable at the state level because much of the data is uncertain and requires analysis and a non-scripted response from state agencies.
Triggers that may be appropriate at the state level (opening of alternate care sites) are unhelpful at the local level because they will occur too late to be of assistance in the early management of an escalating incident. Local triggers should be as concrete as possible and provide enough advance warning to take action, rather than only triggering when a crisis situation has already occurred (i.e., it is better to have an early scripted trigger for notification of an emergency management group to assess a situation than a late trigger when the system runs out of ventilators).

SOURCES:
1 Alaskan Health Care Providers and Medical Emergency Preparedness-Pediatrics (MEP-P) Project [draft], 2008.
2 California Department of Public Health and California Emergency Medical Services Authority, 2011.
3 City of Albuquerque, Office of Emergency Management, 2005.
4 Colorado Department of Public Health and Environment, 2009.
5 Florida Department of Health, 2011.
6 Indiana State Department of Health, 2009.
7 Kansas Department of Health and Environment, 2013.
8 Kentucky Department of Public Health, Division of Epidemiology and Health Planning, Cabinet for Health and Family Services, 2007.
9 King County, Seattle Health Care Coalition, and Northwest Healthcare Response Network, 2009.
10 Minnesota Department of Health, Office of Emergency Preparedness, 2012.
11 New Hampshire Department of Health and Human Services, 2007.
12 New York State Department of Health, 2008.
13 Northern Utah Healthcare Coalition, 2010.
14 Ohio Department of Health and Ohio Hospital Association, 2012.
15 State of Michigan, 2012b.
16 Tennessee Department of Health, 2009.
17 Utah Hospitals and Health Systems Association for the Utah Department of Health, 2009.
18 Wisconsin Hospital Association, Inc., 2010.
19 Devereaux et al., 2008.
20 IOM, 2012a.

The Michigan Syndromic Surveillance System (MSSS) is "a real-time surveillance system that tracks chief presenting complaints from emergent care settings, enabling public health officials and providers to monitor trends and investigate unusual increases in symptom presentations" (State of Michigan, 2012a, 2013). Health care facilities have enrolled to participate voluntarily on an ongoing basis since the system was launched in 2003; currently, 89 facilities submit data electronically to the MSSS.3 The system continues to evolve to support public health and information technology needs. In 2013, the MSSS will be able to receive data from health care professionals in settings other than hospital emergency departments, in support of Meaningful Use, which involves using electronic health record technology to ensure complete and accurate information, better access to information, and patient empowerment (CMS, 2013; HealthIT.gov, 2013).

In 2012, the MSSS processed more than 4.3 million emergency department (ED) registrations. The chief complaints from ED registrations are categorized using a free-text complaint coder. Trends in the categorical groups are analyzed using an adaptive recursive least squares algorithm, and alerts are sent to Michigan public health officials when unusual increases in symptom presentations are detected. In addition, the MSSS supports enhanced surveillance that is conducted during high-profile events (e.g., local NCAA basketball tournament games, the World Series, the Super Bowl, and the North American International Auto Show), with findings distributed to stakeholders. Access to the MSSS interface is role based: participating health care facilities can visualize and report on their own data, including the ability to run ad hoc queries; local health departments can view data from within their jurisdictions; and key Michigan Department of Community Health staff have full statewide access.
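The alerting idea behind such systems, flagging a symptom category when today's count rises well above its recent baseline, can be illustrated with a simplified sketch. Note the stand-in: MSSS itself uses an adaptive recursive least squares algorithm, which is not reproduced here; the exponentially weighted moving average (EWMA) baseline below, the function name, and the thresholds are all assumptions chosen for brevity of illustration.

```python
# Simplified illustration of threshold-based syndromic alerting.
# NOTE: This EWMA baseline is a stand-in for illustration only; it is
# not the adaptive recursive least squares method MSSS actually uses.

def ewma_alerts(daily_counts, alpha=0.3, k=3.0):
    """Flag days whose count exceeds k times the running deviation."""
    baseline = daily_counts[0]
    deviation = 0.0
    alerts = []
    for day, count in enumerate(daily_counts[1:], start=1):
        residual = count - baseline
        if deviation > 0 and residual > k * deviation:
            alerts.append(day)
        # update running estimates after the alert check
        baseline = alpha * count + (1 - alpha) * baseline
        deviation = alpha * abs(residual) + (1 - alpha) * deviation
    return alerts

# A stable series with one sharp spike should flag only the spike day.
counts = [20, 22, 19, 21, 20, 60, 21, 20]
print(ewma_alerts(counts))  # -> [5]
```

Production systems add seasonality handling, day-of-week effects, and minimum-count guards; the point here is only the pattern of comparing an indicator stream against an adaptive baseline to decide when a trigger threshold has been reached.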
Since 2008, MSSS data contributions have informed national influenza surveillance via the Distribute Project and national syndromic surveillance efforts via BioSense, soon to be resumed with the redesigned BioSense 2.0 (see CDC, 2012a). The benefits and costs of creating new surveillance systems that are highly dependent on technology or labor for data entry should be carefully considered. For a discussion of the benefits, limitations, and resource requirements of syndromic surveillance, see IOM and NRC (2011).

Indicators and Triggers in U.S. Department of Veterans Affairs Medical Centers (VAMCs) and Military Treatment Facilities (MTFs)

The coordination of VAMCs and MTFs into planning efforts and response to catastrophic disaster events is of vital importance to the two constituencies served by these unique health care organizations. Both can be considered "closed" systems, focused on the delivery of care to the specific patient populations that they are entrusted to serve: namely, veterans and active-duty military personnel and their dependents. But both systems are also recognized to be important components of the local and regional health care communities in which they are located, particularly for disaster response. At the local level, VAMC and MTF leadership are given the authorization to provide care to the communities in which they are situated, invoking principles of humanitarian assistance to ensure that patient care needs are addressed when the entire community is under duress.

In the evolving efforts to better organize health care entities to respond to disaster events, VAMC and MTF facilities have been encouraged to become members of health care coalitions. For example, the Washington, DC, VAMC, the former Walter Reed Army Medical Center, and the former Bethesda National Naval Medical Center (now combined as the Walter Reed National Military Medical Center in Bethesda, Maryland) have been

3 Unpublished work; information from committee member Linda Scott.

a central component of the DC Hospital Coalition. In Northern Virginia, DeWitt Army Hospital at Fort Belvoir was a founding member of the Northern Virginia Hospital Alliance.4

In this regard, the functions of VAMC and MTF facilities during disaster events are best considered component parts of the larger, regional health care system. Therefore, they will be expected to use indicators, triggers, and tactics similar to those used by their public- and private-sector counterparts. In mature health care coalitions that have included these facilities within their membership, the situational awareness tools in place across the community are likely to provide this information to all member hospitals, including those in the Department of Veterans Affairs (VA) and the Department of Defense (DoD). In communities in which the development of health care coalitions is still evolving, the VA and DoD facilities may be in a position to help coordinate and facilitate the sharing of key information. This is particularly true given their connectivity to a network of information systems, supply chains, and health care facilities located outside of the immediate community, all part of a national health care system.

One of the difficulties that VAMC and MTF leadership may face under catastrophic response conditions will be determining how to parse available resources between two distinct mission profiles: service to their members and provision of care to the community at large. In this respect, there may be "internal" indicators specific to the VA or DoD system that will have to be evaluated in addition to the usual measures being used to determine local and regional capabilities. The community and the VA/DoD system may have different data needs, and community and national systems indicators may vary, so the systems used to collect these may not be standardized.
These facilities walk a fine line in a crisis situation: it is not in anyone's best interest if the level of care provided at the institution is inconsistent with that being provided in the community, yet these are not "community facilities." For example, VA facilities may bear substantially greater burdens than community hospitals during influenza epidemics affecting the elderly, while military facilities may bear substantially smaller ones; balancing these demands against a local coalition's resources may be very helpful in easing strain on the system, and proactive ways to accomplish this should be explored with the facilities (e.g., a local VA might prefer to accept patients with prior service connections in preference to non-selected patients during a community crisis). Consideration of CSC planning by leadership at the Veterans Integrated Service Network level, the Veterans Health Administration, and Defense Health Headquarters (DoD) will be crucial to the successful implementation of the tactics derived from the analysis of key indicators.

Legal Indicators and Triggers

Detailed examination of legal issues is outside the scope of this project, although there may be interesting issues regarding legal indicators and triggers that deserve additional attention (see Box 2-6).5 For more discussion and details about the ethical principles and legal issues, see the Institute of Medicine's previous reports on crisis standards of care (IOM, 2009, 2012a).

4 Unpublished work; information from committee co-chair Dan Hanfling.
5 The committee would like to thank James Hodge of Arizona State University for his comments on the material in Box 2-6.

BOX 2-6
Legal Indicators and Triggers

Indicators and triggers may need to be invoked in the legal and regulatory realms to facilitate the provision of health services. During the 2012-2013 seasonal influenza epidemic, for example, some local and state governments took the proactive step of declaring emergencies to facilitate their response efforts, including vaccine administration (City of Boston, 2013; State of New York Executive Chamber, 2013). Such issues vary state by state and require jurisdictional analysis and assessment of the need for emergency activation for purposes of

• Increasing visibility of the incident (risk communication);
• Involving emergency management and additional organizations;
• Interagency coordination;
• Enhancing staff availability and deploying volunteers;
• Requiring additional social distancing measures;
• Allowing interstate licensure reciprocity;
• Increasing vaccine availability;
• Expanding scopes of practice for relevant health care personnel (e.g., pharmacists authorized to provide pediatric influenza vaccine);
• Mobilizing specific resources; and/or
• Issuing waivers of specific statutory or regulatory requirements that may impede response efforts.

In many cases, such declarations are political in nature or made to address specific regulatory requirements. Even with a national public health emergency declaration, resulting state or local inconsistencies across a geographic region related to the timing and breadth of emergency powers require careful assessment and clear explanations to practitioners and the public. Note that a federal public health emergency declaration does not mean that states will make such a declaration, and vice versa. A consistent and proactive approach using indicators of disease prevalence and difficulties in the delivery of a conventional response to health care needs, as well as triggers related to the allocation of specific resources in shortage, may be helpful.
SUMMARY

In planning, facilities and agencies should first identify the key response strategies they will use. Second, the data sources and information that inform trigger thresholds should be examined and optimized. Third, the actions to be taken when a trigger is reached should be determined: are they scripted or non-scripted? Fourth, are there scripted tactics that can be employed, or should the tactics be non-scripted and incident-specific?

Determination of indicators and triggers can seem daunting. However, discussing these issues at all tiers of the emergency response system will help clarify and develop indicators and triggers that will inform decision making and help deliver the best possible care during a disaster, given the circumstances. The toolkit in the subsequent chapters will facilitate these conversations.

REFERENCES

AHRQ (Agency for Healthcare Research and Quality). 2005. National hospital available beds for emergencies and disasters (HAvBED) system. Rockville, MD: AHRQ. http://archive.ahrq.gov/prep/havbed/index.html (accessed April 13, 2013).
Alaskan Health Care Providers and Medical Emergency Preparedness-Pediatrics (MEP-P) Project. 2008 [draft]. Medical Alaskan technical recommendations for pediatric medical triage and resource allocation in a disaster. Alaskan Health Care Providers and Medical Emergency Preparedness-Pediatrics (MEP-P) Project. http://a2p2.com/oldsite/mep-p/ethics/MEP-P_Technical_Recommendations_with_Appendices_DRAFT_7-08.PDF (accessed February 14, 2013).
Asplin, B. R., T. J. Flottemesch, and B. D. Gordon. 2006. Developing models for patient flow and daily surge capacity research. Academic Emergency Medicine 13(11):1109-1113.
ASPR (Assistant Secretary for Preparedness and Response). 2013. MedMap. Washington, DC: Department of Health and Human Services. https://medmap.hhs.gov (accessed April 3, 2013).
Barbisch, D. F., and K. L. Koenig. 2006. Understanding surge capacity: Essential elements. Academic Emergency Medicine 13(11):1098-1102.
Bayram, J. D., and S. Zuabi. 2012. Disaster metrics: Quantification of acute medical disasters in trauma-related multiple casualty events through modeling of the Acute Medical Severity Index. Prehospital and Disaster Medicine 27(2):130-135.
Bayram, J. D., S. Zuabi, and I. Subbarao. 2011. Disaster metrics: Quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty incidents. Disaster Medicine and Public Health Preparedness 5(2):117-124.
Bayram, J. D., S. Zuabi, and M. J. Sayed. 2012. Disaster metrics: Quantitative estimation of the number of ambulances required in trauma-related multiple casualty events. Prehospital and Disaster Medicine 27(5):445-451.
Bradt, D. A., P. Aitken, G. Fitzgerald, R. Swift, G. O'Reilly, and B. Bartley. 2009. Emergency department surge capacity: Recommendations of the Australasian Surge Strategy Working Group. Academic Emergency Medicine 16(12):1350-1358.
Brownstein, J. S., C. C. Freifeld, and L. C. Madoff. 2009. Digital disease detection—harnessing the Web for public health surveillance. New England Journal of Medicine 360(21):2153-2157.
Buehler, J. W., A. Sonricker, M. Paladini, P. Soper, and F. Mostashari. 2008. Syndromic surveillance practice in the United States: Findings from a survey of state, territorial, and selected local health departments. Advances in Disease Surveillance 6(3):1-20. http://www.isdsjournal.org/articles/2618.pdf (accessed June 11, 2013).
Buehler, J. W., E. A. Whitney, D. Smith, M. J. Prietula, S. H. Stanton, and A. P. Isakov. 2009. Situational uses of syndromic surveillance. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 7(2):165-177.
Burkle, F. M., E. B. Hsu, M. Loehr, M. D. Christian, D. Markenson, L. Rubinson, and F. L. Archer. 2007. Definition and functions of Health Unified Command and Emergency Operations Centers for large-scale bioevent disasters within the existing ICS. Disaster Medicine and Public Health Preparedness 1(2):135-141.
Butler, D. 2013. When Google got flu wrong. Nature 494(7436):155-156.
California Department of Public Health and California Emergency Medical Services Authority. 2011. California public health and medical emergency operations manual. http://www.emsa.ca.gov/disaster/files/EOM712011.pdf (accessed February 14, 2013).
Carneiro, H. A., and E. Mylonakis. 2009. Google Trends: A Web-based tool for real-time surveillance of disease outbreaks. Clinical Infectious Diseases 49(10):1557-1564.
CDC (Centers for Disease Control and Prevention). 2003. Mass casualties predictor. Atlanta, GA: CDC. http://www.bt.cdc.gov/masscasualties/predictor.asp (accessed March 11, 2013).
CDC. 2010. Blast injuries: Fact sheets for professionals. Atlanta, GA: CDC. http://www.bt.cdc.gov/masscasualties/pdf/blast_fact_sheet_professionals-a.pdf (accessed March 10, 2013).
CDC. 2012a. BioSense program. Atlanta, GA: CDC. http://www.cdc.gov/biosense (accessed June 10, 2013).
CDC. 2012b. Overview of influenza surveillance in the United States. Atlanta, GA: CDC. http://www.cdc.gov/flu/pdf/weekly/overview.pdf (accessed March 5, 2013).
CDC. 2013. FluView: 2012-2013 influenza season week 14 ending April 6, 2013. Atlanta, GA: CDC. http://www.cdc.gov/flu/weekly (accessed March 5, 2013).
Challen, K., and D. Walter. 2006. Accelerated discharge of patients in the event of a major incident: Observational study of a teaching hospital. BMC Public Health 6(1):108.

City of Albuquerque, Office of Emergency Management. 2005. A strategic guide for the city-wide response to and recovery from major emergencies and disasters (Annex 6—health and medical). Albuquerque, NM: City of Albuquerque, Office of Emergency Management. http://www.cabq.gov/police/emergency-management-office/documents/Annex6HealthandMedical.pdf (accessed February 14, 2013).
City of Boston. 2013 (January 9). Mayor Menino declares public health emergency as flu epidemic worsens. http://www.cityofboston.gov/news/Default.aspx?id=5922 (accessed March 11, 2013).
CMS (Centers for Medicare & Medicaid Services). 2013. Meaningful use. Baltimore, MD: CMS. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Meaningful_Use.html (accessed May 3, 2013).
Colorado Department of Public Health and Environment. 2009. Guidance for alterations in the healthcare system during moderate to severe influenza pandemic. Denver, CO: Colorado Department of Public Health and Environment.
Cook, S., C. Conrad, A. L. Fowlkes, and M. H. Mohebbi. 2011. Assessing Google Flu Trends performance in the United States during the 2009 influenza virus A (H1N1) pandemic. PLoS ONE 6(8):e23510.
Davidson, S. J., K. L. Koenig, and D. C. Cone. 2006. Daily patient flow is not surge: Management is prediction. Academic Emergency Medicine 13(11):1095-1096.
de Boer, J. 1999. Order in chaos: Modeling medical management in disasters. European Journal of Emergency Medicine 6(2):141-148.
DeLia, D. 2006. Annual bed statistics give a misleading picture of hospital surge capacity. Annals of Emergency Medicine 48(4):384-388.
Devereaux, A. V., J. R. Dichter, M. D. Christian, N. N. Dubler, C. E. Sandrock, J. L. Hick, T. Powell, J. A. Geiling, D. E. Amundson, T. E. Baudendistel, D. A. Braner, M. A. Klein, K. A. Berkowitz, J. R. Curtis, and L. Rubinson. 2008. Definitive care for the critically ill during a disaster: A framework for allocation of scarce resources in mass critical care. From a Task Force for Mass Critical Care summit meeting, January 26-27, 2007, Chicago, IL. Chest 133(Suppl 5):S51-S66. http://www.ceep.ca/resources/Definitive-Care-Critically-Ill-Disaster.pdf (accessed March 4, 2013).
Dugas, A. F., Y. H. Hsieh, S. R. Levin, J. M. Pines, D. P. Mareiniss, A. Mohareb, C. A. Gaydos, T. M. Perl, and R. E. Rothman. 2012. Google Flu Trends: Correlated with emergency department influenza rates and crowding metrics. Clinical Infectious Diseases 54(4):463-469.
Dugas, A. F., M. Jalalpour, Y. Gel, S. Levin, F. Torcaso, and T. Igusa. 2013. Influenza forecasting with Google Flu Trends. PLoS ONE 8(2):e56176.
EMSA (California Emergency Medical Services Authority). 2007. Disaster Medical Services Division—Hospital Incident Command System (HICS). http://www.emsa.ca.gov/hics (accessed March 11, 2013).
Espino, J., W. Hogan, and M. Wagner. 2003. Telephone triage: A timely data source for surveillance of influenza-like diseases. AMIA Annual Symposium Proceedings 2003:215-219.
FEMA (Federal Emergency Management Agency). 2013a. National Incident Management System (NIMS). http://www.fema.gov/emergency/nims (accessed March 11, 2013).
FEMA. 2013b. Multiagency coordination systems. http://www.fema.gov/multiagency-coordination-systems (accessed May 15, 2013).
Florida Department of Health. 2011 (April 5). Pandemic influenza: Triage and scarce resource allocation guidelines. Tallahassee: Florida Department of Health. http://www.doh.state.fl.us/demo/bpr/PDFs/ACS-GUIDE-Ver10-5.pdf (accessed February 14, 2013).
Flu Near You. 2013. Flu Near You. https://flunearyou.org (accessed March 5, 2013).
Furbee, P. M., J. H. Coben, S. K. Smyth, W. G. Manley, D. E. Summers, N. D. Sanddal, T. L. Sanddal, J. C. Helmkamp, R. L. Kimble, R. C. Althouse, and A. T. Kocsis. 2006. Realities of rural emergency medical services disaster preparedness. Prehospital and Disaster Medicine 21(2):64-70.
Garrett, L. 2009 (June 12). Interview: Hurdles in declaring swine flu a pandemic. Council on Foreign Relations. http://www.cfr.org/public-health-threats/hurdles-declaring-swine-flu-pandemic/p19617 (accessed March 11, 2013).
GFT (Google Flu Trends). 2013. Google Flu Trends. http://www.google.org/flutrends (accessed March 5, 2013).
Ginsberg, J., M. H. Mohebbi, R. S. Patel, L. Brammer, M. S. Smolinski, and L. Brilliant. 2009. Detecting influenza epidemics using search engine query data. Nature 457(7232):1012-1014.
Gotham, I. J., D. L. Sottolano, M. E. Hennessy, J. P. Napoli, G. Dobkins, L. H. Le, R. H. Burhans, and B. I. Fage. 2007. An integrated information system for all-hazards health preparedness and response: New York State Health Emergency Response Data System. Journal of Public Health Management and Practice 13(5):486-496.

Gursky, E., T. V. Inglesby, and T. O’Toole. 2003. Anthrax 2001: Observations on the medical and public health response. Biosecurity and Bioterrorism 1(2):97-110.
Handler, J. A., M. Gillam, T. D. Kirsch, and C. F. Feied. 2006. Metrics in the science of surge. Academic Emergency Medicine 13(11):1173-1178.
Hanfling, D. 2011. Public health response to terrorism and bioterrorism: Inventing the wheel. In Remembering 9/11 and anthrax: Public health’s role in national defense. Washington, DC: Trust for America’s Health. http://healthyamericans.org/assets/files/TFAH911Anthrax10YrAnnvFINAL.pdf (accessed May 3, 2013).
HealthIT.gov. 2013. Meaningful use. http://www.healthit.gov/policy-researchers-implementers/meaningful-use (accessed May 15, 2013).
HealthMap. 2013. HealthMap. http://healthmap.org/en (accessed April 3, 2013).
Hick, J. L., K. L. Koenig, D. Barbisch, and T. A. Bey. 2008. Surge capacity concepts for health care facilities: The CO-S-TR model for initial incident assessment. Disaster Medicine and Public Health Preparedness 2(Suppl 1):S51-S57.
Hirshberg, A., G. S. Bradford, T. Granchi, M. J. Wall, K. L. Mattox, and M. Stein. 2005. How does casualty load affect trauma care in urban bombing incidents? A quantitative analysis. Journal of Trauma 58(4):686-695.
Indiana State Department of Health. 2009. Pandemic influenza outbreak plan. Indianapolis, IN: Indiana State Department of Health. http://www.state.in.us/isdh/files/PandemicInfluenzaPlan.pdf (accessed February 14, 2013).
IOM (Institute of Medicine). 2007a. Emergency medical services: At the crossroads. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=11629 (accessed June 7, 2013).
IOM. 2007b. Hospital-based emergency care: At the breaking point. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=11621 (accessed June 7, 2013).
IOM. 2009. Guidance for establishing crisis standards of care for use in disaster situations: A letter report. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=12749 (accessed April 3, 2013).
IOM. 2012a. Crisis standards of care: A systems framework for catastrophic disaster response. Washington, DC: The National Academies Press. http://www.nap.edu/openbook.php?record_id=13351 (accessed April 3, 2013).
IOM. 2012b. Public engagement on facilitating access to antiviral medications and information in an influenza pandemic: Workshop series summary. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=13404 (accessed May 31, 2013).
IOM and NRC (National Research Council). 2011. BioWatch and public health surveillance: Evaluating systems for the early detection of biological threats. Abbreviated version. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=12688 (accessed June 7, 2013).
Israel Ministry of Health. 1976. Sahar Committee for Hospital Preparedness. Tel Aviv: Israel Ministry of Health.
Jenkins, J. L., R. E. O’Connor, and D. C. Cone. 2006. Differentiating large-scale surge versus daily surge. Academic Emergency Medicine 13(11):1169-1172.
Kaji, A., K. L. Koenig, and T. Bey. 2006. Surge capacity for healthcare systems: A conceptual framework. Academic Emergency Medicine 13(11):1157-1159.
Kansas Department of Health and Environment. 2013. Kansas pandemic influenza preparedness and response plan. Topeka: Kansas Department of Health and Environment. http://www.kdheks.gov/cphp/download/KS_PF_Plan.pdf (accessed February 14, 2013).
Kanter, R. K. 2007. Strategies to improve pediatric disaster surge response: Potential mortality reduction and tradeoffs. Critical Care Medicine 35(12):2837-2842.
Kelen, G. D., C. K. Kraus, M. L. McCarthy, E. Bass, E. B. Hsu, G. Li, J. J. Scheulen, J. B. Shahan, J. D. Brill, and G. B. Green. 2006. Inpatient disposition classification for the creation of hospital surge capacity: A multiphase study. Lancet 368(9551):1984-1990.
Kelen, G. D., M. L. McCarthy, C. K. Kraus, R. Ding, E. B. Hsu, G. Li, J. B. Shahan, J. J. Scheulen, and G. B. Green. 2009. Creation of surge capacity by early discharge of hospitalized patients at low risk for untoward events. Disaster Medicine and Public Health Preparedness 3(Suppl 2):S10-S16.
Kellermann, A. L., A. P. Isakov, R. Parker, M. T. Handrigan, and S. Foldy. 2010. Web-based self-triage of influenza-like illness during the 2009 H1N1 influenza pandemic. Annals of Emergency Medicine 56(3):288-294.

Kentucky Department of Public Health, Division of Epidemiology and Health Planning. 2007. Kentucky pandemic influenza preparedness plan. Frankfort: Kentucky Department of Public Health, Division of Epidemiology and Health Planning. http://chfs.ky.gov/nr/rdonlyres/6cd366d2-6726-4ad0-85bb-e83cf769560e/0/kypandemicinfluenzapreparednessplan.pdf (accessed February 14, 2013).
King County, Seattle Health Care Coalition, and Northwest Healthcare Response Network. 2009 (unpublished). H1N1 ICU data questions.
Kirsch, T., L. Sauer, and D. Guha-Sapir. 2012. Analysis of the international and US response to the Haiti earthquake: Recommendations for change. Disaster Medicine and Public Health Preparedness 6(3):200-208.
Koonin, L. M., and D. Hanfling. 2013. Broadening access to medical care during a severe influenza pandemic: The CDC nurse triage line project. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 11(1):75-80.
Kosashvili, Y., L. Aharonson-Daniel, K. Peleg, A. Horowitz, D. Laor, and A. Blumenfeld. 2009. Israeli hospital preparedness for terrorism-related multiple casualty incidents: Can the surge capacity and injury severity distribution be better predicted? Injury 40(7):727-731.
Lombardo, J. S., H. Burkom, and J. Pavlin. 2004. Essence II and the framework for evaluating syndromic surveillance systems. Morbidity and Mortality Weekly Report 53(Suppl):159-165.
Lurie, N. 2012 (June 7). Nicole Lurie to John Parker and members of the National Biodefense Science Board (NBSB). Letter. Washington, DC: ASPR. http://www.phe.gov/Preparedness/legal/boards/nbsb/Documents/sa-evaluation.pdf (accessed June 10, 2013).
Magruder, S. F., S. H. Lewis, A. Najmi, and E. Florio. 2004. Progress in understanding and using over-the-counter pharmaceuticals for syndromic surveillance. Morbidity and Mortality Weekly Report 53(Suppl):117-122.
Manley, W. G., P. M. Furbee, J. H. Coben, S. K. Smyth, D. E. Summers, R. C. Althouse, R. L. Kimble, A. T. Kocsis, and J. C. Helmkamp. 2006. Realities of disaster preparedness in rural hospitals. Disaster Management and Response 4(3):80-87.
MappyHealth. 2013. MappyHealth. http://mappyhealth.com (accessed March 5, 2013).
McCarthy, M. L., D. Aronsky, and G. D. Kelen. 2006. The measurement of daily surge and its relevance to disaster preparedness. Academic Emergency Medicine 13(11):1138-1141.
McCarthy, M. L., S. L. Zeger, R. Ding, D. Aronsky, N. R. Hoot, and G. D. Kelen. 2008. The challenge of predicting demand for emergency department services. Academic Emergency Medicine 15(4):337-346.
Merriam-Webster Dictionary. 2013. Definition of “threshold.” Springfield, MA: Encyclopaedia Britannica. http://www.merriam-webster.com/dictionary/threshold (accessed April 3, 2013).
MIEMSS (Maryland Institute for Emergency Medical Services Systems). 2013. EMRC/SYSCOM. http://www.miemss.org/home/Departments/EMRCSYSCOM/tabid/139/Default.aspx (accessed February 1, 2013).
Minnesota Department of Health, Office of Emergency Preparedness. 2012. Minnesota healthcare system preparedness program. St. Paul: Minnesota Department of Health, Office of Emergency Preparedness. http://www.publichealthpractices.org/sites/cidrappractices.org/files/upload/372/372_protocol.pdf (accessed February 14, 2013).
NBSB (National Biodefense Science Board). 2013. An evaluation of our nation’s public health and healthcare situational awareness: A brief report from the National Biodefense Science Board (NBSB). Washington, DC: ASPR. http://www.phe.gov/Preparedness/legal/boards/nbsb/Documents/sa-evaluation.pdf (accessed June 10, 2013).
New Hampshire Department of Health and Human Services. 2007. Influenza pandemic public health preparedness and response. Concord: New Hampshire Department of Health and Human Services. http://www.dhhs.state.nh.us/dphs/cdcs/avian/documents/pandemic-plan.pdf (accessed February 14, 2013).
New York State Department of Health. 2008. Pandemic influenza plan. Albany: New York State Department of Health. http://www.health.ny.gov/diseases/communicable/influenza/pandemic/plan/docs/pandemic_influenza_plan.pdf (accessed February 14, 2013).
Northern Utah Healthcare Coalition. 2010. Northern Utah regional medical surge capacity plan. http://www.brhd.org/index.php?option=com_content&task=view&id=457&Itemid=31 (accessed February 14, 2013).
Ohio Department of Health and Ohio Hospital Association. 2012 [draft]. Ohio medical coordination plan: Emergency medical service annex. Columbus: Ohio Department of Health.
Ortiz, J. R., H. Zhou, D. K. Shay, K. M. Neuzil, A. L. Fowlkes, and C. H. Goss. 2011. Monitoring influenza activity in the United States: A comparison of traditional surveillance systems with Google Flu Trends. PLoS ONE 6(4):e18687.
Peleg, K., and A. L. Kellermann. 2009. Enhancing hospital surge capacity for mass casualty events. Journal of the American Medical Association 302(5):565-567.

Perry, A. G., K. M. Moore, L. E. Levesque, W. L. Pickett, and M. J. Korenberg. 2010. A comparison of methods for forecasting emergency department visits for respiratory illness using Telehealth Ontario calls. Canadian Journal of Public Health 101(6):464-469.
Polgreen, P. M., Y. Chen, D. M. Pennock, F. D. Nelson, and R. A. Weinstein. 2008. Using Internet searches for influenza surveillance. Clinical Infectious Diseases 47(11):1443-1448.
Price, R. A., D. Fagbuyi, R. Harris, D. Hanfling, F. Place, T. B. Todd, and A. L. Kellermann. 2013. Feasibility of web-based self-triage by parents of children with influenza-like illness: A cautionary tale. Journal of the American Medical Association Pediatrics 167(2):112-118.
Rivara, F. P., A. B. Nathens, G. J. Jurkovich, and R. V. Maier. 2006. Do trauma centers have the capacity to respond to disasters? Journal of Trauma 61(4):949-953.
Rolland, E., K. Moore, V. A. Robinson, and D. McGuiness. 2006. Using Ontario’s “telehealth” health telephone helpline as an early-warning system: A study protocol. BMC Health Services Research 6:10-16.
Satterthwaite, P. S., and C. J. Atkinson. 2012. Using “reverse triage” to create hospital surge capacity: Royal Darwin Hospital’s response to the Ashmore Reef disaster. Emergency Medicine Journal 29(2):160-162.
Schmidt, C. W. 2012. Using social media to predict and track disease outbreaks. Environmental Health Perspectives 120(1):A31-A33.
Schull, M. J. 2006. Hospital surge capacity: If you can’t always get what you want, can you get what you need? Annals of Emergency Medicine 48(4):389-390.
Schweigler, L. M., J. S. Desmond, M. L. McCarthy, K. J. Bukowski, E. L. Ionides, and J. G. Younger. 2009. Forecasting models of emergency department crowding. Academic Emergency Medicine 16(4):301-308.
Sickweather. 2013. Sickweather. http://www.sickweather.com (accessed March 5, 2013).
Signorini, A., A. M. Segre, and P. M. Polgreen. 2011. The use of Twitter to track levels of disease activity and public concern in the U.S. during the Influenza A H1N1 pandemic. PLoS ONE 6(5):e19467.
Sprung, C. L., J. L. Zimmerman, M. D. Christian, G. M. Joynt, J. L. Hick, B. Taylor, G. A. Richards, C. Sandrock, R. Cohen, and B. Adini. 2010. Recommendations for intensive care unit and hospital preparations for an influenza epidemic or mass disaster: Summary report of the European Society of Intensive Care Medicine’s Task Force for intensive care unit triage during an influenza epidemic or mass disaster. Intensive Care Medicine 36(3):428-443.
State of Michigan. 2012a. The Michigan Syndromic Surveillance System (MSSS)—Electronic syndromic submission to the Michigan Department of Community Health: Background and electronic syndromic surveillance reporting detail for MSSS. Lansing, MI: Department of Community Health. http://michiganhit.org/docs/Syndromic%20Submission%20Guide.pdf (accessed May 21, 2013).
State of Michigan. 2012b. Guidelines for ethical allocation of scarce medical resources and services during public health emergencies in Michigan. Lansing, MI: Department of Community Health, Office of Public Health Preparedness. http://www.mymedicalethics.net/Documentation/Michigan%20DCH%20Ethical%20Scarce%20Resources%20Guidelines%20v.2.0%20rev.%20Nov%202012%20Guidelines%20Only.pdf (accessed February 14, 2013).
State of Michigan. 2013. Michigan Emergency Department Syndromic Surveillance System. Lansing, MI: Department of Community Health. http://www.michigan.gov/mdch/0,4612,7-132-2945_5104_31274-107091--,00.html (accessed April 12, 2013).
State of New York Executive Chamber. 2013 (January 12). Declaring a disaster emergency in the state of New York and temporarily authorizing pharmacists to immunize children against seasonal influenza. http://www.governor.ny.gov/executiveorder/90 (accessed March 11, 2013).
Subbarao, I., M. K. Wynia, and F. M. Burkle. 2010. The elephant in the room: Collaboration and competition among relief organizations during high-profile disasters. Journal of Clinical Ethics 21(4):328-334.
Tadmor, B., J. McManus, and K. L. Koenig. 2006. The art and science of surge: Experience from Israel and the U.S. military. Academic Emergency Medicine 13(11):1130-1134.
Tennessee Department of Health. 2009. Pandemic influenza response plan. Nashville: Tennessee Department of Health. http://health.state.tn.us/ceds/PDFs/2006_PanFlu_Plan.pdf (accessed February 14, 2013).
Timbie, J. W., J. S. Ringel, D. S. Fox, D. A. Waxman, F. Pillemer, C. Carey, M. Moore, V. Karir, T. J. Johnson, N. Iyer, J. Hu, R. Shanman, J. W. Larkin, M. Timmer, A. Motala, T. R. Perry, S. Newberry, and A. L. Kellermann. 2012. Allocation of scarce resources during mass casualty events. Rockville, MD: AHRQ. http://www.ncbi.nlm.nih.gov/books/NBK98854/pdf/TOC.pdf (accessed June 6, 2013).

Utah Hospitals and Health Systems Association for the Utah Department of Health. 2009. Utah pandemic influenza hospital and ICU triage guidelines. Salt Lake City: Utah Hospitals and Health Systems Association for the Utah Department of Health. http://pandemicflu.utah.gov/plan/med_triage081109.pdf (accessed February 14, 2013).
van Dijk, A., D. McGuiness, E. Rolland, and K. M. Moore. 2008. Can Telehealth Ontario respiratory call volume be used as a proxy for emergency department respiratory visit surveillance by public health? Canadian Journal of Emergency Medicine 10(1):18-24.
WHO (World Health Organization). 2011. Strengthening response to pandemics and other public-health emergencies. Report of the review committee on the functioning of the international health regulations (2005) and on pandemic influenza (H1N1) 2009. Geneva, Switzerland: WHO. http://www.who.int/ihr/publications/RC_report/en/index.html (accessed March 11, 2013).
Wiler, J. L., R. T. Griffey, and T. Olsen. 2011. Review of modeling approaches for emergency department patient flow and crowding research. Academic Emergency Medicine 18(12):1371-1379.
Wisconsin Hospital Association, Inc. 2010. Wisconsin executive summary: Allocation of scarce resources project. Madison: Wisconsin Hospital Association, Inc. http://www.wha.org/scarceResources.aspx (accessed February 14, 2013).
Wolf, Y. I., A. Nikolskaya, J. L. Cherry, C. Viboud, E. Koonin, and D. J. Lipman. 2010. Projection of seasonal influenza severity from sequence and serological data. PLoS Currents 2:RRN1200.