On a spring evening, a paramedic witnesses a tornado touch down in town. Debris is flying. The tornado seems to be a perfect indicator (providing discrete information that is certain, and can be easily acted on) to trigger emergency medical services (EMS) and health care organization disaster plan activation. This may be true in a small community. In a large community, additional information is required before making this decision. How big was the tornado? Where did the tornado touch down? Did it primarily affect an industrial park on a Saturday, or a school on a weekday?
The storm system that generated the massive tornado that struck Joplin, Missouri, in 2011 (which appropriately and immediately triggered contingency and crisis responses in the community) also spawned a tornado that struck a neighborhood in Minneapolis, Minnesota. No EMS agencies or hospitals activated their disaster plans, as news footage from the scene and early EMS reports indicated mostly minor injuries, all within the scope of conventional operations. Thus, even seemingly ideal indicators may require some processing to determine whether a "trigger" threshold has been reached, and these decisions may be directly tied to the resources available in the community. This is why the agency and stakeholder discussions of indicators and triggers outlined in this report are critical: they help clarify how indicators can be used to support operational decision making, when triggers can be automatically activated (scripted), and when they require expert analysis prior to a decision (non-scripted).
This chapter examines important concepts and considerations related to indicators and triggers. The material in this chapter will help provide background to the toolkit discussions. The chapter begins by providing definitions and examples of indicators and triggers. Next, the chapter discusses how to develop useful and appropriate indicators and triggers. Following this, the chapter presents some limitations and issues related to indicators. Finally, the chapter discusses systems-level considerations and provides several examples of existing data systems.
Key points: Indicators are measures or predictors of changes in demand and/or resource availability; triggers are decision points. Indicators and triggers guide transitions along the continuum of care, from conventional to contingency to crisis and in the return to conventional.
Indicators and triggers represent the information and actions taken at specific thresholds that guide incident recognition, response, and recovery. Box 2-1 provides definitions; the concepts behind the definitions are discussed in greater detail below.
Indicator information may be available in many forms. Sample indicators and associated triggers and tactics are listed in Table 2-1. More detailed descriptions are available in the discipline-specific discussion toolkits (Chapters 4-9). When a specific indicator crosses a threshold that the community recognizes as requiring action, this represents a trigger point, with actions determined by community plans. These actions include activation of a general disaster plan, which often occurs at the threshold between conventional and contingency care, and activation of crisis standards of care (CSC) plans, which occurs at the threshold between contingency and crisis care.
Key points: It can be challenging to identify useful indicators and triggers from among the large and varied sources of available data. Specific numeric “bright line” thresholds for indicators and triggers are concrete and attractive because they are easily recognized, but for many situations the community/agency actions are not as clear-cut or may require significant data analysis before action. Rather than creating a laundry list of possible indicators and triggers, it may be helpful to consider four steps: (1) identify key response strategies and actions, (2) identify and examine potential indicators, (3) determine trigger points, and (4) determine tactics.
The amount of information available in health care today is enormous and expanding. It is attractive to look at many metrics and consider their use as indicators. However, multiple factors may make data monitoring less useful than it originally appears, and it can be challenging to detect or characterize an evolving event amid usual variability in large and complex sets of data (see the “Indicators Limitations and Issues” section below). Specific numeric “bright line” thresholds for indicators and triggers are concrete and attractive because they are easily recognized, and for certain situations they are relatively easy to develop (e.g., a single case of anthrax). However, for many situations the community/agency actions are not as clear-cut or may require significant data analysis to determine the point at which a reasonable threshold may be established (e.g., multiple cases of diarrheal illness in a community).
The accompanying toolkits provide discipline-specific tables and materials to discuss potential indicators and triggers that guide CSC implementation. This section presents key concepts that will help inform the development of these discipline-, agency-, and organization-specific indicators and triggers. Rather than creating a laundry list of possible indicators and triggers, it may be helpful to consider the following four steps. These steps should be considered at the threshold from conventional to contingency care, from contingency to crisis care, and in the return to conventional care. They should also be considered for both slow-onset and no-notice incidents. Subsequent discussion below expands on these steps.
1. Identify key response strategies and actions that the facility or agency would use to respond to an incident. (Examples include disaster declaration, establishment of an emergency operations center [EOC] and multiagency coordination, establishment of alternate care sites, and surge capacity expansion.)
Indicator: A measurement, event, or other data that is a predictor of change in demand for health care service delivery or availability of resources. This may warrant further monitoring, analysis, information sharing, and/or select implementation of emergency response system actions.
Actionable indicator: An indicator that can be impacted through actions taken within an organization or a component of the emergency response system (e.g., a hospital detecting high patient census).
Predictive indicator: An indicator that cannot be impacted through actions taken within an organization or component of the emergency response system (e.g., a hospital receiving notification that a pandemic virus has been detected).
Certain data: Data that require minimal verification and analysis to initiate a trigger.
Uncertain data: Data that require interpretation to determine appropriate triggers and tactics.
Threshold: “A level, point, or value above which something is true or will take place and below which it is not or will not” (Merriam-Webster Dictionary, 2013). A trigger point may be designed to occur at a threshold recognized by the community or agency to require a specific response. Trigger points and thresholds may be the same in many circumstances, but each threshold does not necessarily have an associated trigger.
Trigger: A decision point based on changes in the availability of resources that requires adaptations to health care services delivery along the care continuum (contingency, crisis, and return toward conventional).
Crisis care trigger: The point at which the scarcity of resources requires a transition from contingency care to crisis care, implemented within and across the emergency response system. This marks the transition point at which resource allocation strategies focus on the community rather than the individual.
Scripted trigger: A predefined decision point that can be initiated immediately upon recognizing an associated indicator. Scripted triggers lead to scripted tactics.
Non-scripted trigger: A decision point that requires analysis and leads to implementation of non-scripted tactics.
Scripted tactic: A tactic that is predetermined (i.e., can be listed on a checklist) and is quickly implemented by frontline personnel with minimal analysis.
Non-scripted tactic: A tactic that varies according to the situation; it is based on analysis, multiple or uncertain indicators, recommendations, and, in certain circumstances, previous experience.
Sample Indicators, Triggers, and Tactics by Discipline

| Discipline | Sample Indicator | Sample Trigger | Sample Tactics |
| Emergency management | National Weather Service (NWS) watches/warnings | NWS forecasts Category 4 hurricane landfall in 96 hours | Issue evacuation/shelter orders; determine likely impact; support hospital evacuations with transportation resources; provide risk communication to the public about event impact |
| Public health | Epidemiology information | Predicted cases exceed epidemic threshold | Risk communication; consideration of need for medical countermeasures/alternate care site planning; establish situational awareness and coordination with EMS/hospitals/long-term care facilities |
| Emergency medical services (EMS) | 911 call | X casualties | Automatic assignment of X ambulances and a supervisor; assignment of incident-specific radio talk group |
| Inpatient | Emergency department (ED) wait times | ED wait times exceed X hours | Increase staffing; divert patients to clinics/urgent care; activate inpatient plans to rapidly accommodate pending admissions |
| Outpatient | Demand forecasting/epidemiology information | Unable to accommodate number of requests for appointments/service | Expand hours and clinic staffing; prioritize home care service provision; increase phone support |
| Behavioral health | Crisis hotline call volume | Unable to accommodate call volume | Activate additional mental health hotline resources; "immunization" via risk communication; implement psychological first aid (PFA) techniques and risk assessment screening in affected areas |
2. Identify and examine potential indicators that inform the decision to initiate these actions. (Indicators may draw on a wide range of data sources, including, for example, bed availability, a 911 call, or witnessing a tornado.)
3. Determine trigger points for taking these actions. Scripted triggers may be derived from certain indicators. If scripted triggers are inappropriate because the indicators require additional assessment and analysis, it will be important to determine the process for arriving at non-scripted triggers (i.e., who is notified/briefed, who provides the assessment and analysis, and who makes the decision to implement the tactic).
4. Determine tactics that could be implemented at these trigger points. Scripted triggers may appropriately lead to scripted tactics and a rapid, predefined response.
Predicting every disaster scenario (and related key response strategies, actions, and tactics) is impossible, but following these steps can help planners focus on key sources of information that act as indicators and determine whether the information supports decisions to implement (trigger) specific tactics. These four steps form the basis of the approach taken in this report and will be expanded on in the toolkit with information and examples for each major component of the emergency response system.
Identify Key Response Strategies and Actions
Key point: In planning, organizations and other entities should first determine the response strategies and actions that will be taken in response to an incident.
Rather than jumping straight into enumeration of indicators and triggers, it is valuable to first identify key response strategies and actions, and then consider what indicators and triggers would be most helpful in deciding to implement these response strategies and actions. Key response strategies and actions are determined by community plans:
• Agency/facility triggers into contingency care generally involve activation of facility or agency disaster plans, which produces additional surge capacity that cannot be achieved in conventional response (Barbisch and Koenig, 2006; Hick et al., 2008; Kaji et al., 2006). They are usually agency/ facility-specific due to variability in facility size and resources.
• System-based triggers for coalition, region, or health care system situational awareness, information sharing, and resource management should be established, for example, when more than one coalition facility declares a disaster, when victims are taken to more than three hospitals, or when staff, space, or supply issues are anticipated. There may be significant concordance between regions and coalitions on these triggers, though geographic differences need to be factored in.
• Crisis care triggers tend to be based on exhaustion of specific operational resources, which requires that a community, rather than an individual, view be taken of resource allocation strategies. Though the threshold may be crossed at an individual facility, it is critical that a system-based response be initiated whenever this occurs in order to diffuse the resource demands and ensure that as consistent a level of care as possible is provided. Most of these triggers will be consistent between facilities and regions and will revolve around lack of appropriate staff, space, or specific supplies. It is important to appreciate that an institutional/agency goal is to avoid reaching a crisis care trigger whenever possible through proactive incident management (e.g., the National Incident Management System [NIMS], the Hospital Incident Command System [HICS]) and logistics efforts in the facility and region (EMSA, 2007; FEMA, 2013a).
A community may have many more triggers than those noted here incorporated in existing emergency response plans (e.g., criteria for a second-alarm fire, indications for medical director notification, VIP patient protocols). To avoid confusion, trigger discussions should be clarified within the specific operational context (e.g., "crisis care trigger"). Different communities and facilities will clearly have different thresholds based on their resources, so similarity of triggers across communities and facilities cannot be assumed; during an incident, it is far more helpful to inquire or share details about the specific needs of the facility than simply to note that a trigger event has occurred (a circuit breaker trip does not tell the building supervisor what the problem is, just that there may be a problem). Contextual information is important to help frame the specific issue of concern.
Identify and Examine Potential Indicators
Key points: After an agency or a facility determines what actions or strategies are key to its responsibilities during an incident, it should examine and optimize indicator data sources that inform initiation of these actions. Indicator data may be categorized using two primary distinctions: predictive versus actionable and certain versus uncertain. Predictive indicators cannot be directly impacted by actions taken by the agency/facility; actionable indicators are under the control of the agency/facility. An indicator that is actionable for one agency may be predictive for another. Certain data require less analysis before action; uncertain data require interpretation before action. Understanding these characteristics of indicators helps inform decisions about how best to use them.
Indicators and triggers can lead to decisions to implement response tactics along two primary pathways. These two pathways are illustrated in Figure 2-1. One pathway begins with an actionable indicator based on certain data, which could appropriately lead to a scripted[1] trigger and associated scripted (specific, predetermined) tactics. Examples of this first pathway would be a hospital trauma team activation or a first alarm response to report of a fire in a building. A second pathway begins with a predictive indicator based on uncertain data, which would require additional analysis and assessment to reach a non-scripted trigger decision and employment of non-scripted (variable) tactics. An example of this second pathway would be the pathway leading to the declaration of an influenza pandemic. Regardless of the certainty of the data, each pathway passes through a "filter" process in which information is analyzed, assessed, and validated. This process occurs even in the context of certain data, although the filtering requirements are far less than for uncertain data. The remainder of this section uses the figure as a basis for additional discussion of these concepts.
Indicator data may be categorized using two primary distinctions: predictive versus actionable and certain versus uncertain.
Predictive indicators can be monitored, but cannot be directly impacted through actions taken within an organization or component of the emergency response system. Examples include monitoring of weather, epidemiologic data, or other such information. Data monitoring at more than one site generally yields information that is predictive, and data monitoring in aggregate may be of use from a system coordination viewpoint (e.g., epidemiology data that drive treatment decision making, system capacity in a large health care system) rather than at the facility level, where data monitoring is less likely to yield information that is not already evident to the providers.
In contrast, actionable indicators are under the control of an agency or a facility (and usually only actionable at that level; the more these data are aggregated, generally the less specific and actionable they become). Examples of these types of data are staffed hospital bed capacity, emergency department (ED) wait times, and other operational data that may be affected directly by actions such as increasing staffed beds or activating call-back of personnel.
An indicator that is actionable for one agency may be predictive for another. For example, prolonged ED wait times at a local hospital are actionable for the hospital itself, but they are predictive for the local public health agency (as the agency cannot directly influence the indicator).
[1] In business and engineering, these are often referred to as programmed/non-programmed triggers. The committee believed that because these terms did not have wide usage in the public and medical preparedness communities, they should be tied to the scripted and non-scripted tactics for consistency and ease of understanding. See Box 1-5 in Chapter 1 for additional discussion about decision making in crises.
Relationships among indicators, triggers, and tactics.
*Interpret indicators, other available data, impact, and resources—this may occur over minutes (e.g., developing an initial response to a fire) or days (e.g., developing a response to the detection of a novel virus).
NOTE: In this figure, an indicator is based on either certain data, which are sufficient to activate a trigger, or uncertain data, which require additional analysis prior to action. It is important to note several characteristics that may be helpful in shaping planning:
• All actions require at least minimal validation of data or processing of data—the triangle at the center of the figure shows the relative amount of processing expertise and time required (i.e., the thicker base of the triangle represents more processing required).
• Indicators that are actionable typically involve certain data that can lead to scripted triggers that staff can initiate without further analysis (e.g., if a mass casualty incident involves >20 victims, the mass casualty incident [MCI] plan is activated).
• Indicators that are predictive (e.g., epidemiology data) typically involve uncertain data that require interpretation prior to “trigger” action.
• The smaller the community or the fewer resources available, the more certain and scripted the triggers can become.
• The larger the community (or state/national level) and the more resources available, the less certain the data become as they do not reflect significant variability in resource availability at the local level—thus, the more expert interpretation is often required prior to action (e.g., state level data may reveal available beds, but select facilities and/or jurisdictions may be far beyond conventional capacity).
• The larger or more direct the impact, the more certain the data (e.g., when the tornado hits your hospital, there is no question you should trigger your disaster plan and implement contingency or crisis care tactics as required).
• Scripted triggers are quickly implemented by frontline personnel with minimal analysis—the disadvantage is that the scripted tactics may commit too few or too many resources to the incident (e.g., first alarm response to report of a fire in a building).
• Non-scripted triggers are based on expert analysis rather than a specific threshold and allow implementation of tactics that are tailored to the situation (non-scripted tactics). Trigger decisions may be based on expertise, experience, indicator(s) interpretation, etc., and may be made quickly or take significant time based on the information available.
• Ongoing monitoring and additional analysis of indicators will help assess the current situation and the impact of the tactics.
The data on which indicators are based may be certain (requiring less analysis) or uncertain (requiring interpretation prior to action). Most predictive indicators tend to be based on uncertain data, though in some cases enough certain data are provided to make immediate decisions (e.g., tornado directly hits a hospital). Actionable indicators usually are based on certain data. It is important to note that decision making in crises often requires acting on uncertain information. The fact that information is uncertain means that additional assessment and analysis may be required, but this should not impede the ability to plan and act.
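The two-pathway logic described above (and in Figure 2-1) can be sketched as a simple routing decision. This is an illustrative sketch only: the function name and the returned labels are hypothetical, and real trigger decisions also pass through the analysis/validation "filter" discussed in the text, even when the data are certain.

```python
# Illustrative sketch of the two pathways from Figure 2-1.
# The function name and labels are hypothetical, not part of the
# source framework; real decisions always involve some filtering.

def route_indicator(actionable: bool, certain: bool) -> str:
    """Map indicator characteristics to the kind of trigger decision."""
    if actionable and certain:
        # Minimal filtering: frontline staff can act on a scripted trigger
        return "scripted trigger -> scripted tactics"
    # Predictive and/or uncertain data require assessment and analysis
    # before a non-scripted trigger decision can be reached
    return "non-scripted trigger -> non-scripted tactics"

# Example: hospital trauma team activation (actionable, certain data)
print(route_indicator(actionable=True, certain=True))
# Example: declaring an influenza pandemic (predictive, uncertain data)
print(route_indicator(actionable=False, certain=False))
```

Note that an indicator that is actionable but based on uncertain data still routes to the analytical pathway in this sketch, consistent with the filtering process described above.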
The utility of the indicator should be considered separately from the utility of the available data; for example, while bed availability may be a useful indicator, the available data in a community may not be useful if they are of poor quality. Indicator and data limitations are discussed further below. When data are required to make decisions, the following issues may help frame higher-level or interagency discussion. The discipline-specific discussions later in the report provide more specific key questions.
• What are the key agency decisions and actions relative to disaster declarations and entering crisis standards of care?
• What is the rationale for the use of data to inform these decisions and actions?
• When are data needed (prior to the incident, during, or both)?
• Are the data currently available? (If not, how easily are they gathered and reported? If so, from what source, and how timely are the data?)
• Will the data be accurate? (E.g., do data rely on active data entry, or are they passively collected from electronic systems such as electronic medical records? Are they being reported the same way from all entities?)
• How will the data be collected/used/shared/processed/analyzed (including consideration of issues of proprietary information, concerns about the ability of state agencies to “take” reported assets, etc.)?
• How do the data drive actions? If the data do not affect agency/facility actions, they likely are not worthwhile collecting unless they are of greater benefit to public health in aggregate (and the facility will receive feedback on the information provided).
Determine Triggers and Tactics
Key points: After an agency or a facility has determined potential indicators, the facility or agency should identify trigger points and actions that should be taken when the trigger is reached. This includes considering the extent to which the indicators need to be analyzed prior to action and determining whether scripted (predetermined) triggers and tactics are appropriate or whether the triggers should be non-scripted and customized to the situation. It is important to strike a balance between enabling quick action when time is of the essence, but not “overscripting” when time will allow the tactics to be more closely tailored to the situation. It is also important to define who is notified about indicators, who analyzes the indicator data, and who can act on that information.
This section discusses the analysis, assessment, and validation of indicators, and outlines considerations for determining whether there are scripted triggers and tactics that can be employed, or whether the triggers and tactics should be non-scripted and incident-specific.
Analyze, Assess, and Validate
All data require some validation or interpretation, however minimal, prior to activating a trigger based on the data. This may be as simple as understanding the reliability of a data feed, making a phone call to confirm, or asking additional questions of a 911 caller. Some data require significant validation. For example, an indicator of gastroenteritis in a community that achieves a threshold may require significant epidemiological investigation just to determine whether the presence of disease in the community is a valid indicator of a sentinel event, or simply represents a coincidence or normal variant.
For no-notice disaster incidents, the initial indicator is often a 911 call reporting a mass casualty incident, and all that remains is determining a threshold for the dispatcher to trigger the mass casualty plan for the agency. For slow-onset (e.g., pandemic, flood, hurricane) incidents it may not be as simple, and multiple factors may have to be considered when weighing decisions about clinical care, hospital evacuation, etc.
Defining who analyzes and can act on the uncertain data (and how the indicator comes to their attention) is very important. These personnel should have sufficient expertise to consider resources available, time
of day, etc., in making their decision—for example, a hospital physician with authority to activate the facility disaster plan hears that a tornado has touched down somewhere in the community. In a large community with multiple hospitals, no disaster plan activation may be needed on a Tuesday at 3 p.m., for example, but if media reports show major damage and it is Saturday evening, the trigger for the hospital disaster plan should be pulled.
Scripted and Non-Scripted Triggers
Indicators that provide a rationale for informed decision making may make it possible to set thresholds for analysis or trigger actions. The following questions are useful to ask about each indicator considered relevant to agency/facility actions:
• Is there a relevant trigger threshold for this resource/category?
• Is it based on an incident report, or based on resource use/capacity?
• Is it predictable enough to act as a trigger?
• How often will the trigger threshold be reached? (If the trigger threshold is rarely reached, a certain degree of oversensitivity/overresponse is appropriate.)
• What actions are required when the trigger is reached (activation of disaster plan, opening of EOC, triage of resources)?
• Are these actions congruent with other agencies/facilities in the area? (Triggers will not be identical due to differences in facility/agency resources, but the actions taken should be congruent—see further discussion below.)
It is important to strike a balance: triggers should enable appropriate action, but should not be "overscripted" when time is not of the essence. An example can be seen in the decision taken by the World Health Organization during the 2009 H1N1 pandemic: for some time it chose not to declare H1N1 a pandemic, even though all of the established criteria had been met, because of the limited severity of the disease (Garrett, 2009; WHO, 2011). Departing from scripted criteria in this way can create confusion and inconsistencies; thus, a range of response options should be specified when the actions taken require a level of analysis and the impact and data are less certain.
Triggers may be scripted or non-scripted; Table 2-2 presents a comparison of the properties of each type of trigger. Scripted triggers are very helpful when time is of the essence. They are usually based on information that is certain enough for frontline personnel to take action without significant analysis. For example, checklists and standard operating procedures may specify scripted “if/then” actions and tactics such as
• Fire on a hospital unit = evacuate patients to adjacent smoke-free compartment
• Mass casualty incident (MCI) involving more than 10 victims = activate EMS MCI plan
• Health alert involving novel illness = notify emergency management group
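As a minimal sketch, "if/then" checklist items like those above can be represented as a simple lookup from indicator to predefined tactic. The indicator keys, threshold value, and tactic strings below are hypothetical illustrations patterned on the examples in the text, not any agency's actual plan.

```python
# Minimal sketch of scripted "if/then" triggers as a lookup table.
# All indicator names, the threshold, and tactic strings are
# hypothetical illustrations patterned on the checklist above.

MCI_VICTIM_THRESHOLD = 10  # example threshold from the text

SCRIPTED_TACTICS = {
    "fire_on_unit": "evacuate patients to adjacent smoke-free compartment",
    "novel_illness_alert": "notify emergency management group",
}

def scripted_tactic(indicator, victims=0):
    """Return the predefined tactic for a recognized indicator, or None."""
    if indicator == "mci" and victims > MCI_VICTIM_THRESHOLD:
        return "activate EMS MCI plan"
    # No analysis is performed here: either the indicator maps to a
    # predetermined tactic, or the decision is escalated (None).
    return SCRIPTED_TACTICS.get(indicator)
```

The point of the sketch is that frontline personnel need no interpretation step: a recognized indicator either maps directly to a tactic or it falls through to the non-scripted pathway.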
The disadvantage of scripted triggers is that they sometimes will not match the resources to the incident well. Scripted triggers should be designed in a conservative fashion so that they are more likely to overcommit, rather than undercommit, resources relative to the scope of the incident. This is acceptable when the activation is rare and when delay has a high potential to negatively affect life safety. The more often the trigger is used, the more refinement is required so that the scripted tactics better match resources to the historical incident demands. It is important to note that the trigger action may simply be that a line employee provides scripted emergency notifications to a team or an individual that will then determine further actions (rather than the trigger activating the actual response actions). Box 2-2 provides an example of how a medium-sized health care coalition region might approach determining a dispatch-based scripted trigger threshold for activation of disaster plans.

Properties of Scripted and Non-Scripted Triggers

| Property | Scripted Trigger | Non-Scripted Trigger |
| Indicator | Actionable (or select predictive indicators, usually in extreme incidents) | Predictive (rarely actionable, especially when multiple data streams or unclear impact) |
Non-scripted triggers are more appropriate when at least one of the following is present:
• There is time to make an analytical decision (e.g., usually not no-notice, or at least some processing of information required);
• Multiple indicators are involved;
• Demand/resource analysis is required;
• Tiered response is possible which can tailor the resources to achieve the desired outcome(s) (demand/resource matching) and does not introduce unacceptable delay; and/or
• Expertise is required to interpret the potential impact of the indicator.
Scripted and Non-Scripted Tactics
Facility-level crisis care triggers should activate resources and plans rather than specific actions (e.g., they should not automatically implement triage of resources). For example, though a lack of available ventilators may be a crisis care trigger, it does not mean that ventilator triage should immediately commence. The trigger action should instead be that incident command immediately works with subject matter experts, logistics, and supporting agencies to determine
• Time frame for obtaining additional resources;
• Potential to transfer patients to facilities with ventilators;
• Utility of bag-valve ventilation or other potential strategies; and
• Process for triage of resources if appropriate.
EMS Example Dispatch-Based Scripted Trigger Threshold
This table provides an example of how a medium-sized region might approach determining a dispatch-based scripted trigger threshold for activation of disaster plans. It is not all-inclusive and does not reflect specifics of all jurisdictions. School bus, wheelchair, and other vehicles may need to be included. HAZMAT and other complicating factors may change assumptions. Regulatory and other processes may need to be addressed when activating a mass casualty incident (MCI) plan. These calculations are provided as an example only.
| Resource | Agency | Region |
| Emergency medical services (EMS) units, staffed | 15 | 200 |
| EMS units, unstaffed | 2 | 15 |
| Mass casualty incident (MCI) buses | 0 | 2 (20 patients per bus) |
| Private basic life support (BLS) units | 0 | 12 |
• Day and night staffing and delay time to staff unstaffed units may have to be factored in
• Unit hour utilization data show 1/3 of units on average are available at a given time = 5 units agency, approximately 60 units regionally
• Other agency units should be able to clear within 45 minutes = 10
• Each ambulance can transport two patients in a disaster
• Round-trip time = 45 minutes per unit
Agency capacity is 44 patients in the first 90 minutes, but initial capacity is only 10; the second wave of transports depends on mutual aid units responding or backfilling usual calls.
Regional capacity is approximately 120 patients in the first 60 minutes (assuming longer response times for mutual aid units). With activation of the disaster plan, the two MCI buses add 40 patients per 90-minute turnaround (20 per 45-minute window) and the private units add 24, yielding approximately 164 patients per 45 minutes after the first 45 minutes (assuming activation time for MCI buses and private units, and an MCI bus turnaround time of 90 minutes due to longer loading/unloading times).
Thus, consider >10 significantly injured victims as the trigger for the agency disaster plan and >125 patients as the trigger for the regional plan (a number that would exceed the ability to respond with simple mutual aid).
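The arithmetic behind these example thresholds can be sketched as a short script. This is a hypothetical illustration using only the figures from the example table above; function names, and the decision to round available units, are illustrative choices, and an actual plan would substitute local data.

```python
# Hypothetical sketch of the dispatch-based trigger arithmetic in the
# example above; all figures are illustrative, not prescriptive.

PATIENTS_PER_AMBULANCE = 2  # disaster loading, per the example assumptions

def initial_transport_capacity(staffed_units, availability=1/3):
    """First-wave capacity: units free at a given moment x 2 patients each."""
    available_units = round(staffed_units * availability)
    return available_units * PATIENTS_PER_AMBULANCE

agency_first_wave = initial_transport_capacity(15)  # 5 available units -> 10 patients

region_available_units = 60  # "approximately 60 units regionally" per the notes above
region_first_wave = region_available_units * PATIENTS_PER_AMBULANCE  # 120 patients

# Scripted thresholds: demand exceeding first-wave capacity activates plans.
AGENCY_TRIGGER = 10   # >10 significantly injured -> agency disaster plan
REGION_TRIGGER = 125  # >125 patients -> regional plan

def plan_activations(estimated_victims):
    return {
        "agency_plan": estimated_victims > AGENCY_TRIGGER,
        "regional_plan": estimated_victims > REGION_TRIGGER,
    }
```

A dispatcher-facing version of such logic is what makes the trigger "scripted": the threshold comparison requires no expert analysis at the moment of decision.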
De Boer defines a “medical assistance chain” from medical rescue through medical transport to hospital care, in which EMS capacity is estimated by N × S / C, where N is the number of injured, S is the severity (the fraction who are nonambulatory), and C is the transport capacity. This construct may help frame discussion around transport methods and resources (de Boer, 1999) and has been refined by Bayram and colleagues; both theoretical frameworks include potentially valuable considerations for hospitals as well (Bayram and Zuabi, 2012; Bayram et al., 2012).
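As a rough sketch, de Boer's construct can be written out directly. One common reading takes N × S / C as the estimated duration of the medical-transport phase; the function name and the example figures below are illustrative assumptions, not values from the source.

```python
def transport_phase_hours(n_injured, severity_fraction, capacity_per_hour):
    """One reading of de Boer's N x S / C: N injured, S the fraction who are
    nonambulatory (requiring transport), C patients transportable per hour.
    Returns the estimated duration of the medical-transport phase in hours."""
    return (n_injured * severity_fraction) / capacity_per_hour

# Illustrative figures: 100 injured, 40 percent nonambulatory,
# 20 patients/hour of transport capacity -> a transport phase of about 2 hours.
example_hours = transport_phase_hours(100, 0.40, 20)
```

Framed this way, the construct makes explicit which of the three quantities a planning discussion is actually trying to change (reduce N × S through triage, or raise C through mutual aid and alternate vehicles).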
In a longer-duration incident, conditions of contingency and crisis are likely to fluctuate across multiple variables: time, disciplines, and resources. For example, EMS agencies during nighttime hours may be operating under contingency or even conventional response conditions, while during daytime peak hours they are consistently applying crisis care tactics. Another example in hospitals or the outpatient setting may be encountered when an organization faces initial limitations of basic supplies, followed by later restrictions in staff availability. Specific triggers become less relevant under these dynamic conditions, as the resources available are used to their maximum benefit in the context of an ongoing incident management process. New, incident-specific triggers may be created during this process if required (e.g., if the flood crest forecast exceeds 20 feet, commence facility evacuation), though such triggers are best developed through advance planning based on a Hazard Vulnerability Analysis.
Education and Training
Key point: Implementation of actions depends on the level of training and authority and requires appropriate education.
All of the following groups must be integrated into CSC planning and response:
• Frontline employees: Awareness—actions should be scripted at specific thresholds and be made as concrete as possible (e.g., activate EMS disaster plan for MCI involving >10 victims). Awareness may also be an appropriate goal for elected officials and executive officers.
• Supervisors: Knowledge—initial triggers and tactics should be scripted, but with some flexible interpretation of the trigger threshold (e.g., a disaster declaration for a hospital made by the nursing supervisor or ED physician) and perhaps simple, phased-response options.
• Managers/directors: Proficiency—trigger should be scripted for notification and activation of incident management process, but tactics can be non-scripted and based on expert analysis of the situation with subject matter expert input. This often requires regional/coalition consistency and coordination (e.g., decisions about how to manage limited availability of N95 masks).
Return to Conventional Care
Key point: As conditions improve, it is important to plan and watch for indicators that the system can move back toward conventional care status.
Indicators of return to conventional care may be incident-specific and not included in an agency’s usual data or list of indicators. Examples of these indicators are listed in the discipline-specific tables in the toolkit section and may include
• Decreasing call volumes or demands for services;
• Restored systems (utilities, etc.); and
• Decreasing use of hotlines, dispensing sites, alternate care centers, etc.
These variables may fluctuate over the course of a disaster response, as noted in the EMS example above—so return to conventional may be temporary or episodic. Return to conventional care status is not the same as recovery, although it may be an indicator of transition into the recovery phase. Recovery implies
a more permanent return to normal operating status and the restoration of the affected systems and communities. Thus, demobilization of resources should not depend on scripted triggers: a return to conventional operations or a decrease in the volume of hotline calls or other markers may be temporary and may be affected by high-profile illnesses or deaths, among other factors.
A more difficult decision-making process occurs when the resources supplied to the disaster area exceed those present before the incident (e.g., critical care services after the Haiti earthquake, or unified health, medical, mental health, and social work support at shelters for disaster victims). The decision to withdraw these resources can be difficult; especially in these cases, thresholds for demobilization should be considered early in the incident, and every effort should be made to provide services that can be sustained after the departure of the assets (Kirsch et al., 2012; Subbarao et al., 2010).
As the discussion above makes clear, the use of indicators and data is not always straightforward. This section briefly presents a number of limitations and issues associated with indicators that stakeholders should keep in mind when developing plans for indicators and associated triggers.
Key point: Indicators are only as valid as the accuracy of the data being considered.
If the data are bad (outdated; inaccurate, whether because reporters lack a shared understanding of what to report or because of poor data entry; or simply not reported), then they cannot inform good decisions. As noted above, it is important to validate data before acting, even if this step is done very quickly.
Reporting Data in a Dynamic and Complex Environment
Key point: When developing and using indicators, it is important to be aware of the “rules of reporting” and naming conventions in use, and to recognize that data are being reported in a dynamic and complex environment.
The “rules of reporting” used and the naming conventions applied during an incident may affect the value of indicators. For example, only a few intensive care beds may be available in a given city, but activation of surge plans may make many more beds available simply by staffing currently unstaffed beds or using postanesthesia care units (Devereaux et al., 2008; Rivara et al., 2006; Sprung et al., 2010). Or zero ventilators may be listed as available, a figure that may not account for transport ventilators, anesthesia machines, or other resources. So even certain numbers based on actionable data do not necessarily yield scripted triggers for crisis care. In both of these examples, however, reaching such a threshold should still prompt action to assess and address the situation: these remain relevant predictive indicators of system capacity problems, and proactive management decisions are strongly preferred to reactive ones made when no option is left but crisis care. Who is alerted in these situations, who performs this analysis, and who decides whether to initiate a trigger based on the information is a key component of agency/facility plans. When
indicators are compared or aggregated, the definitions must be the same: If, under “beds available,” one jurisdiction counts unstaffed beds and another does not, or if critical access hospitals list monitored beds in the “ICU” category, the dataset is far less useful.
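The definitional problem can be made concrete with a toy example. All jurisdiction names, field names, and numbers below are hypothetical; the point is only how mismatched definitions distort an aggregate.

```python
# Toy illustration: two jurisdictions report "beds available" under
# different definitions, so the naive regional aggregate is misleading.
# All names and numbers are hypothetical.
county_a = {"staffed_available": 12, "unstaffed_available": 30}
county_b = {"staffed_available": 15, "unstaffed_available": 25}

# County A reports staffed + unstaffed beds; County B reports staffed only.
reported = {
    "county_a": county_a["staffed_available"] + county_a["unstaffed_available"],
    "county_b": county_b["staffed_available"],
}
naive_total = sum(reported.values())  # 57 "available" beds in the aggregate

# Under a shared definition (staffed beds only), the figure is quite different:
consistent_total = county_a["staffed_available"] + county_b["staffed_available"]  # 27
```

The aggregate is not merely imprecise; it cannot be interpreted at all until every reporter applies the same definition.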
Similarly, it is important to be aware of the dynamic environment in which data reporting occurs. Even data-sharing systems considered to provide “real-time” data to support situational awareness and crisis decision making (examples are discussed below) carry an important caveat: in each of these systems, there is a time lag between acquiring primary data points, verifying the data received, and reporting that information. Many emergency operations centers and health care coalitions are maturing to the point of developing an information clearinghouse function that can collect and collate such information, but the reports must be recognized as static data points in what is often a highly dynamic environment. This can be illustrated using the same bed-reporting example used above. The description of actual bed numbers in a preincident collection of data usually reflects either licensed or “staffed” beds (conventional surge response), but not what might be available under contingency or crisis response (DeLia, 2006). For example, intensive care units (ICUs) that run at or near capacity most of the time will report only a few open beds under conventional (preincident) conditions. But if an incident were to occur, sudden or not, additional ICU beds located in shuttered units, surgical recovery, or “step-down” units could quickly become available (assuming staff could also be rapidly mobilized to support the care of patients in these areas), and selected patients could be moved out of ICU beds to intermediate care areas. Whenever possible, the information that local and state authorities choose to gather should be oriented toward functional reporting rather than resource reporting.
For example, functional capability regarding health care facility response may include reports not just of “beds,” but the resources that accompany the placement of patients in those beds—specialized staff, necessary equipment, supplies, and pharmaceuticals.
Separating Signal from Noise
Key point: In considering a data source as an indicator, it is important to consider whether it is feasible to extract actionable information or detect an evolving event from the data source.
For some indicators, separating signal from noise (i.e., detecting actionable information or characterizing an evolving event amid the standard variability of large, complex datasets) can be challenging, particularly for incidents that develop slowly, such as pandemic influenza. Boxes 2-3 and 2-4 discuss the promise and perils of using technology, modeling, and social media to predict and detect a surge in demand in real time: Box 2-3 addresses modeling to predict and detect surge in hospitals, and Box 2-4 addresses these issues as they relate to pandemic influenza.
Time Required for Reporting
Key point: Automating information exchange and focusing on key information that drives actions will help reduce the demand on staff time during a response.
Requests for resource information are often a distraction from response efforts, and attending to specific requests can become an unintentional drain on limited staff-hours. The less value the facility or agency sees in reporting the data, the less likely the data will be timely or accurate. Automating information exchange, so that key data can be pulled or pushed without significant human effort to prepare them, helps avoid this concern. In information-sharing systems, this issue can be addressed by ensuring the interoperability of the data captured and by minimizing the differences among vendors and proprietary systems that interfere with the exchange of key information.
Key point: Integrated planning among all major components of the emergency response system is critical for an effective and coordinated response.
This section outlines system-level considerations for indicators and triggers; Chapter 1 provides additional discussion of the systems approach to catastrophic disaster response.
Use of Indicators and Data at Different Levels of the Emergency Response System
Key points: Data that may be very actionable at the agency or facility level may be only of limited use in regional aggregate. Data that are valuable at one tier of the medical response may not have immediate value at another level.
Bed occupancy and other data that may be very actionable at the agency or facility level may be of only limited use in regional aggregate, especially when facilities are disproportionately affected (e.g., children’s hospitals) and these stresses are not reflected in overall system data or shared among the health care coalitions statewide. However, all data do not have to be used for indicators and triggers in order to be valuable. The data may still have significant value, particularly for overall system capacity monitoring during an incident. Those responsible for regional- or state-level assessment and monitoring should understand that most of the data available to them will be predictive, and that their indicators and triggers may be different from those for the local community. Regional entities, particularly those elements that serve as the command and control function for health care coalitions, such as the Regional Medical Coordination Center, must also assume the role of ensuring the timeliness and validity of data provided (Burkle et al., 2007). In this manner, the coalition serves the important function of providing a clearinghouse for vetting and exchanging useful information.
Data that are valuable at one tier of the medical response may not have immediate value at another level. For example, during the 2009 H1N1 pandemic, King County, Washington, collected a 30-item dataset on intensive care patients with influenza (King County et al., 2009). These data for the most part would not aid the facility response and would be of limited utility at the community level. However, had the same data been collected in real time statewide or nationwide and analyzed by subject matter experts, it might have provided critical treatment information for future cases that could have been shared nationally to influence the overall response. This is why stakeholder collaborative discussion is critical to understanding what data are useful at what level, and requires commitment to supplying the data according to the documented needs and use.
Agencies and facilities supplying data should have a clear understanding of who can access their data, how the data are used, and how the agencies and facilities will benefit by providing the data. In some cases,
Box 2-3. Promise and Limitations 1: Hospital Surge Capacity
Extensive work has been done on recognizing and forecasting emergency department (ED) daily surge and crowding (Schweigler et al., 2009; Wiler et al., 2011). Although the emergency and trauma care system is often stretched, and temporary surges may exacerbate issues such as chronic ED crowding, boarding, and ambulance diversion and may stress resources and staff, hospitals generally maintain usual standards of care during these times (IOM, 2007a,b). The term surge capacity as used in mass casualty incidents is not equated with daily variations in ED volume, although there may be some relationship (Davidson et al., 2006; Handler et al., 2006; Jenkins et al., 2006). One key problem has been extrapolating from daily fluctuations in patient surge, for which there are good data, to disaster situations, where the data are sparse.
Unfortunately, although there is some increasingly useful information about how ED throughput is affected by other factors such as inpatient capacity and rate of presentation, it is clear that many interdependent variables exist and that modeling the daily management of surge is not disaster (and certainly not catastrophic disaster) modeling (Jenkins et al., 2006; McCarthy et al., 2008).
Handler and colleagues (2006) proposed 13 potential data points for studying surge capacity, though this was expert opinion–based and these data points have not been tested for validity. They recognized the deficit of data that are available and can be shared, concluding that they recognize
the need to make data available to clinicians, administrators, public health officials, and internal and external systems; the importance of real-time data, data standards, and electronic transmission; seamless integration of data capture into the care process; the value of having data available from a single point of access through which data mining, forecasting, and modeling can be performed; and the basic necessity of a criterion standard metric for quantifying surge capacity. (Handler et al., 2006, p. 1173)
Seven years after these conclusions were published, no progress has been made toward these goals.
Furthermore, there are really three types of surge that require different assumptions and responses (Jenkins et al., 2006):
1. Large numbers of patients presenting over a brief period;
2. Sustained increases in volume; and
3. Small numbers of patients with extensive demands for complex, resource-intensive specialty services.
Most of the modeling that is helpful operationally for hospitals revolves around the first of these types of surge. The rate of ED arrivals has been discussed as a key metric (Bayram
et al., 2011; Bradt et al., 2009; Hirschberg et al., 2005) and daily marker (McCarthy et al., 2006) of surge, though it is clear that inpatient capacity has a significant effect as well, causing efficiency to decrease as census increases (Asplin et al., 2006).
There seems to be some modeling concordance around 15 seriously injured patients/hour (or ED beds/3.75) as being severely stressful for a trauma center (Bayram et al., 2011; Hirschberg et al., 2005), which aligns with work by de Boer (1999) estimating that 2-3 patients/100 hospital beds/hour could be accommodated with hospital disaster plan activation. Notably, this rate carried over 6 hours approximates 20 percent of hospital bed capacity for most centers, which is consistent with Israeli planning targets (Israel Ministry of Health, 1976; Kosashvili et al., 2009; Tadmor et al., 2006) and accommodates the vast majority of mass casualty incidents. See Peleg and Kellermann (2009) for additional information on Israel’s system for hospital surge capacity and for notifying hospitals about the approximate number and type of casualties to anticipate.
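The rough concordance among these estimates can be cross-checked with simple arithmetic. This is a sketch assuming a hypothetical 500-bed hospital with a 56-bed ED; the hospital size and the use of the 2.5 midpoint of de Boer's range are illustrative assumptions.

```python
# Illustrative cross-check of the surge-rate estimates cited above,
# assuming a hypothetical 500-bed hospital with a 56-bed ED.
beds = 500
ed_beds = 56

# Heuristic of roughly ED beds / 3.75 seriously injured patients per hour
stress_rate_ed = ed_beds / 3.75      # about 15 patients/hour

# de Boer (1999): 2-3 patients per 100 hospital beds per hour (midpoint 2.5)
stress_rate_beds = 2.5 * beds / 100  # 12.5 patients/hour

# Carried over 6 hours, compare with the roughly 20-percent-of-capacity target
six_hour_load = stress_rate_beds * 6       # 75 patients
fraction_of_beds = six_hour_load / beds    # 0.15, on the order of 20 percent
```

With the upper end of de Boer's range (3 per 100 beds/hour), the same arithmetic gives 18 percent of bed capacity over 6 hours, so the estimates do cluster near the Israeli planning target.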
Some relevant time-phase work has been done with data from bombings and other no-notice mass casualty incidents, where 50 percent of the victims presented to hospitals within the first hour and the vast majority within 3 hours (CDC, 2003, 2010). This may be helpful as the hospital command center opens to provide some assumptions about what degree of resources may be required over what span of time.
Significant variation for calculating inpatient numbers and capacity also has been noted, depending on how beds are counted, reinforcing that data may be falsely alarming or reassuring (DeLia, 2006; Schull, 2006). Determining the impact that longer-term incidents may have is also very difficult because the modeling for pandemic influenza ranges from minimal to catastrophic impacts on the health care system. Nevertheless, some evidence shows that efficient use of beds within a regional system may save lives in a major disaster (Kanter, 2007) and that these types of coordinated efforts at the coalition and state levels are worthwhile and can make a difference.
Data on “surge discharge” are improving, with several articles reflecting the ability to discharge 30 to 60 percent of patients (Challen and Walter, 2006; Kelen et al., 2006, 2009; Satterthwaite and Atkinson, 2012). These percentages may vary depending on the patient population and size of the facility, but surge discharge clearly represents a critical part of hospital surge response. Hospital planners should have a good idea of the baseline capacity that can be generated for their facility and have a plan to rapidly implement these techniques. Improved electronic health record systems may allow anticipated discharges or potential discharge status to be reflected on a daily basis, greatly facilitating decision making in a disaster.
As hospitals gain experience with incidents such as the 2009 H1N1 pandemic, they can determine how those historic volumes were managed and apply these metrics and strategies to future incidents. As electronic records systems grow more robust, passive data analysis and submission to central databases may allow the development of much better predictive modeling that can account for disaster demands as well as daily demands. Regional and state agency stakeholders should look for opportunities to partner with hospitals in these areas of meaningful use of encounter and clinical data.
Box 2-4. Promise and Limitations 2: Pandemic Influenza
Traditional influenza surveillance programs are conducted by the Centers for Disease Control and Prevention (CDC) primarily to gain an understanding of the nature of the influenza viruses, extent of disease activity, current impact on hospitalizations, and mortality (CDC, 2012b). FluView, a weekly influenza surveillance report that provides data on national and regional levels with a lag of 1-2 weeks, is produced from these data (CDC, 2013).
In recent years, additional data streams and modeling have been used to supplement traditional surveillance, including pharmacy sales, calls to emergency services, work or school attendance, insurance and billing claims, search data, social media, telephone medical hotlines, and websites specifically aimed at providing information on symptom severity and the care to seek (e.g., Espino et al., 2003; IOM, 2012b; Kellermann et al., 2010; Koonin and Hanfling, 2013; Magruder et al., 2004; Price et al., 2013; Rolland et al., 2006). This box discusses some of the issues related to mining nontraditional sources of information to guide decision making along the continuum of care (for more extensive reviews of syndromic surveillance system usage, benefits, and limitations, see, for example, Buehler and colleagues [2008, 2009] and the IOM and NRC). Novel approaches offer the potential for earlier detection, demand and severity forecasting, and faster surge detection. Much of this work has been done on influenza but could be applied to other slow-onset situations, though it would likely not be as helpful for no-notice incidents.
Geographic information system–based mapping tools (e.g., Google Earth), combined with other data inputs including social media crowd-sourced reporting, are also being used to enhance the ability of response agencies to build a near-real-time picture of what is occurring (e.g., Brownstein et al., 2009; Schmidt, 2012). Projects such as HealthMap (2013), founded by a team of researchers, epidemiologists, and software developers at Boston Children’s Hospital in 2006, exemplify the use of available online sources to monitor disease outbreaks and provide real-time surveillance of emerging public health threats. MedMap (ASPR, 2013) is another tool, available to local, state, and federal public health and emergency health care response agencies. It is intended to provide a common operating platform for shared health care system resource information, which can be layered onto other response and demographic data to improve situational awareness and the decision making that follows. It allows for the visualization of spatial data, with inputs and data point assessments determined by the user, tailoring information inputs to those most likely to inform decision making during large-scale incidents.
The most prominent Web data mining effort is Google Flu Trends (GFT) (2013). Other examples include monitoring and soliciting Twitter users to track disease activity (MappyHealth, 2013; Sickweather, 2013; Signorini et al., 2011) and active data entry programs for individuals such as Flu Near You (2013).
GFT, which is the most studied of the Web data mining efforts, illustrates some of the promise and peril with these novel data sources. GFT estimates prevalence from search engine queries for flu-related terms (GFT, 2013). In many cases, GFT estimates have closely matched estimates derived from the traditional surveillance efforts led by CDC, and can be delivered 7-10 days faster (Carneiro and Mylonakis, 2009; Ginsberg et al., 2009; Polgreen et al., 2008). However, in the 2012-2013 influenza season, GFT’s estimate of the peak was nearly double the CDC’s estimate based on traditional surveillance data (Butler, 2013). GFT also underestimated influenza-like illness (ILI) at the start of the 2009 Influenza A (H1N1) pandemic, requiring an algorithm tweak (Cook et al.,
2011). Other work has suggested that while GFT may correlate well with ILI rates, it may not correlate with actual influenza virus infections (Ortiz et al., 2011). GFT algorithms will undoubtedly continue to evolve (Butler, 2013).
Dugas and colleagues (2013) developed an influenza forecast model based on easy-to-access data that are available in real time, including at individual medical centers. They also incorporated GFT data along with meteorological and temporal information. The best model was able to predict weekly influenza cases within seven cases for 83 percent of estimates for a large urban tertiary-care emergency department (ED) in Baltimore. This model may help guide prediction of surge response, but additional evaluation of its generalizability is needed. It also remains vulnerable to mismatches between GFT and traditional surveillance data.
To date, GFT has been used primarily to spur increased vigilance, further investigation, and collection of direct measures, not as a basis for operational actions (Carneiro and Mylonakis, 2009; Ginsberg et al., 2009). City-level GFT data have been shown to correlate with both positive influenza test results and the volume of ED visits with ILI (Dugas et al., 2012), and may offer some promise in forecasting. The GFT data also correlated well with certain ED crowding measures for pediatric patients and moderately for low-acuity adult patients, but not for higher-acuity adult patients. GFT is susceptible to false alerts caused, for example, by increased queries due to media attention; but because GFT correlated with ED visits, it may still be useful for surge planning even if an increase reflects heightened concern rather than an actual increase in influenza prevalence (Dugas et al., 2012).
The temporal relationship between ED visits and contacts with telehealth lines is another example of an indicator. In Ontario, Canada, increases in call volume to Telehealth Ontario correlated with increases in discharge diagnosis data for respiratory illnesses (van Dijk et al., 2008). Telehealth Ontario data are available electronically in near real time. Additional modeling work found that Telehealth Ontario call volume data can be used to estimate future ED visits for respiratory illness at the health unit level, of which there are 36 in Ontario (Perry et al., 2010). Forecast accuracy was better for health units with populations of more than 400,000. An important limitation is that if the hotline is promoted or referenced in the media, the model predictions may become inaccurate because they are tied to prior ED visits.
Other efforts have been made to develop statistical models that predict the severity of the influenza season based on sequence and serological data (Wolf et al., 2010). This study found that these types of data could be used to predict severity. Because the scale of this model is North America, geographically, and based on an entire season, temporally, this type of model may have promise for informing vaccine selection and manufacturing; however, at this point it is unlikely to be useful for operational planning at the health system, organization, or agency level.
The methods and models discussed above show the potential promise of novel techniques and modeling for earlier detection, severity prediction, and demand forecasting. These underlying algorithms and models will undoubtedly continue to improve. However, at this point these are probably not a source of information that could be used as indicators and triggers to drive operational planning and decision making. Furthermore, the application is limited to slow-onset diseases with high prevalence across large populations. Currently most work is focused just on influenza, and outputs are subject to significant error. A final gap is that the United States has no system to share standard clinical information sets and no way to have clinicians collaborate electronically to gather rapidly evolving best practices in real time. Hopefully this can be addressed through official and unofficial channels in advance of the next pandemic or severe seasonal influenza year.
it may be necessary to aggregate data in reporting to avoid singling out organizations or entities, or to specify which offices at a health department have access to the data.
Utility by Jurisdiction Size
Key point: The utility of specific indicators will vary significantly by jurisdiction.
In many urban areas, even large numbers of simultaneous casualties (e.g., the 2013 bombing in Massachusetts) or catastrophic community damage (e.g., the 2013 Moore, Oklahoma, tornado) do not require implementation of crisis standards of care because of the resiliency within the area emergency response systems. Because of the scale of resources available in urban versus rural settings, many indicators may be of limited utility in rural areas. For example, bed counts at critical access hospitals are not likely to yield much useful information if that is the only reasonable destination for EMS transport units. A recent survey found that 95 percent of rural facilities would be overwhelmed by 10 patients with serious injuries, which was consistent with the EMS estimated response capability (Furbee et al., 2006; Manley et al., 2006). However, due to this paucity of resources, it may be even more critical to be able to develop scripted triggers and tactics that can enable assistance to be mobilized without delay by line personnel (dispatchers, first responders, etc.) and thus support response with available resources and early mobilization of mutual aid.
Goals at Different Levels of the Emergency Response System
Key point: Different types of indicators may be most valuable at different levels of the system.
Because of the dynamic and complex environment in which information is being collected and shared, as described above, it is valuable to focus on a few key system indicators, rather than trying to monitor “everything at once.” This section outlines the types of indicators that may be most valuable at different levels of the system.
Because conditions fluctuate during an incident among conventional, contingency, and crisis care, and across the categories that can be affected (space, staff, supplies), it can be challenging at the regional level to keep this information current. At the regional health care coalition level, it may be most relevant to track capacity issues—examining whether hospitals are implementing crisis surge response plans or whether EMS calls are being deferred. Tracking of specific supply/staff issues in relation to requests for assistance may be most beneficially oriented toward specific lifesaving resources in short supply (e.g., ventilators).
The regional or coalition goal is to support transition and response so as to maintain enough balance in the system that individual facilities/agencies provide consistent levels of care, even though there may be daily and shift-based fluctuations across the system. The facility goal is to stay out of crisis care as much as possible. For example, a patient may be triaged away from mechanical ventilation (crisis) due to a lack of community resources, but if a ventilator is available at another facility, the patient may be bag-valve ventilated during transfer there (contingency). If the patient is still alive when a ventilator becomes available, he or she would receive that “conventional” resource.
The state goal is to ensure that a consistent level of care and common decision-making strategies are provided in the jurisdiction, including identification of additional support, and to coordinate with surrounding states to reduce interstate variability as much as possible. For noncontiguous states and territories, coordination with other states may not be feasible or a high priority because of the distances involved; assistance may take hours to arrive and prove logistically challenging. In certain situations (e.g., multistate incidents), however, aid may be most effectively sought from the state that suffered the least damage or has the most resources, or from other partners (e.g., the Department of Defense [DoD] or private entities) rather than via the usual adjacent partners. Planning discussions should reflect these variables.
Information Synthesis and Sharing
Key points: Information sharing and synthesis are critical to responding to a catastrophic disaster. Addressing potential barriers to the flow and movement of such information, both real and perceived, is a critical first step in preparing for the development and use of the indicators and triggers needed to help guide the response to the implementation of crisis standards of care.
In the context of catastrophic disaster incidents—which, by their very definition, will entail local, regional, state, tribal, and federal response—the access to information and the ability to share such information across these jurisdictional domains will be critically important to a successful response. Although no surveillance system can ever be counted on to “make the diagnosis” in the case of a bioterror agent release, these systems will provide important situational awareness information and can help to develop the characteristics of an ongoing incident that will be very useful to the emergency response community. Examples of existing surveillance systems are discussed below. Addressing the potential barriers to the flow and movement of such information, both real and perceived, is a critical first step in preparing for the development and use of indicators and triggers needed to help guide the response to the implementation of crisis standards of care.
Important here is the recognition that a wide variety of information will be needed by the many elements of the emergency response system, not all of which will be accessible or available to all of the response disciplines. As discussed in Chapter 1, emergency management is in an excellent position to coordinate the efforts of EMS, hospitals, and public health using the Emergency Support Function (ESF)-8 framework. Efforts to synthesize the available information, using the emergency management–led jurisdictional EOC along with a medical information clearinghouse concept, will be of significant value. For example, stressors emanating from a single incident may be seen across the entirety of the emergency response system, and taken alone, any one piece of this information may not be meaningful. A law enforcement concern about increasing civil unrest may come into focus only after it is recognized that there is a disease outbreak in one particular demographic group, leading to subtle but important population-based behavioral expressions of concern. The synthesis of such information will be most evident in communities that employ multiagency coordination (MAC), which is “a process that allows all levels of government and all disciplines to work together more efficiently and effectively,” often implemented using a Multiagency Coordination System (FEMA, 2013b).
Examples of Existing Data-Sharing Systems
Key point: Existing data sources and data-sharing systems can be leveraged for the development and use of indicators and triggers.
Many data-sharing systems have been developed and implemented by state and federal governments to help ensure prompt detection of incidents and aid in decision making and resource allocation during large-scale public health emergencies. These data-sharing systems may provide information that is useful for the development of indicators and triggers. In developing indicators and triggers for their communities, stakeholders should consider existing data-sharing systems and how they may be leveraged to guide decision making about transitions along the continuum of care. Select examples are discussed below. With regard to indicators and triggers in state and local CSC plans, most jurisdictions have yet to address this or are in relatively early stages; details are provided in Box 2-5.
It was beyond the scope of this report to comprehensively examine the benefits, limitations, and resource requirements of biosurveillance and other situational awareness systems. HHS is currently undertaking activities to develop a Public Health and Health Care Situational Awareness Strategy and Implementation Plan (Lurie, 2012). The National Biodefense Science Board reviewed the draft plan and provided guiding principles and recommendations aimed at improving public health and health care situational awareness, including emphasizing the need to “assure compatibility, consistency, continuity, coordination, and integration of all the disparate systems and data requirements” (NBSB, 2013, p. 3). An overview of existing public health surveillance, with an emphasis on the detection of bioterrorism threats, is available in an earlier Institute of Medicine (IOM) report (IOM and NRC, 2011).
Systems for Sharing Information About Prehospital and Hospital Resource Availability
The state of Maryland’s Emergency Medical Resource Center is one of the first examples of a systematic approach to coordinating prehospital EMS and hospital response efforts, for daily use as well as in disaster incidents. In the aftermath of 9/11, a facility resource tracking system was put in place, coordinating information related to key data points, including bed availability, resource availability, staffing, and related issues (MIEMSS, 2013). Other state programs include those in New York, which created the Health Emergency Response Data System (Gotham et al., 2007). Like the Maryland system, this is a statewide electronic Web-based data collection system linking health care facilities, including all hospitals. This serves as the primary means for relaying resource requests to the State Department of Health, and can also be used to distribute just-in-time information, as well as serve as a tool to conduct rapid assessment surveys. These are two examples of statewide information management systems. Many more states have developed, or are developing, similar efforts, particularly as federal grant guidance highlights the importance of establishing situational awareness, including data sharing and analysis.
At the federal level, HAvBED was created under a contract from the Agency for Healthcare Research and Quality to help develop a national hospital bed reporting system that could be used to provide situational awareness of hospital bed availability during times of surge demand in care (AHRQ, 2005). It was born partly out of the experience of the Commonwealth of Virginia’s early adoption of bed reporting capabilities that were in place and fully functional prior to the 9/11 attacks and the anthrax bioterror mailings. Like the early Maryland and New York state efforts, the Northern Virginia hospitals, not yet coalesced under the auspices of the regional coalition that was formed in 2002, had been using an Internet-based bed reporting system since 1999.2 When the other regions in the state chose to implement a similar capability beginning in 2003, it was decided that the vendors would be asked to work together to ensure uniformity of reporting standards and data elements for this statewide system. The Northern Virginia hospitals continued to use the proprietary system that they had previously contracted to use, while the remaining five hospital regions chose a different vendor. The genesis of the HAvBED project was to ensure that open standards were used to allow for interoperability of data exchange, despite the selection of proprietary systems. The reporting of “bed availability” information remains an important marker of the general state of hospital readiness to absorb large numbers of potential casualties. Despite the numerous shortcomings of the reported data (may not reflect accurate numbers; often do not account for concurrent availability of staffing and resources to support the care of patients who might need those beds; and are dynamic values that can change faster than the numbers can be reported), this marker represents an “indicator” that is often taken to reflect basic health care system capacity and capability.
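The staffing caveat noted above suggests what a minimally useful bed report needs to carry. The sketch below is a hypothetical illustration of such an interoperable record; the field names and bed categories (`facility_id`, `adult_icu`, and so on) are invented for illustration and are not the actual HAvBED data dictionary. Separating beds that physically exist from beds that are staffed to receive patients addresses one of the shortcomings of reported bed counts described above.

```python
# Hypothetical sketch of a bed-availability report record. All field
# names and categories are illustrative assumptions, not HAvBED fields.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class BedReport:
    facility_id: str
    reported_at: str   # ISO 8601 timestamp; these values go stale quickly
    available: dict    # beds physically available now, by category
    staffed: dict      # of those, beds with staffing/resources to use them

report = BedReport(
    facility_id="EXAMPLE-001",
    reported_at=datetime.now(timezone.utc).isoformat(),
    available={"adult_icu": 2, "adult_med_surg": 14, "pediatric": 3},
    staffed={"adult_icu": 1, "adult_med_surg": 12, "pediatric": 3},
)
# Serialize to JSON for exchange between proprietary systems.
print(json.dumps(asdict(report), indent=2))
```

The timestamp matters as much as the counts: because bed numbers can change faster than they are reported, any aggregating system would need to treat stale reports with suspicion.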
Another important limitation in our ability to achieve a common operating picture, particularly in the realm of the health care response to large-scale incidents, is the set of barriers to sharing patient information and tracking patients through the continuum of care—clinical outcomes, treatment modalities, and lessons learned—in near real time during large-scale medical emergencies. There is no good mechanism in place to allow for sharing of clinical information, particularly in the immediate context of an ongoing incident. Some local and regional information sharing may occur: for example, the use of informal networks of health care systems and providers during the anthrax attacks in 2001 permitted real-time exchange of information between the providers who managed the anthrax cases and those in the surrounding communities who were concerned that more cases were going undiagnosed. This approach resulted in the successful diagnosis and treatment of the fifth of five inhalational anthrax cases identified in the Washington, DC, region (Gursky et al., 2003; Hanfling, 2011). However, in the setting of a larger-scale incident, it is imperative that there be a clearinghouse for case reports and clinical information exchange, as well as an expedited process for conducting intraincident research on the use of specific medical countermeasures or other treatment modalities that may be useful in improving medical outcomes and decreasing morbidity and mortality.
ESSENCE is a biosurveillance system originally developed for the DoD to provide syndromic surveillance oriented toward the evaluation of emerging infectious disease agents across the globe (Lombardo et al., 2004). An updated version of this program was adopted by state and local governments in the Washington, DC, region for use by their public health agencies to help identify similar issues, including the release of potential bioterror agents in the community. Sharing agreements and protocols for data access were developed to implement this system. In the case of the DC region, the primary flow of data is often oriented toward state public health departments, with intermittent sharing of data interpretation and analysis. However, this occurs in a cumbersome fashion, with most reports directed to local public health departments, not the hospitals from which the data are initially gathered.
The state of Michigan has had a biosurveillance system in place that serves as an example of how information from such systems can be shared more easily. The Michigan Syndromic Surveillance System
2 Unpublished work; information from committee co-chair Dan Hanfling.
Examples and Analysis of Indicators and Triggers in Existing CSC Plans
In a 2012 report on the allocation of scarce resources during mass casualty events, it was noted that few state plans contained “operational frameworks for shifting to crisis standards of care” (Timbie et al., 2012, p. ES-9). The committee searched for and compiled 18 available jurisdictional plans that discussed triggers for crisis care or pandemic influenza.1-18 Six of these discussed lab- or World Health Organization criteria-based triggers for pandemic influenza and were not relevant to crisis care.6-8,11-12,16 A few states included state declarations of emergency as the trigger for increased information sharing and coordination, but not for triage.1,4 One state referenced “unusual events” rather than triggers, which prompt enhanced information exchange within the system.2 These were defined as events that significantly impact or threaten public health, environmental health, or medical services; are projected to require resources from outside the region; are politically sensitive or of high visibility; or otherwise require enhanced information exchange between partners or the state.
One state approached the “trigger” for crisis care from a process standpoint—that if a facility did not have a resource, could not get it, and could not transfer the patient, the situation met preexisting criteria for crisis care and resource allocation.18 These preexisting criteria have been described in prior work by the IOM and the American College of Chest Physicians and should be incorporated in the decision-making process, if not in the trigger.19-20
The advantage of this approach is that it offers an all-inclusive process for resource shortfalls. The disadvantage to be considered is that, because of its lack of specificity, it may result in less proactive decision making or anticipation of potential trigger events. This is a common trade-off with indicators and triggers: the less specific they are, the easier they are to develop, but the less sensitive and specific they are for the response; the more specific they are, the harder the development work, but the better the potential system performance.
Other states and entities identified factors that were considered “triggers” for resource triage, though these were categorical rather than specific, aside from a specific staffing threshold in two plans (which may be more relevant to certain job classes or facilities of a certain size; no validation of these numbers or references was noted—using expert-based indicators and triggers is the current state of the science, and a systematic approach to evaluation would be useful).1-2,4-5,10,13-15,17-18
• Equipment shortages—including ventilators, beds, blood products, antivirals, antibiotics, operating room capacity, personal protective equipment (PPE), including supply chain disruption or recall/contamination, emergency medical services (EMS) units;
• Staff/personnel triggers—subspecialty staff, security, trauma team, EMS;
• Space triggers—unable to accommodate all patients requiring hospitalization despite maximal surge measures, doubling of patient rooms;
• Infrastructure—including loss of facilities, essential services, or isolation of a facility due to flooding or other access problems;
• Numbers of patients in excess of planned health care facility capacity, or an exceptional surge in number and severity over a short period of time;
• Use of alternate care facilities;
• Marked increase in proportion of patients who are critically ill and unlikely to survive;
• Abnormally high percentage of hospitals on divert for EMS;
• Increase in influenza hospitalizations and deaths reported or other surveillance or forecasting data suggesting surge in excess of resources;
• Marked increase in staff or school absenteeism (two specifying 20-30 percent or >30 percent thresholds);
• Increased emergency medical dispatch call volumes;
• Increased requests for mutual aid or activation of statewide mutual aid agreements;
• Depletion of state assets;
• Unavailability of assets from other states; and
• Depletion of federal assets.
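As one illustration of how categorical factors like those above could become scripted triggers, the sketch below checks an indicator snapshot against simple thresholds. All field names and threshold values are invented for illustration (only the 20-30 percent absenteeism range echoes the plans cited above); actual thresholds would come from the stakeholder discussions this chapter describes, and the scripted actions would be those a jurisdiction has preauthorized line personnel to take.

```python
# Hypothetical sketch: evaluating scripted triggers against an indicator
# snapshot. Thresholds and field names are illustrative assumptions only.

def evaluate_triggers(indicators: dict) -> list[str]:
    """Return scripted actions whose trigger thresholds are met."""
    actions = []
    # Staff absenteeism: some plans cite 20-30 percent or >30 percent.
    absenteeism = indicators.get("staff_absenteeism", 0.0)
    if absenteeism > 0.20:
        actions.append("notify emergency management for situation assessment")
    if absenteeism > 0.30:
        actions.append("activate staffing contingency plan")
    # An abnormally high share of hospitals on EMS divert (assumed cutoff).
    if indicators.get("hospitals_on_divert", 0.0) > 0.50:
        actions.append("open regional patient-distribution coordination")
    # Running out of ventilators is a late (crisis) trigger; a low but
    # nonzero count serves as the earlier, scripted warning.
    if indicators.get("ventilators_available", 999) < 5:
        actions.append("request mutual aid for ventilators")
    return actions

snapshot = {"staff_absenteeism": 0.32, "hospitals_on_divert": 0.6,
            "ventilators_available": 12}
print(evaluate_triggers(snapshot))
```

The design point is that each rule pairs a concrete threshold with a preplanned action, so a dispatcher or charge nurse can act without waiting for expert analysis; non-scripted decisions would remain outside such logic.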
Of note, one county’s pandemic flu plan specified 30 elements of intensive care patient data gathering. Though the specific dataset elements have not been validated and could potentially be optimized, the real-time gathering of clinical data to provide aggregate information about severity of disease and treatment effect remains a key gap in current national planning for infectious disease incidents.
Available plans tended to list indicators, for the most part without specific thresholds.2-5,7,9,10,13-14,19 This is consistent with the fact that most of the plans were state level, and thus unlikely to identify indicators of sufficient certainty to establish triggers; they aim primarily to identify the key resources expected to be in shortage and the potential indicators, from available systems data or functional thresholds (alternate care site use, etc.), marking the transition to crisis care. This is likely to be as specific as state-level plans can be, though national planning should include guidance for shortages of antivirals, vaccine, or PPE, for which basic assumptions and triggers for policy and clinical guidance should be developed.
The types of indicators and triggers may be less specific at higher tiers, but should be linked to the actions that would be taken by each tier. A lack of specificity is acceptable at the state level because much of the data is uncertain and requires analysis and a non-scripted response from state agencies. Triggers that may be appropriate at the state level (e.g., opening of alternate care sites) are unhelpful at the local level because they will occur too late to be of assistance in the early management of an escalating incident. Local triggers should be as concrete as possible and provide enough advance warning to take action, rather than only triggering when a crisis situation has already occurred (i.e., it is better to have an early scripted trigger for notification of an emergency management group to assess a situation than a late trigger when the system runs out of ventilators).
1Alaskan Health Care Providers and Medical Emergency Preparedness–Pediatrics (MEP-P) Project [draft], 2008.
2California Department of Public Health and California Emergency Medical Services Authority, 2011.
3City of Albuquerque, Office of Emergency Management, 2005.
4Colorado Department of Public Health and Environment, 2009.
5Florida Department of Health, 2011.
6Indiana State Department of Health, 2009.
7Kansas Department of Health and Environment, 2013.
8Kentucky Department of Public Health, Division of Epidemiology and Health Planning, Cabinet for Health and Family Services, 2007.
9King County, Seattle Health Care Coalition, and Northwest Healthcare Response Network, 2009.
10Minnesota Department of Health, Office of Emergency Preparedness, 2012.
11New Hampshire Department of Health and Human Services, 2007.
12New York State Department of Health, 2008.
13Northern Utah Healthcare Coalition, 2010.
14Ohio Department of Health and Ohio Hospital Association, 2012.
15State of Michigan, 2012b.
16Tennessee Department of Health, 2009.
17Utah Hospitals and Health Systems Association for the Utah Department of Health, 2009.
18Wisconsin Hospital Association, Inc., 2010.
19Devereaux et al., 2008.
(MSSS) is “a real-time surveillance system that tracks chief presenting complaints from emergent care settings, enabling public health officials and providers to monitor trends and investigate unusual increases in symptom presentations” (State of Michigan, 2012a, 2013). Health care facilities have enrolled to participate voluntarily on an ongoing basis since the system was launched in 2003; currently, 89 facilities submit data electronically to the MSSS.3 The system continues to evolve to support public health and information technology needs. In 2013, the MSSS will be able to receive data from health care professionals in settings other than hospital emergency departments, in support of Meaningful Use, which involves using electronic health record technology to ensure complete and accurate information, better access to information, and patient empowerment (CMS, 2013; HealthIT.gov, 2013). In 2012, the MSSS processed more than 4.3 million ED registrations. The chief complaints from ED registrations are categorized using a free-text complaint coder. Trends in the categorical groups are analyzed using an adaptive recursive least squares algorithm, and alerts are sent to Michigan public health officials when unusual increases in symptom presentations are detected. In addition, the MSSS supports enhanced surveillance that is conducted during high-profile events (e.g., local NCAA basketball tournament games, World Series, Super Bowl, and North American International Auto Show), with findings distributed to stakeholders. Access to the MSSS interface is role based: participating health care facilities can visualize and report on their own data, including the ability to run ad hoc queries. The local health departments can view data from within their jurisdictions, and key Michigan Department of Community Health staff have full statewide access.
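The alerting step described above, an adaptive recursive least squares (RLS) algorithm flagging unusual increases, can be illustrated in miniature. The sketch below is a generic example of the technique, not the MSSS implementation: a scalar RLS tracker with a forgetting factor follows the daily count for one syndrome category, and a day is flagged when its count exceeds the prediction by more than k residual standard deviations. The parameter values and class name are assumptions chosen for illustration.

```python
# Generic sketch of adaptive RLS anomaly detection on a daily count
# series (not the MSSS implementation; parameters are assumptions).
import math

class RlsDetector:
    """Scalar RLS level tracker with forgetting; flags upward spikes."""

    def __init__(self, lam=0.95, k=3.0):
        self.lam = lam      # forgetting factor (0 < lam <= 1)
        self.k = k          # alert threshold, in residual SDs
        self.theta = None   # adaptive level estimate
        self.p = 100.0      # estimate covariance (scalar case)
        self.var = None     # smoothed residual variance

    def update(self, y):
        """Feed one day's count; return True if it triggers an alert."""
        if self.theta is None:          # first observation seeds the level
            self.theta = float(y)
            return False
        resid = y - self.theta
        alert = (self.var is not None
                 and resid > self.k * math.sqrt(self.var))
        # Scalar RLS update with constant regressor x = 1:
        gain = self.p / (self.lam + self.p)
        self.theta += gain * resid
        self.p = (1.0 - gain) * self.p / self.lam
        # Exponentially smoothed residual variance (floored to avoid
        # a zero threshold on flat baselines).
        sq = max(resid * resid, 1.0)
        self.var = (sq if self.var is None
                    else self.lam * self.var + (1.0 - self.lam) * sq)
        return alert

det = RlsDetector()
counts = [20, 22, 19, 21, 20, 23, 21, 20, 22, 21, 60]  # spike on final day
flags = [det.update(c) for c in counts]
```

The forgetting factor is what makes the tracker adaptive: slow seasonal drift is absorbed into the baseline, while an abrupt jump stands out against the smoothed residual variance.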
Since 2008, MSSS data contributions have informed national influenza surveillance via the Distribute Project and national syndromic surveillance efforts via BioSense, soon to be resumed with the redesigned BioSense 2.0 (see CDC, 2012a).
The benefits and costs of creating new surveillance systems that are highly dependent on technology or labor for data entry should be carefully considered. For a discussion of the benefits, limitations, and resource requirements of syndromic surveillance, see IOM and NRC (2011).
Indicators and Triggers in U.S. Department of Veterans Affairs Medical Centers (VAMCs) and Military Treatment Facilities (MTFs)
The coordination of VAMCs and MTFs into planning efforts and response to catastrophic disaster events is of vital importance to the two constituencies served by these unique health care organizations. Both can be considered to be “closed” systems, focused on the delivery of care to specific patient populations that they are entrusted to serve: namely, veterans and active-duty military and their dependents. But both systems are also recognized to be important components of the local and regional health care communities in which they are located, particularly for a disaster response. At the local level, VAMC and MTF leadership are given the authorization to provide care to the communities in which they are situated, invoking principles of humanitarian assistance to ensure that patient care needs are addressed when the entire community is under duress. In the evolving efforts to better organize health care entities to respond to disaster events, VAMC and MTF facilities have been encouraged to become members of health care coalitions. For example, the Washington, DC, VAMC, the former Walter Reed Army Medical Center, and Bethesda National Naval Medical Center (now combined as the Walter Reed National Military Medical Center at Bethesda, Maryland) have been a central component of the DC Hospital Coalition. In Northern Virginia, DeWitt Army Hospital at Fort Belvoir was a founding member of the Northern Virginia Hospital Alliance.4
3 Unpublished work; information from committee member Linda Scott.
In this regard, the functions of VAMC and MTF facilities during disaster events are best considered to be component parts of the larger, regional health care system. Therefore, they will be expected to use similar indicators, triggers, and tactics as those used by their public- and private-sector counterparts. In the case of mature health care coalitions that have included these facilities within their membership, the use of situational awareness tools in place across the community are likely to provide this information to all member hospitals, including those in the Department of Veterans Affairs (VA) and DoD. In those communities in which the development of health care coalitions is still evolving, the VA and DoD facilities may be in position to help coordinate and facilitate the sharing of key information. This is particularly true given their connectivity to a network of information systems, supply chains, and health care facilities that are located outside of the immediate community, all part of a national health care system.
One of the difficulties that VAMC and MTF leadership may face under catastrophic response conditions will be determining how to parse available resources between two distinct mission profiles: service to their members and provision of care to the community at large. In this respect, there may be “internal” indicators specific to the VA or DoD system that will have to be evaluated in addition to the usual measures being used to determine local and regional capabilities. The community and the VA/DoD system may have different data needs, and community and national systems indicators may vary, so the systems used to collect them may not be standardized. These facilities walk a fine line in a crisis situation: it is not in anyone’s best interest for the level of care provided at the institution to be inconsistent with that being provided in the community, yet these are not “community facilities.” For example, VA facilities may bear a substantially greater burden than community hospitals during influenza epidemics affecting the elderly, while military facilities bear a substantially smaller one; balancing these demands against a local coalition’s resources may be very helpful in easing strain on the system, and proactive ways to accomplish this should be explored with the facilities (e.g., a local VA might prefer to accept those with prior service connections in preference to patients without such connections during a community crisis). Consideration of CSC planning by leadership at the Veterans Integrated Service Network level, the Veterans Health Administration, and Defense Health Headquarters (DoD) will be crucial to the successful implementation of the tactics derived from the analysis of key indicators.
Legal Indicators and Triggers
Detailed examination of legal issues is outside of the scope of this project, although there may be interesting issues regarding legal indicators and triggers that deserve additional attention (see Box 2-6). For more discussion and details about the ethical principles and legal issues, see the Institute of Medicine’s previous reports on crisis standards of care (IOM, 2009, 2012a).
4 Unpublished work; information from committee co-chair Dan Hanfling.
Legal Indicators and Triggers
Indicators and triggers may need to be invoked in the legal and regulatory realms to facilitate provision of health services. During the 2012-2013 seasonal influenza epidemic, for example, some local and state governments took the proactive step of declaring emergencies to facilitate their response efforts, including vaccine administration (City of Boston, 2013; State of New York Executive Chamber, 2013). Such issues vary state by state and require jurisdictional analysis and assessment of the need for emergency activation for purposes of
• Increasing visibility of the incident (risk communication);
• Involving emergency management and additional organizations;
• Interagency coordination;
• Enhancing staff availability and deploying volunteers;
• Requiring additional social distancing measures;
• Allowing interstate licensure reciprocity;
• Increasing vaccine availability;
• Expanding scopes of practice for relevant health care personnel (e.g., pharmacists authorized to provide pediatric influenza vaccine);
• Mobilizing specific resources; and/or
• Issuing waivers of specific statutory or regulatory requirements that may impede response efforts.
In many cases, such declarations are political in nature or made to address specific regulatory requirements. Even with a national public health emergency declaration, the resulting state or local inconsistencies across a geographic region in the timing and breadth of emergency powers require careful assessment and clear explanations to practitioners and the public. Note that a federal public health emergency declaration does not mean that states will make such a declaration, and vice versa. A consistent and proactive approach, using indicators of disease prevalence and of difficulties in delivering a conventional response to health care needs, as well as triggers related to the allocation of specific resources in shortage, may be helpful.
In planning, facilities and agencies should first identify the key response strategies they will use. Second, they should examine and optimize the data sources and information that could signal when those strategies are needed. Third, they should determine the actions to be taken when a trigger threshold is reached: are they scripted or non-scripted? Fourth, are there scripted tactics that can be employed, or should the tactics be non-scripted and incident-specific?
Determination of indicators and triggers can seem daunting. However, discussing these issues at all tiers of the emergency response system will help clarify and develop indicators and triggers that will inform decision making and help deliver the best possible care during a disaster, given the circumstances. The toolkit in the subsequent chapters will facilitate these conversations.
AHRQ (Agency for Healthcare Research and Quality). 2005. National hospital available beds for emergencies and disasters (HAvBED) System. Rockville, MD: AHRQ. http://archive.ahrq.gov/prep/havbed/index.html (accessed April 13, 2013).
Alaskan Health Care Providers and Medical Emergency Preparedness-Pediatrics (MEP-P) Project. 2008 [draft]. Medical Alaskan technical recommendations for pediatric medical triage and resource allocation in a disaster. Alaskan Health Care Providers and Medical Emergency Preparedness-Pediatrics (MEP-P) Project. http://a2p2.com/oldsite/mep-p/ethics/ MEP-P_Technical_Recommendations_with_Appendices_DRAFT_7-08.PDF (accessed February 14, 2013).
Asplin, B. R., T. J. Flottemesch, and B. D. Gordon. 2006. Developing models for patient flow and daily surge capacity research. Academic Emergency Medicine 13(11):1109-1113.
ASPR (Assistant Secretary for Preparedness and Response). 2013. MedMap. Washington, DC: Department of Health and Human Services. https://medmap.hhs.gov (accessed April 3, 2013).
Barbisch, D. F., and K. L. Koenig. 2006. Understanding surge capacity: Essential elements. Academic Emergency Medicine 13(11):1098-1102.
Bayram, J. D., and S. Zuabi. 2012. Disaster metrics: Quantification of acute medical disasters in trauma related multiple casualty events through modeling of the Acute Medical Severity Index. Prehospital and Disaster Medicine 27(2):130-135.
Bayram, J. D., S. Zuabi, and I. Subbarao. 2011. Disaster metrics: Quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty incidents. Disaster Medicine and Public Health Preparedness 5(2):117-124.
Bayram, J. D., S. Zuabi, and M. J. Sayed. 2012. Disaster metrics: Quantitative estimation of the number of ambulances required in trauma-related multiple casualty events. Prehospital and Disaster Medicine 27(5):445-451.
Bradt, D. A., P. Aitken, G. Fitzgerald, R. Swift, G. O’Reilly, and B. Bartley. 2009. Emergency department surge capacity: Recommendations of the Australasian Surge Strategy Working Group. Academic Emergency Medicine 16(12):1350-1358.
Brownstein, J. S., C. C. Freifeld, and L. C. Madoff. 2009. Digital disease detection—harnessing the Web for public health surveillance. New England Journal of Medicine 360(21):2153-2157.
Buehler, J. W., A. Sonricker, M. Paladini, P. Soper, and F. Mostashari. 2008. Syndromic surveillance practice in the United States: Findings from a survey of state, territorial, and selected local health departments. Advances in Disease Surveillance 6(3):1-20. http://www.isdsjournal.org/articles/2618.pdf (accessed June 11, 2013).
Buehler, J. W., E. A. Whitney, D. Smith, M. J. Prietula, S. H. Stanton, and A. P. Isakov. 2009. Situational uses of syndromic surveillance. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 7(2):165-177.
Burkle, F. M., E. B. Hsu, M. Loehr, M. D. Christian, D. Markenson, L. Rubinson, and F. L. Archer. 2007. Definition and functions of Health Unified Command and Emergency Operations Centers for large-scale bioevent disasters within the existing ICS. Disaster Medicine and Public Health Preparedness 1(2):135-141.
Butler, D. 2013. When Google got flu wrong. Nature 494(7436):155-156.
California Department of Public Health and California Emergency Medical Services Authority. 2011. California public health and medical emergency operations manual. http://www.emsa.ca.gov/disaster/files/EOM712011.pdf (accessed February 14, 2013).
Carneiro, H. A., and E. Mylonakis. 2009. Google trends: A Web-based tool for real-time surveillance of disease outbreaks. Clinical Infectious Diseases 49(10):1557-1564.
CDC (Centers for Disease Control and Prevention). 2003. Mass casualties predictor. Atlanta, GA: CDC. http://www.bt.cdc.gov/masscasualties/predictor.asp (accessed March 11, 2013).
CDC. 2010. Blast injuries: Fact sheets for professionals. Atlanta, GA: CDC. http://www.bt.cdc.gov/masscasualties/pdf/blast_fact_sheet_professionals-a.pdf (accessed March 10, 2013).
CDC. 2012a. BioSense program. Atlanta, GA: CDC. http://www.cdc.gov/biosense (accessed June 10, 2013).
CDC. 2012b. Overview of influenza surveillance in the United States. Atlanta, GA: CDC. http://www.cdc.gov/flu/pdf/weekly/overview.pdf (accessed March 5, 2013).
CDC. 2013. FluView: 2012-2013 Influenza season week 14 ending April 6, 2013. Atlanta, GA: CDC. http://www.cdc.gov/flu/weekly (accessed March 5, 2013).
Challen, K., and D. Walter. 2006. Accelerated discharge of patients in the event of a major incident: Observational study of a teaching hospital. BMC Public Health 6(1):108.
City of Albuquerque, Office of Emergency Management. 2005. A strategic guide for the city-wide response to and recovery from major emergencies and disasters (Annex 6—health and medical). Albuquerque, NM: City of Albuquerque, Office of Emergency Management. http://www.cabq.gov/police/emergency-management-office/documents/Annex6HealthandMedical.pdf (accessed February 14, 2013).
City of Boston. 2013 (January 9). Mayor Menino declares public health emergency as flu epidemic worsens. http://www.cityofboston.gov/news/Default.aspx?id=5922 (accessed March 11, 2013).
CMS (Centers for Medicare & Medicaid Services). 2013. Meaningful use. Baltimore, MD: CMS. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Meaningful_Use.html (accessed May 3, 2013).
Colorado Department of Public Health and Environment. 2009. Guidance for alterations in the healthcare system during moderate to severe influenza pandemic. Denver, CO: Colorado Department of Public Health and Environment.
Cook, S., C. Conrad, A. L. Fowlkes, and M. H. Mohebbi. 2011. Assessing Google Flu Trends performance in the United States during the 2009 influenza virus A (H1N1) pandemic. PLoS ONE 6(8):e23510.
Davidson, S. J., K. L. Koenig, and D. C. Cone. 2006. Daily patient flow is not surge: Management is prediction. Academic Emergency Medicine 13(11):1095-1096.
de Boer, J. 1999. Order in chaos: Modeling medical management in disasters. European Journal of Emergency Medicine 6(2):141-148.
DeLia, D. 2006. Annual bed statistics give a misleading picture of hospital surge capacity. Annals of Emergency Medicine 48(4):384-388.
Devereaux, A. V., J. R. Dichter, M. D. Christian, N. N. Dubler, C. E. Sandrock, J. L. Hick, T. Powell, J. A. Geiling, D. E. Amundson, T. E. Baudendistel, D. A. Braner, M. A. Klein, K. A. Berkowitz, J. R. Curtis, and L. Rubinson. 2008. Definitive care for the critically ill during a disaster: A framework for allocation of scarce resources in mass critical care. From a Task Force for Mass Critical Care summit meeting, January 26-27, 2007, Chicago, IL. Chest 133(Suppl 5):S51-S66. http://www.ceep.ca/resources/Definitive-Care-Critically-Ill-Disaster.pdf (accessed March 4, 2013).
Dugas, A. F., Y. H. Hsieh, S. R. Levin, J. M. Pines, D. P. Mareiniss, A. Mohareb, C. A. Gaydos, T. M. Perl, and R. E. Rothman. 2012. Google Flu Trends: Correlated with emergency department influenza rates and crowding metrics. Clinical Infectious Diseases 54(4):463-469.
Dugas, A. F., M. Jalalpour, Y. Gel, S. Levin, F. Torcaso, and T. Igusa. 2013. Influenza forecasting with Google Flu Trends. PLoS ONE 8(2):e56176.
EMSA (California Emergency Medical Services Authority). 2007. Disaster Medical Services Division—Hospital Incident Command System (HICS). http://www.emsa.ca.gov/hics (accessed March 11, 2013).
Espino, J., W. Hogan, and M. Wagner. 2003. Telephone triage: A timely data source for surveillance of influenza-like diseases. AMIA Annual Symposium Proceedings 2003:215-219.
FEMA (Federal Emergency Management Agency). 2013a. National Incident Management System (NIMS). http://www.fema.gov/emergency/nims (accessed March 11, 2013).
FEMA. 2013b. Multiagency coordination systems. http://www.fema.gov/multiagency-coordination-systems (accessed May 15, 2013).
Florida Department of Health. 2011 (April 5). Pandemic influenza: Triage and scarce resource allocation guidelines. Tallahassee: Florida Department of Health. http://www.doh.state.fl.us/demo/bpr/PDFs/ACS-GUIDE-Ver10-5.pdf (accessed February 14, 2013).
Flu Near You. 2013. Flu near you. https://flunearyou.org (accessed March 5, 2013).
Furbee, P. M., J. H. Coben, S. K. Smyth, W. G. Manley, D. E. Summers, N. D. Sanddal, T. L. Sanddal, J. C. Helmkamp, R. L. Kimble, R. C. Althouse, and A. T. Kocsis. 2006. Realities of rural emergency medical services disaster preparedness. Prehospital and Disaster Medicine 21(2):64-70.
Garrett, L. 2009 (June 12). Interview. Hurdles in declaring swine flu a pandemic. Council on Foreign Relations. http://www.cfr.org/public-health-threats/hurdles-declaring-swine-flu-pandemic/p19617 (accessed March 11, 2013).
GFT (Google Flu Trends). 2013. Google flu trends. http://www.google.org/flutrends (accessed March 5, 2013).
Ginsberg, J., M. H. Mohebbi, R. S. Patel, L. Brammer, M. S. Smolinski, and L. Brilliant. 2009. Detecting influenza epidemics using search engine query data. Nature 457(7232):1012-1014.
Gotham, I. J., D. L. Sottolano, M. E. Hennessy, J. P. Napoli, G. Dobkins, L. H. Le, R. H. Burhans, and B. I. Fage. 2007. An integrated information system for all-hazards health preparedness and response: New York State Health Emergency Response Data System. Journal of Public Health Management and Practice 13(5):486-496.
Gursky, E., T. V. Inglesby, and T. O’Toole. 2003. Anthrax 2001: Observations on the medical and public health response. Biosecurity and Bioterrorism 1(2):97-110.
Handler, J. A., M. Gillam, T. D. Kirsch, and C. F. Feied. 2006. Metrics in the science of surge. Academic Emergency Medicine 13(11):1173-1178.
Hanfling, D. 2011. Public health response to terrorism and bioterrorism: Inventing the wheel. In Remembering 9/11 and anthrax: Public health’s role in national defense. Washington, DC: Trust for America’s Health. http://healthyamericans.org/assets/files/TFAH911Anthrax10YrAnnvFINAL.pdf (accessed May 3, 2013).
HealthIT.gov. 2013. Meaningful use. http://www.healthit.gov/policy-researchers-implementers/meaningful-use (accessed May 15, 2013).
HealthMap. 2013. HealthMap. http://healthmap.org/en (accessed April 3, 2013).
Hick, J. L., K. L. Koenig, D. Barbisch, and T. A. Bey. 2008. Surge capacity concepts for health care facilities: The CO-S-TR model for initial incident assessment. Disaster Medicine and Public Health Preparedness 2(Suppl 1):S51-S57.
Hirshberg, A., G. S. Bradford, T. Granchi, M. J. Wall, K. L. Mattox, and M. Stein. 2005. How does casualty load affect trauma care in urban bombing incidents? A quantitative analysis. Journal of Trauma 58(4):686-695.
Indiana State Department of Health. 2009. Pandemic influenza outbreak plan. Indianapolis, IN: Indiana State Department of Health. http://www.state.in.us/isdh/files/PandemicInfluenzaPlan.pdf (accessed February 14, 2013).
IOM (Institute of Medicine). 2007a. Emergency medical services: At the crossroads. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=11629 (accessed June 7, 2013).
IOM. 2007b. Hospital-based emergency care: At the breaking point. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=11621 (accessed June 7, 2013).
IOM. 2009. Guidance for establishing crisis standards of care for use in disaster situations: A letter report. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=12749 (accessed April 3, 2013).
IOM. 2012a. Crisis standards of care: A systems framework for catastrophic disaster response. Washington, DC: The National Academies Press. http://www.nap.edu/openbook.php?record_id=13351 (accessed April 3, 2013).
IOM. 2012b. Public engagement on facilitating access to antiviral medications and information in an influenza pandemic: Workshop series summary. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=13404 (accessed May 31, 2013).
IOM and NRC (National Research Council). 2011. BioWatch and public health surveillance: Evaluating systems for the early detection of biological threats. Abbreviated version. Washington, DC: The National Academies Press. http://www.nap.edu/catalog.php?record_id=12688 (accessed June 7, 2013).
Israel Ministry of Health. 1976. Sahar Committee for Hospital Preparedness. Tel Aviv: Israel Ministry of Health.
Jenkins, J. L., R. E. O’Connor, and D. C. Cone. 2006. Differentiating large-scale surge versus daily surge. Academic Emergency Medicine 13(11):1169-1172.
Kaji, A., K. L. Koenig, and T. Bey. 2006. Surge capacity for healthcare systems: A conceptual framework. Academic Emergency Medicine 13(11):1157-1159.
Kansas Department of Health and Environment. 2013. Kansas pandemic influenza preparedness and response plan. Topeka: Kansas Department of Health and Environment. http://www.kdheks.gov/cphp/download/KS_PF_Plan.pdf (accessed February 14, 2013).
Kanter, R. K. 2007. Strategies to improve pediatric disaster surge response: Potential mortality reduction and tradeoffs. Critical Care Medicine 35(12):2837-2842.
Kelen, G. D., C. K. Kraus, M. L. McCarthy, E. Bass, E. B. Hsu, G. Li, J. J. Scheulen, J. B. Shahan, J. D. Brill, and G. B. Green. 2006. Inpatient disposition classification for the creation of hospital surge capacity: A multiphase study. Lancet 368(9551):1984-1990.
Kelen, G. D., M. L. McCarthy, C. K. Kraus, R. Ding, E. B. Hsu, G. Li, J. B. Shahan, J. J. Scheulen, and G. B. Green. 2009. Creation of surge capacity by early discharge of hospitalized patients at low risk for untoward events. Disaster Medicine and Public Health Preparedness 3(Suppl 2):S10-S16.
Kellermann, A. L., A. P. Isakov, R. Parker, M. T. Handrigan, and S. Foldy. 2010. Web-based self-triage of influenza-like illness during the 2009 H1N1 influenza pandemic. Annals of Emergency Medicine 56(3):288-294.
Kentucky Department of Public Health, Division of Epidemiology and Health Planning. 2007. Kentucky pandemic influenza preparedness plan. Frankfort: Kentucky Department of Public Health, Division of Epidemiology and Health Planning. http://chfs.ky.gov/nr/rdonlyres/6cd366d2-6726-4ad0-85bb-e83cf769560e/0/kypandemicinfluenzapreparednessplan.pdf (accessed February 14, 2013).
King County, Seattle Health Care Coalition, and Northwest Healthcare Response Network. 2009 (unpublished). H1N1 ICU data questions.
Kirsch, T., L. Sauer, and D. Guha-Sapir. 2012. Analysis of the international and US response to the Haiti earthquake: Recommendations for change. Disaster Medicine and Public Health Preparedness 6(3):200-208.
Koonin, L. M., and D. Hanfling. 2013. Broadening access to medical care during a severe influenza pandemic: The CDC nurse triage line project. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 11(1):75-80.
Kosashvili, Y., L. Aharonson-Daniel, K. Peleg, A. Horowitz, D. Laor, and A. Blumenfeld. 2009. Israeli hospital preparedness for terrorism-related multiple casualty incidents: Can the surge capacity and injury severity distribution be better predicted? Injury 40(7):727-731.
Lombardo, J. S., H. Burkom, and J. Pavlin. 2004. Essence II and the framework for evaluating syndromic surveillance systems. Morbidity and Mortality Weekly Report 53(Suppl):159-165.
Lurie, N. 2012 (June 7). Nicole Lurie to John Parker and members of the National Biodefense Science Board (NBSB). Letter. Washington, DC: ASPR. http://www.phe.gov/Preparedness/legal/boards/nbsb/Documents/sa-evaluation.pdf (accessed June 10, 2013).
Magruder, S. F., S. H. Lewis, A. Najmi, and E. Florio. 2004. Progress in understanding and using over-the-counter pharmaceuticals for syndromic surveillance. Morbidity and Mortality Weekly Report 53(Suppl):117-122.
Manley, W. G., P. M. Furbee, J. H. Coben, S. K. Smyth, D. E. Summers, R. C. Althouse, R. L. Kimble, A. T. Kocsis, and J. C. Helmkamp. 2006. Realities of disaster preparedness in rural hospitals. Disaster Management and Response 4(3):80-87.
MappyHealth. 2013. MappyHealth. http://mappyhealth.com (accessed March 5, 2013).
McCarthy, M. L., D. Aronsky, and G. D. Kelen. 2006. The measurement of daily surge and its relevance to disaster preparedness. Academic Emergency Medicine 13(11):1138-1141.
McCarthy, M. L., S. L. Zeger, R. Ding, D. Aronsky, N. R. Hoot, and G. D. Kelen. 2008. The challenge of predicting demand for emergency department services. Academic Emergency Medicine 15(4):337-346.
Merriam-Webster Dictionary. 2013. Definition of “threshold.” Springfield, MA: Encyclopaedia Britannica. http://www.merriam-webster.com/dictionary/threshold (accessed April 3, 2013).
MIEMSS (Maryland Institute for Emergency Medical Services Systems). 2013. EMRC/SYSCOM. http://www.miemss.org/home/Departments/EMRCSYSCOM/tabid/139/Default.aspx (accessed February 1, 2013).
Minnesota Department of Health, Office of Emergency Preparedness. 2012. Minnesota healthcare system preparedness program. St. Paul: Minnesota Department of Health, Office of Emergency Preparedness. http://www.publichealthpractices.org/sites/cidrappractices.org/files/upload/372/372_protocol.pdf (accessed February 14, 2013).
NBSB (National Biodefense Science Board). 2013. An evaluation of our nation’s public health and healthcare situational awareness: A brief report from the National Biodefense Science Board (NBSB). Washington, DC: ASPR. http://www.phe.gov/Preparedness/legal/boards/nbsb/Documents/sa-evaluation.pdf (accessed June 10, 2013).
New Hampshire Department of Health and Human Services. 2007. Influenza pandemic public health preparedness and response. Concord: New Hampshire Department of Health and Human Services. http://www.dhhs.state.nh.us/dphs/cdcs/avian/documents/pandemic-plan.pdf (accessed February 14, 2013).
New York State Department of Health. 2008. Pandemic influenza plan. Albany: New York State Department of Health. http://www.health.ny.gov/diseases/communicable/influenza/pandemic/plan/docs/pandemic_influenza_plan.pdf (accessed February 14, 2013).
Northern Utah Healthcare Coalition. 2010. Northern Utah regional medical surge capacity plan. http://www.brhd.org/index.php?option=com_content&task=view&id=457&Itemid=31 (accessed February 14, 2013).
Ohio Department of Health and Ohio Hospital Association. 2012 [draft]. Ohio medical coordination plan: Emergency medical service annex. Columbus: Ohio Department of Health.
Ortiz, J. R., H. Zhou, D. K. Shay, K. M. Neuzil, A. L. Fowlkes, and C. H. Goss. 2011. Monitoring influenza activity in the United States: A comparison of traditional surveillance systems with Google Flu Trends. PLoS ONE 6(4):e18687.
Peleg, K., and A. L. Kellermann. 2009. Enhancing hospital surge capacity for mass casualty events. Journal of the American Medical Association 302(5):565-567.
Perry, A. G., K. M. Moore, L. E. Levesque, W. L. Pickett, and M. J. Korenberg. 2010. A comparison of methods for forecasting emergency department visits for respiratory illness using Telehealth Ontario calls. Canadian Journal of Public Health 101(6):464-469.
Polgreen, P. M., Y. Chen, D. M. Pennock, F. D. Nelson, and R. A. Weinstein. 2008. Using Internet searches for influenza surveillance. Clinical Infectious Diseases 47(11):1443-1448.
Price, R. A., D. Fagbuyi, R. Harris, D. Hanfling, F. Place, T. B. Todd, and A. L. Kellermann. 2013. Feasibility of web-based self-triage by parents of children with influenza-like illness: A cautionary tale. Journal of the American Medical Association Pediatrics 167(2):112-118.
Rivara, F. P., A. B. Nathens, G. J. Jurkovich, and R. V. Maier. 2006. Do trauma centers have the capacity to respond to disasters? Journal of Trauma 61(4):949-953.
Rolland, E., K. Moore, V. A. Robinson, and D. McGuiness. 2006. Using Ontario’s “telehealth” health telephone helpline as an early-warning system: A study protocol. BMC Health Services Research 6:10-16.
Satterthwaite, P. S., and C. J. Atkinson. 2012. Using “reverse triage” to create hospital surge capacity: Royal Darwin Hospital’s response to the Ashmore Reef disaster. Emergency Medicine Journal 29(2):160-162.
Schmidt, C. W. 2012. Using social media to predict and track disease outbreaks. Environmental Health Perspectives 120(1):A31-A33.
Schull, M. J. 2006. Hospital surge capacity: If you can’t always get what you want, can you get what you need? Annals of Emergency Medicine 48(4):389-390.
Schweigler, L. M., J. S. Desmond, M. L. McCarthy, K. J. Bukowski, E. L. Ionides, and J. G. Younger. 2009. Forecasting models of emergency department crowding. Academic Emergency Medicine 16(4):301-308.
Sickweather. 2013. Sickweather. http://www.sickweather.com (accessed March 5, 2013).
Signorini, A., A. M. Segre, and P. M. Polgreen. 2011. The use of Twitter to track levels of disease activity and public concern in the U.S. during the Influenza A H1N1 pandemic. PLoS ONE 6(5):e19467.
Sprung, C. L., J. L. Zimmerman, M. D. Christian, G. M. Joynt, J. L. Hick, B. Taylor, G. A. Richards, C. Sandrock, R. Cohen, and B. Adini. 2010. Recommendations for intensive care unit and hospital preparations for an influenza epidemic or mass disaster: Summary report of the European Society of Intensive Care Medicine’s Task Force for intensive care unit triage during an influenza epidemic or mass disaster. Intensive Care Medicine 36(3):428-443.
State of Michigan. 2012a. The Michigan Syndromic Surveillance System (MSSS)—Electronic syndromic submission to the Michigan Department of Community Health: Background and electronic syndromic surveillance reporting detail for MSSS. Lansing, MI: Department of Community Health. http://michiganhit.org/docs/Syndromic%20Submission%20Guide.pdf (accessed May 21, 2013).
State of Michigan. 2012b. Guidelines for Ethical Allocation of Scarce Medical Resources and Services during Public Health Emergencies in Michigan. Lansing, MI: Department of Community Health, Office of Public Health Preparedness. http://www.mymedicalethics.net/Documentation/Michigan%20DCH%20Ethical%20Scarce%20Resources%20Guidelines%20v.2.0%20rev.%20Nov%202012%20Guidelines%20Only.pdf (accessed February 14, 2013).
State of Michigan. 2013. Michigan Emergency Department Syndromic Surveillance System. Lansing, MI: Department of Community Health. http://www.michigan.gov/mdch/0,4612,7-132-2945_5104_31274-107091--,00.html (accessed April 12, 2013).
State of New York Executive Chamber. 2013 (January 12). Declaring a disaster emergency in the state of New York and temporarily authorizing pharmacists to immunize children against seasonal influenza. http://www.governor.ny.gov/executiveorder/90 (accessed March 11, 2013).
Subbarao, I., M. K. Wynia, and F. M. Burkle. 2010. The elephant in the room: Collaboration and competition among relief organizations during high-profile disasters. Journal of Clinical Ethics 21(4):328-334.
Tadmor, B., J. McManus, and K. L. Koenig. 2006. The art and science of surge: Experience from Israel and the U.S. military. Academic Emergency Medicine 13(11):1130-1134.
Tennessee Department of Health. 2009. Pandemic influenza response plan. Nashville: Tennessee Department of Health. http://health.state.tn.us/ceds/PDFs/2006_PanFlu_Plan.pdf (accessed February 14, 2013).
Timbie, J. W., J. S. Ringel, D. S. Fox, D. A. Waxman, F. Pillemer, C. Carey, M. Moore, V. Karir, T. J. Johnson, N. Iyer, J. Hu, R. Shanman, J. W. Larkin, M. Timmer, A. Motala, T. R. Perry, S. Newberry, and A. L. Kellermann. 2012. Allocation of scarce resources during mass casualty events. Rockville, MD: AHRQ. http://www.ncbi.nlm.nih.gov/books/NBK98854/pdf/TOC.pdf (accessed June 6, 2013).
Utah Hospitals and Health Systems Association for the Utah Department of Health. 2009. Utah pandemic influenza hospital and ICU triage guidelines. Salt Lake City: Utah Hospitals and Health Systems Association for the Utah Department of Health. http://pandemicflu.utah.gov/plan/med_triage081109.pdf (accessed February 14, 2013).
van Dijk, A., D. McGuiness, E. Rolland, and K. M. Moore. 2008. Can Telehealth Ontario respiratory call volume be used as a proxy for emergency department respiratory visit surveillance by public health? Canadian Journal of Emergency Medicine 10(1):18-24.
WHO (World Health Organization). 2011. Strengthening response to pandemics and other public-health emergencies. Report of the review committee on the functioning of the international health regulations (2005) and on pandemic influenza (H1N1) 2009. Geneva, Switzerland: WHO. http://www.who.int/ihr/publications/RC_report/en/index.html (accessed March 11, 2013).
Wiler, J. L., R. T. Griffey, and T. Olsen. 2011. Review of modeling approaches for emergency department patient flow and crowding research. Academic Emergency Medicine 18(12):1371-1379.
Wisconsin Hospital Association, Inc. 2010. Wisconsin executive summary: Allocation of scarce resources project. Madison: Wisconsin Hospital Association, Inc. http://www.wha.org/scarceResources.aspx (accessed February 14, 2013).
Wolf, Y. I., A. Nikolskaya, J. L. Cherry, C. Viboud, E. Koonin, and D. J. Lipman. 2010. Projection of seasonal influenza severity from sequence and serological data. PLoS Currents 2:RRN1200.