6
Preparedness Indicators
The Metropolitan Medical Response System (MMRS) program context presents some special challenges for evaluation. First, there is much to be learned from analysis of the local, state, and federal responses to the terrorist attacks on the World Trade Center and the Pentagon in September 2001; but the committee believes that chemical, biological, or radiological (CBR) terrorism incidents of the scale envisioned by the Office of Emergency Preparedness (OEP) of the U.S. Department of Health and Human Services are unlikely to occur on a regular basis. As a result, any evaluation of a response system will have to be indirect, in that it will have to measure the intermediate consequences of the MMRS program rather than the ultimate goal, which is to save lives and minimize morbidity from a terrorism incident.
Second, every city’s MMRS encompasses a web of planning activities, resources, intergovernmental agreements, and exercises at multiple levels of government. This web of activities was illustrated in Figure 5-1 in Chapter 5. The many activities in the box beneath “Emergency Capacity” represent only some of the capabilities required for an effective response to CBR terrorism events. Producing those capabilities is the concern of a wide variety of governmental and private-sector institutions through an equally wide variety of mechanisms, including the MMRS program. The MMRS program itself represents an effort to coordinate multiple entities and activities that are independently funded and that receive the authority for their activities from other sources. This complexity means that isolation and quantification of OEP’s role in creating readiness for a CBR terrorism incident will be nearly impossible, regardless of how well one might measure readiness in any given city. It also suggests that caution is called for in making changes in any part of the web of activities, for they may have unintended consequences far from the locus of change.
Third, although many of the pieces of a response plan may be thoroughly evaluated, evaluation of response capacity as a whole will, by necessity, be inferential; that is, assumptions must be made about how the component parts should work together.
Fourth, the wide variations in the resources and vulnerabilities of the MMRS program municipalities may preclude use of a single yardstick or measure that places all the MMRS cities along a single scale of readiness. As noted in the previous chapter, Washington, D.C., must anticipate attacks on numerous federal facilities and embassies, whereas Baton Rouge, Louisiana, has a variety of chemical plants that are vulnerable to attack. Some cities operate their own emergency medical services; others depend on private, county, or state assets. OEP has dealt with this variation by not attempting to impose a single model or acceptable plan on all its MMRS program cities, instead opting to encourage cities to build their own plans in conjunction with the available structures, resources, and vulnerabilities. This flexibility, however, substantially reduces the ability to impose universal performance measures and standards and makes it correspondingly difficult to devise fair and comparable evaluation tools.
Finally, the committee has been persuaded by both the preceding four observations and the written and oral explications of OEP that it should approach its tasks with a strong bias toward a formative rather than a summative evaluation. That is, the committee takes as a given that the primary goal of the proposed evaluation is constructive feedback both to OEP staff and to the MMRS program cities.
EXISTING STANDARDS
Many of the personnel, professions, organizations, and jobs referred to in the plans of MMRS program cities are governed by existing standards; some of these are legally mandated (Occupational Safety and Health Administration [OSHA] regulations), and others are voluntary. The following is a partial list of potentially relevant standards that the committee examined:
Joint Commission on Accreditation of Healthcare Organizations (JCAHO)
Standard EC.1.4—Emergency preparedness management plan
Standard EC.2.9.1—Emergency preparedness drills
Standard EC.1.4 (1997)—Security management plan
Standard EC.1.5 (1997)—Hazardous materials and waste management plan
Commission on Accreditation of Ambulance Services Standards
Organization (includes disaster plan, yearly disaster simulations)
Management
Community relations and public affairs
Human resources
Clinical services
Safety
Equipment and facilities
Communications
National Public Health Performance Standards (Centers for Disease Control and Prevention [CDC])
National Fire Protection Association Standards
NFPA 471—Recommended Practice for Responding to Hazardous Materials Incidents
NFPA 472—Standard for Professional Competence of Responders to Hazardous Materials Incidents
NFPA 473—Standard for Competencies for EMS Personnel Responding to Hazardous Materials Incidents
NFPA 1600—Standard on Disaster/Emergency Management and Business Continuity Programs
OSHA Standard (29 C.F.R. § 1910.120)—Hazardous waste operations and emergency response
Nuclear Regulatory Commission and Federal Emergency Management Agency (FEMA) Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants (NUREG–0654/FEMA–REP–1)
U.S. Department of Transportation, National Highway Traffic Safety Administration, Emergency Medical Services, National Standard Curricula
American College of Emergency Physicians Task Force Recommendations on Objectives, Content, and Competencies for Training of Emergency Medical Technicians, Emergency Physicians, and Emergency Nurses on Caring for Casualties of NBC (Nuclear, Biological, and Chemical) Incidents
With only a few exceptions, the committee deemed these standards to be of limited utility in assessing the preparedness of local communities for coping with a CBR terrorism incident. Although the National Emergency Management Association is in the process of developing an accreditation program (DeMers, 2001; National Emergency Management Association, 2001) that may ultimately serve as a means of evaluating most of the non-CBR agent-specific facets of an MMRS, most of the standards listed above are qualitative in nature and are “enforced” only by well-publicized and infrequent inspections. Most of them also focus on the adequacy of written plans, like the OEP checklist in Appendix D. None explicitly addresses CBR terrorism or an emergency of the scale described in the MMRS program contract, and attempts to apply these standards to such scenarios in the past have often proved counterproductive (e.g., misinterpretation of OSHA hazardous waste operations standards has led to expectations that hospital emergency department personnel should have Level A chemical protective suits). Furthermore, each standard applies to only one element, discipline, or agency involved in an MMRS.
It is difficult to envision a successful MMRS in which any of the constituent elements fails to meet its own narrow standards, but it is also true that a collection of individually competent elements does not guarantee a successful system. Each of the standards listed above was nevertheless examined for elements that could be incorporated into an MMRS-specific evaluation, and a number of those have been incorporated into the matrix of preparedness indicators provided in Appendix E.
EXISTING ASSESSMENT TOOLS
The committee examined the following assessment tools for possible application in whole or in part to the task of evaluating preparedness for CBR terrorism events:
Capability Assessment for Readiness (CAR)
—FEMA self-assessment instrument to evaluate state emergency management
—An 1,801-element survey administered to all states and territories in 1997
—“All-hazards” document with only a handful of items related to chemical and biological weapons
Local Capability Assessment for Readiness
—FEMA’s smaller, local community version of CAR
—Currently undergoing pilot testing in selected counties
Hazardous Materials Exercise Evaluation Supplement
—Instructions and checklist for peer reviewers in FEMA’s Comprehensive HAZMAT Emergency Response-Capability Assessment Program
—Sixteen elements, each with 10 to 50 “points of review”
—Yes-or-no responses and the time that the specific action was observed
Epidemiologic Capacity Assessment Guide
—Step 2 of a three-step process (Step 1 is document collection, and Step 3 is site visit) designed by the Council of State and Territorial Epidemiologists
—Self-assessment questionnaire
—Short answers or essays and data on speed of investigation from recent cases
—Suggestions for interviews of key personnel
State Domestic Preparedness Equipment Program Assessment and Strategy Development Tool Kit
—Instruments developed by the U.S. Department of Justice (DOJ), the Federal Bureau of Investigation, and CDC to evaluate vulnerability, threat, and public health system performance combined with assessments of required and current capabilities in the realms of fire services, hazmat services, emergency medical services, law enforcement, public works, public health, and emergency management
—A 100-page “Tool Kit” provided for use by the state and local personnel assigned to fill out the forms, but it could be the basis of peer interviews
—State assessment designed to be a compilation of local assessments, so it is really a local instrument
Public Health Assessment Instrument for Public Health Emergency Preparedness (CDC)
—Ten essential public health services amplified specifically for preparedness for CBR terrorism events
—Nineteen “indicators,” each with multiple subparts requiring mostly yes-or-no answers
—Part of DOJ state assessment instrument
Assessment of Community Linkages in Response to a Bioterrorism Event
—Draft (Spring 2001) product of JCAHO and SAIC, Inc., for the Agency for Healthcare Research and Quality
—Forty-item questionnaire for hospitals (yes-or-no and short answers)
Chemical and Bioterrorism Preparedness Checklist
—American Hospital Association 8-page self-analysis
Mass Casualty Disaster Plan Checklist: A Template for Healthcare Facilities
—A list of 135 items from the Association of Professionals in Infection Control and Epidemiology and the Center for the Study of Bioterrorism and Emerging Infections
Each of these instruments seeks information about elements of disaster preparedness that are directly relevant to CBR terrorism preparedness. All are written self-reports, and either of the two most comprehensive assessments, done properly, would take several people many hours or even several days to complete. In addition, the committee believes that self-reports are vulnerable to “corruption of indicators.” It has long been understood in evaluations of health and social programs that when rewards and punishments result from people’s performance on an indicator, that indicator can sometimes change in ways that have no bearing on the actual outcomes of the governmental program. In the context of the MMRS program, at least two possible forces can lead to corruption of indicators. First, to the extent that municipalities may believe that continued federal funding is contingent on contract compliance, self-reports may make the situation appear to be better than it really is. Second, and alternatively, if local officials believe that further funding is dependent on need, self-reporting may actually lead to an underestimation of preparedness. Like the existing standards described in the previous section, most of these instruments also focus on the adequacy of written plans, like the OEP checklist in Appendix D. In sum, the committee views them as providing too little additional assurance for the substantial effort involved.
The committee also sought information on how other countries assess their capabilities to respond to a terrorist attack with a CBR agent. The United Kingdom (UK) and Israel have faced terrorism for several decades, although conventional explosives have been the weapon employed in almost all cases, and no single incident has been of the magnitude envisioned by the MMRS program planners. Both of those countries’ armed forces have active research and development programs in the chemical and biological defense realms and equip their troops very similarly to U.S. forces. A recent paper by Sharp (2002) on counterterrorism preparation in UK cities noted that a free society cannot reveal all to its citizens, but implied that there is little evidence to back up the British government’s assertion that it is both informed and prepared. The UK national medical system would presumably make the preparation task easier than it is in the United States, but the IOM Committee staff was unable to locate a description of an assessment program or procedure comparable to that being asked of the Committee.
Israeli measures to protect its citizenry from possible attack with chemical or biological weapons during the Persian Gulf war of 1991 are well known. Danon and Shemer (1994) provide a large collection of papers on Israeli medical lessons from the Gulf war. Every person in Israel, for example, has a personal protection kit containing a gas mask, decontamination powder, and an autoinjector of atropine. In times of national strife, all Israeli health services are coordinated through a Supreme Hospitalization Authority, and civilian and military patients become one pool. As a result, civilian hospitals are closely involved in planning for the care of chemical and biological casualties. In fact, all Israeli hospitals are expected to be able to manage a sudden influx of patients in a mass-casualty incident equal to 20 percent of the number of the hospital’s beds (Personal communication, Y. Waisman, Director, Unit of Emergency Medicine, Schneider Children’s Medical Center of Israel, Petah-Tiqva, to F. Henretig, March 1, 2001). Their plans also assume that half of the patients would be moderately to critically ill and that 20 percent would be pediatric victims. Chemical warfare drills involving both emergency medical services and hospitals are conducted every 36 months, mass-casualty drills every 18 months, and simulations with senior hospital and military staff every 12 months. An innovation the IOM Committee finds attractive is the use of “smart simulated casualties” in these drills—military physicians and recent graduates of an Advanced Trauma Life Support course (Gofrit et al., 1997). Unpublished and undated briefing slides of Smuel Reznikovich made available to the IOM Committee by K. Tonat, Office of Emergency Preparedness, reveal that Israeli hospitals are periodically evaluated for readiness on a 110-point scale. Evaluation covers 16 subjects, including personnel, training, logistics, medical equipment, blood bank and medications, and “chemical warfare deployment,” but attempts to obtain further details were unsuccessful.
PERFORMANCE MEASURES VERSUS PREPAREDNESS INDICATORS
The MMRS contract deliverables are all written plans, and although written plans are certainly necessary elements of preparedness, they are in most cases only the beginning of a continuing process. Some elements of these plans can be carried out only during or after an actual incident or a very realistic exercise, but many require advance preparations, such as the purchase of equipment, hiring or training of personnel, or even changes in the way in which everyday business is conducted (for example, citywide electronic surveillance of emergency department visits or 911 calls). Even though these advance preparations and their documentation are actions and are necessary for preparedness, they are not the same sort of performances that might be assessed in an actual mass-casualty event (whether it involves CBR terrorism or not) or a drill or field exercise. Measures related to advance preparations are generally easier and cheaper to assess, however, and can provide a measure of effective response capability or potential (although, in the absence of an act of mass-casualty-producing CBR terrorism, there are no data that can validate the relationship between the selected indicators and actual performance). The committee therefore prefers the more inclusive term “preparedness indicators” to “performance measures.”
The committee’s recommended preparedness indicators are presented in Appendix E as a series of tables. A separate table is provided for each of the substantive deliverables of the MMRS program’s fiscal year (FY) 2000 contract (omitted are preparedness indicators for three deliverables that call for a meeting with the project officer, monthly progress reports, and a final report, respectively). In each table in Appendix E the far left column, labeled “Plan Elements,” lists the required elements of the deliverable, numbered in accord with the checklist supplied to FY 2000 MMRS program cities by OEP under the title “2000 MMRS Contract Deliverable Evaluation Instrument,” a copy of which is provided as Appendix D.
The remaining three columns of the tables present the committee’s suggested preparedness indicators for each plan element. These fall into three categories: inputs, processes, and outputs.
Inputs are the constituent parts called for, implicitly or explicitly, by a given deliverable. An adequate plan itself would constitute at least one input for nearly every deliverable, assuming that the required plans have been completed by the time the assessment is undertaken. Other inputs could be designated personnel; standard operating procedures; equipment and supplies; or schedules of planned meetings, training, and other future activities.
Processes are evidence of actions taken to support or implement the plan. Evidence that such actions had been taken or are under way might include minutes of meetings, copies of agreements that had been prepared, evidence that training sessions had been conducted, or the numbers or percentages of personnel trained to use CBR detection equipment.
Outputs are indicators of effective capabilities developed through the actions included under processes, that is, indicators of the effectiveness of actions taken to support or implement the MMRS program plan. They would include preparations that have been completed, for example, establishment of a stockpile of antidotes and antibiotics appropriate for the agents that pose the greatest threat, with evidence of adequate maintenance and deployment procedures. Another output would be demonstration of critical knowledge, skills, and abilities in tabletop exercises, full-scale drills, or surrogate incidents (deliberate scares and false alarms, unintentional chemical releases, naturally occurring epidemics, or isolated cases of rare diseases). Outputs may be evaluated through expert judgment by peer reviewers of answers to written questions or on-site probes. An important advantage of outputs is that they reflect intangibles not easily captured by the input and process indicators suggested by the committee. For example, a strong MMRS requires a champion with the desire and commitment to continually advocate for the project; individuals who are willing to cooperate; a change in attitude by organizational leadership that will adopt an interorganizational and systemic approach to the MMRS; and leaders from local, state, federal, and private agencies with trust and sensitivity to each other’s missions, goals, strengths, and weaknesses.
The best evidence for preparedness will always be outputs, which are the end products of processes undertaken with inputs. A variety of circumstances, including the timing of the assessment, may make collection of output data impossible or impractical. In this circumstance evidence for preparedness might be sought among inputs and processes. All three types of indicators are, however, merely surrogate or proxy measures of MMRS effectiveness that are based on the judgment of knowledgeable students of the field but that have never been truly validated (and that cannot be, short of an actual mass-casualty CBR terrorism incident).
The tables in Appendix E present many preparedness indicators, in part because of the committee’s decision to derive indicators for each of the items on OEP’s checklist of elements required in the plan. In fact, no practical evaluation program could or should use all the indicators listed. Use of the output-based indicators, presented in the far right column of each table in Appendix E, provides the best means of assessing readiness, and whenever possible, these indicators should be used in preference to process- or input-based indicators. The importance of the output-based indicators, especially those obtained from exercises or careful evaluations of real disasters, cannot be overemphasized. Similarly, process-based indicators should take precedence over input-based indicators. In addition, it should be clear that every element of the plan need not be given equal weight in the evaluation of preparedness. Indeed, it may not be necessary to include every element in even a very comprehensive evaluation. This selection and prioritization process is addressed in Chapter 8, as is determination of the most effective and efficient means of collecting the desired information and specifying some minimum standards for preparedness wherever possible.