Preparing for Terrorism: Tools for Evaluating the Metropolitan Medical Response System Program

9 Closing Remarks

The charge to the Institute of Medicine’s (IOM’s) Committee on the Evaluation of the Metropolitan Medical Response System Program was to identify or develop performance measures and systems to assess the effectiveness of, and identify barriers related to, the Metropolitan Medical Response System (MMRS) development process, and then to establish appropriate evaluation methods, tools, and processes.

Phase I of this project focused on identifying potential performance measures and systems. In Phase II, the committee used the performance measures developed in Phase I to develop appropriate evaluation methods, tools, and processes to assess the MMRS development process, both at the national level (program management) and at the local level (program success). The charge to the committee included a number of specific questions posed by staff of the Office of Emergency Preparedness (OEP) to help clarify the goals of the project. The questions associated with Phase I were answered in the Phase I report (Institute of Medicine, 2001). Those for Phase II are as follows:

(a) What is the most appropriate approach or model for evaluating the MMRS development process (e.g., surveys, interviews, review of plans, peer review, or operational tests)?

(b) Is there an appropriate sample size from which the impact of the MMRS development process could adequately be gauged?

(c) Considering the variance in local health systems, how can OEP appropriately draw meaningful conclusions from the results of this evaluation?
The primary products of this report clearly answer question (a): a questionnaire survey on program management, to be answered by OEP’s primary point of contact in each MMRS community; a list of essential capabilities for effective response to chemical, biological, and radiological (CBR) terrorism, with associated preparedness indicators; and a three-element evaluation procedure designed to measure program success. The three elements are review of written documents and data, a site visit by a team of peer reviewers, and observations at exercises and drills; they are complementary means of analyzing a community’s response capabilities.

The answer to question (b), on the appropriate sample size with which the impact of the MMRS program can be gauged, is also clear, but no doubt less satisfying. As noted elsewhere in this report, in the absence of any proper control cities or pre-MMRS program data, it will be impossible to unequivocally assign credit to OEP for high states of preparedness. Most of the larger cities have received training and equipment from the U.S. Department of Defense or the U.S. Department of Justice, some have received grants and training from the Centers for Disease Control and Prevention, and all have spent time and money from state and local budgets. The MMRS program’s emphasis on multiagency, multijurisdictional planning has undoubtedly played a major role in increasing preparedness in many cities, but no large city could become well prepared solely as a result of the relatively meager funding provided by the OEP contracts. Technically, then, there is no sample size that will allow valid generalization about the impact of the MMRS program.
Given this answer to question (b), the answer to question (c), on just what conclusions OEP can draw from the use of the committee’s suggested evaluation tools, becomes very important. It is embodied in what were called “guiding principles” in Chapter 8. The first of these principles is that program assessment serves primarily to identify and correct shortfalls in OEP’s MMRS program. At the community level, evaluation is an exercise designed to guide the distribution of local, state, and federal resources. It should be valued and understood as an opportunity for local communities to determine the areas in need of improvement and support, rather than as a test of communities’ self-reliance. In fact, the committee believes that few if any communities would receive high grades on all essential capabilities if the recommended evaluation program began tomorrow.

A second and equally important principle holds that evaluation should be part of a continuous learning and continuous quality improvement program, not a one-time snapshot. That is, readiness should be seen as a process rather than a state. This implies a continuing relationship between the communities and their evaluators, one that includes financial as well as technical and educational support. When this study began in the autumn of 2000, the notion of a continuing financial relationship with even a small subset of MMRS program cities would have seemed pointless, given the limited OEP budget and the mandate to develop programs in the 120 largest cities. As this report is being written in April 2002, however, the U.S. Department of Health and Human Services (DHHS) has begun distributing funds from the more than $2.9 billion of fiscal year (FY) 2002 supplemental appropriations to address bioterrorism, almost 10 times the amount available in FY 2001. More than $1 billion of that total is designated to help states prepare their public health infrastructures to respond to a bioterrorism attack (U.S. Department of Health and Human Services, 2002). The $51 million allocated to OEP for support of community emergency preparedness includes funding for an additional 25 cities, which, according to DHHS, means that 80 percent of the U.S. population is covered by an MMRS plan. Also included in DHHS spending plans for FY 2002 is $518 million to enhance preparedness at the nation’s hospitals, which, as the committee has already noted (Institute of Medicine, 2001), have been particularly difficult to incorporate into MMRS programs. Given these caveats, how can OEP best use the data from the proposed evaluation program?

STRATEGIC USES OF EVALUATION DATA: IMPLEMENTING THE “LAYERING STRATEGY”

Chapter 5 outlined a variety of evaluation functions and addressed the issue of how these might be combined through various kinds of data collection and analysis.
This “layering strategy” optimizes the use of these data at a reasonable cost. The strategy relies on several assumptions.

First, it assumes that the funded cities will indeed provide valid information in the spirit of continuous quality improvement. Particularly in the wake of the events of September 11, 2001, emergency managers and other personnel of major cities have been subject to criticism that is not conducive to problem solving. Chapter 5 outlined the problem of “corruptibility of indicators”: if blame accrues to the assessment of preparedness, the data cannot be valid and problems are unlikely to be addressed. As the committee has indicated, there is no such thing as perfect preparedness.

The second assumption is that a variety of stakeholders at the federal, state, and local levels will continue to pose questions about preparedness that require different levels of data aggregation. Not all of these questions can be anticipated, so OEP may wish to have a portfolio of findings prepared in advance. To understand the needs of stakeholders, especially policy makers, it is best to engage them while data collection is being planned. This cannot be done when decisions are imminent; it requires substantial lead time (Leviton and Boruch, 1984). Later in this chapter the committee provides some suggestions on how to do this.

The third assumption is that even with more abundant resources, not all preparedness indicators can be monitored equally well, all the time, in all metropolitan areas. OEP will therefore need to be judicious about the questions that it addresses and the breadth and depth with which it addresses them. The trade-offs between data collection for intensity, validity, and discovery versus data collection for breadth and prevalence have been amply described in the evaluation literature, as have methods that can be used to balance the two to optimize the utility of the information obtained (Cronbach, 1982).

There are, however, several ways to obtain a bigger return on the investment in data collection. One of them is the “evaluation funnel.” As outlined in previous chapters, the backbone of evaluation for any MMRS is the peer-review site visit. Site visits will provide more valid and intensive data than documents, reports, surveys, and the other methods used to obtain breadth of data. They are important both for providing formative feedback and for letting OEP know about the levels of preparedness in individual metropolitan areas. However, site visits are expensive and time-consuming, and by themselves they cannot give OEP the summative data it requires to assess overall MMRS program performance and to identify chronic areas in need of improvement across metropolitan areas.
To address this problem, evaluation in other policy areas has adopted an approach best described as an “evaluation funnel.” The evaluation funnel permits evaluators first to obtain a large amount of relatively imprecise information and then to focus on the collection of more in-depth data. For the MMRS program, the evaluation funnel would work as follows: (1) a large amount of basic information would be obtained on all program sites, (2) the evaluators would confer with program stakeholders to identify dimensions of interest for further study, and (3) the evaluators would collect more in-depth data for a sample of sites. These three stages are described in more detail below.

Stage 1: Basic information on program sites. In the case of the MMRS program, data collection for the first stage would consist of cross-tabulating data and information from the final reports (plans) of the MMRS program communities, gathering periodic reports built on very specific templates for information (the written indicators in Table 8-1), or surveying lead agencies across sites on a periodic basis (the contractor survey described in Chapter 7). This stage can be broad but shallow, in the sense that one does not obtain data and information beyond those available in the documentation or seek to establish the validity of self-reports. The products of this stage include an overview of each program’s status that can later be validated by sampling, a sense of the most pressing self-reported chronic problems across the MMRS program communities, and an overview of program characteristics and activities and the most important variations among programs.

Stage 2: Identify dimensions of interest for further study. The purpose of this stage is to understand the most important characteristics of the MMRS program and how they vary among the MMRS program communities, in preparation for the later collection of more in-depth data. At this point, OEP would present the results of Stage 1 to the key stakeholders for comment. What was revealing to them? What was of concern? What are the most important components or elements of the different MMRS programs to be studied in depth later? The importance of this consultation step cannot be overstated: it is key to ensuring relevant and useful evaluations and can also guide sampling strategies for the more intensive data collection during site visits. The products of Stage 2 include input that allows OEP to anticipate questions that stakeholders (such as the U.S. Congress) are likely to pose later, as well as a sampling frame that can be used to choose MMRS program communities for further in-depth investigation through site visits.

Stage 3: In-depth study of a sample of MMRS program communities.
The peer-conducted site visits described in Chapter 8 would be used to study samples of the MMRS program communities. If a large number of site visits can be budgeted, a formal sampling frame becomes feasible, based on the deliberations conducted during Stage 2. When it is not possible to sample enough MMRS program sites to detect statistically significant differences, dimensions of particular interest can be chosen instead. In this way, the evaluators can examine sites at the mode for a particular dimension as well as several sites that vary from the mode on that dimension. For example, even before consultation with stakeholders, it might be anticipated that the number of jurisdictions involved is likely to be a dimension of interest. OEP might sample one or more metropolitan areas with the modal number of jurisdictions in an MMRS program and then deliberately include MMRS program communities that involve far more or far fewer jurisdictions than the mode. The choice of criteria for sampling sites in any given year will come from the stakeholder consultations conducted during Stage 2.

The products of this third stage can include technical assistance and formative information for individual MMRS program communities, summative evaluations of the MMRS program communities visited, validation of the self-reported information from Stage 1 (which increases confidence in the data collected during that stage), and, finally, qualitative case study reports to stimulate discussion at both the community and the national levels. These in-depth case studies are chosen during Stage 2 to maximize their relevance to policy and program needs for information.

COMMITTEE CRITIQUE AND SUGGESTIONS FOR PROGRAM AMENDMENTS

The Phase I report suggested several activities or areas that might be useful additions to future MMRS program contracts with additional cities (Institute of Medicine, 2001). Among these are a preliminary assessment of the community’s strengths and weaknesses, as well as provisions for the use and management of volunteers; for the receipt and distribution of materials from the National Pharmaceutical Stockpile; for decision making related to evacuation and disease containment; for shelters for people fleeing an area of real or perceived contamination; for postevent follow-up on the health of responders and caregivers; and for postevent amelioration of anxiety in the community at large. The preparedness indicators provided in this report should also enable OEP to operationally define the “operational capability” that it demands as the capstone of its contracts. Despite these shortfalls, the committee has been favorably impressed by the MMRS program’s focus on empowering local communities, as opposed to creating yet another federal team to rush to a community at the time of an incident, and by the program’s flexibility in allowing each community to shape its system to its unique circumstances and requirements.
A carefully done evaluation program of the sort described in this report should make the MMRS program even better. Not only do resources now appear to be available for the continuing financial relationship suggested by the committee, but a consensus also seems to exist on the need for shared responsibility among a wide variety of governmental and nongovernmental agencies to achieve the goals of the MMRS program. When the committee began this project, the future success of the MMRS program depended on voluntary cooperative efforts to prepare for possible but seemingly improbable events. As the project concludes, the committee believes that OEP must be empowered to take a stance that fosters voluntary collaboration but must also be willing and able to enforce the integration of local, state, and federal services, a pressing societal need for coping with inevitable future acts of terrorism.
The importance of the MMRS program is no longer in question. The philosophy it has developed is an essential and rational approach, but one that can be truly successful only with a rigorous and continuing program of evaluation and improvement. The enhanced organization and cooperation demanded by a well-functioning MMRS program will permit a unified preparedness and public health system with immense potential for improved responses, not only to a wide spectrum of terrorist acts but also to mass-casualty incidents of all varieties.