Preparing for Terrorism: Tools for Evaluating the Metropolitan Medical Response System Program
The primary products of this report clearly answer question (a):
a questionnaire survey on program management, to be answered by OEP’s primary point of contact in each MMRS community;
a list of essential capabilities for effective response to chemical, biological, and radiological (CBR) terrorism, with associated preparedness indicators; and
a three-element evaluation procedure designed to measure program success. The three elements—review of written documents and data, a site visit by a team of peer reviewers, and observations at exercises and drills—are complementary means of analyzing the community’s response capabilities.
The answer to question (b), on the appropriate sample size with which the impact of the MMRS program can be gauged, is also clear, but no doubt less satisfying. As noted elsewhere in the report, in the absence of any proper control cities or pre-MMRS program data, it will be impossible to unequivocally assign credit to OEP for high states of preparedness. Most of the larger cities have received training and equipment from the U.S. Department of Defense or the U.S. Department of Justice, some have received grants and training from the Centers for Disease Control and Prevention, and all have spent time and money from state and local budgets. The MMRS program’s emphasis on multiagency, multijurisdictional planning has undoubtedly played a major role in increasing preparedness in many cities, but no large city could become well prepared solely as a result of the relatively meager funding provided by the OEP contracts. Technically, then, there is no sample size that will allow valid generalization about the impact of the MMRS program.
Given this answer to question (b), the answer to question (c), on just what conclusions OEP can draw from the use of the committee’s suggested evaluation tools, becomes very important, and it is embodied in what were called “guiding principles” in Chapter 8. The first of these is that the committee believes that program assessment is primarily for the purpose of identifying and correcting shortfalls in OEP’s MMRS program. At the community level, evaluation is an exercise designed to guide the distribution of local, state, and federal resources. This evaluation should be valued and understood as an opportunity for local communities to determine the areas in need of improvement and support, rather than as a test of communities’ self-reliance. In fact, the committee believes that few if any communities would receive high grades on all essential capabilities if the recommended evaluation program began tomorrow.
A second and equally important principle holds that evaluation should be part of a continuous learning and continuous quality improve-