that all hospitals, regardless of mortality rate, shared an extremely negative view of the accuracy, usefulness, and interpretability of the data (Berwick and Wald, 1990).
Many of the data used in outcome analyses are generated by the individuals and institutions who are the focus of the evaluation. In such circumstances, even among conscientious, honest observers, the sentinel effect can significantly alter the data that are recorded and affect comparative outcome rankings. Some such data manipulation may be neither unconscious nor honest: when confronted with outcome measurement for accountability, it is often far easier to look good by gaming the data system than to be good by managing and improving clinical processes.
Anecdotal accounts of “filtering the data” are common in health care. For example, a hospital in the western United States was found to be a high mortality outlier for acute myocardial infarction (AMI) on an early HCFA mortality report. Upon internal review, the hospital discovered that almost all AMI patients were being coded as admissions from their community-based physician’s office, even though many had come through the hospital’s emergency department. After realizing that source of admission was an important element in the HCFA risk adjustment model, the hospital began coding all AMI patients as having entered the hospital through the emergency department. By the following year, the hospital had gone from being a high mortality outlier to being a low mortality outlier on the HCFA AMI mortality report without any change in clinical care (James, 1988).
The extent of systematic data manipulation in health care is not known. However, in one study 39 percent of physicians reported falsifying insurance records to obtain payment for care they believed was necessary even though it was not covered by the patient’s policy (Wynia et al., 2000). There is also evidence that voluntary injury detection systems underreport events, although this may be attributable in part to the burden of reporting (Evans et al., 1998).
Industries outside of health care have repeatedly demonstrated that a safe reporting environment is critical to robust failure detection and that robust failure detection is essential to the design of safe systems that significantly reduce failure rates (Institute of Medicine, 2000). Such experience suggests that, whenever possible, accountability for patient safety should focus at the level of an organization rather than the level of individual health professionals working within the organization. Unfortunately, most health