Patient Safety: Achieving a New Standard for Care
reliable comparisons. In many circumstances, current clinical data systems and risk adjustment strategies are technically incapable of meeting those reasonable expectations.
There are many potential sources of variation in measured health outcomes (see Box 8-1), and risk adjustment methods can account only for differences in known patient factors. For example, Eddy estimates that all major factors proven to explain infant mortality rates (race, maternal alcohol consumption, maternal tobacco smoke exposure, altitude, and differences in prenatal care delivery performance) account for only about 25 percent of documented variation in patient outcomes (Eddy, 2002). The remaining 75 percent of the variation lies beyond the reach of risk adjustment strategies.
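The 25 percent figure can be read as an explained-variance share. A minimal sketch of that arithmetic, using simulated data in which measured patient factors carry one quarter of the total outcome variance (all numbers here are invented for illustration, not drawn from the Eddy analysis):

```python
import random
import statistics

random.seed(42)

# Hypothetical simulation: an outcome driven partly by a measured risk
# factor and partly by unmeasured sources of variation.  The variance
# split (1 : 3) mirrors the ~25% / ~75% shares cited in the text.
n = 10_000
measured = [random.gauss(0, 1.0) for _ in range(n)]       # known patient factors
unmeasured = [random.gauss(0, 1.732) for _ in range(n)]   # everything risk adjustment cannot see
outcome = [m + u for m, u in zip(measured, unmeasured)]

# Share of outcome variance attributable to the measured factor:
share = statistics.variance(measured) / statistics.variance(outcome)
print(f"variance explained by known factors: {share:.0%}")
```

Even a perfect risk adjustment model built on the measured factor could remove only that share; the rest of the variation would still be indistinguishable from differences in clinical performance.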
Geographic aggregation (e.g., variation in hospital programs and local referral patterns) can also play a defining role. A recent evaluation of hospital quality outcome measures found that most produced statistically reliable results when aggregated to the level of a metropolitan area, state, or multistate region, but only a few measures produced valid results at the level of individual hospitals (Bernard et al., 2003). The fact that an outcome mea-
BOX 8-1 Possible Sources of Variation in Measured Outcomes

- Differences in clinical performance
- Differences in individual patients
  - Physiologic/anatomic disease expression and response (severity)
  - Comorbid illnesses (complexity)
  - Patient values, preferences, and resources
- Differences in the structure of the care delivery system (data aggregation)
  - Unreliable attribution of performance among professionals and organizations within complex care delivery collaborations
  - Risk-associated referral patterns, undetected by individual patient measures
- Differences in measurement (data collection and analysis)
  - Completeness of data collection (extraction of manual or electronic data)
    - Level of clinical assessment (e.g., was the test performed and recorded?)
    - Field finding (e.g., was the clinical result extracted?)
  - Accuracy of data collection
    - Consistent and complete field definitions
    - Accuracy of data entry
    - Pertinent details of data collection (e.g., administrative vs. clinical system)
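The aggregation effect noted above, in which hospital-level outcome measures are statistically unreliable while metropolitan, state, or regional aggregates are not, is largely a matter of case volume. A minimal sketch, with an invented 2 percent adverse-event rate and invented case volumes:

```python
import math

# Hypothetical illustration of why a rate can be unreliable for one
# hospital yet reliable for a region: the confidence interval around an
# observed rate narrows as the number of cases grows.
rate = 0.02  # assumed true adverse-event rate (illustrative)

def ci_halfwidth(n: int) -> float:
    """95% normal-approximation confidence half-width for a rate over n cases."""
    return 1.96 * math.sqrt(rate * (1 - rate) / n)

for label, n in [("single hospital", 500),
                 ("metropolitan area", 20_000),
                 ("state", 400_000)]:
    print(f"{label:18} n={n:>7}: rate = {rate:.1%} +/- {ci_halfwidth(n):.2%}")
```

At the hospital volume sketched here the uncertainty is larger than half the rate itself, so ranking hospitals on the raw rate would mostly rank noise; at the state volume the same measure discriminates well.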