Difficulties in Synthesizing the Evidence on Incident Rates
Since the publication of To Err Is Human: Building a Safer Health System (IOM, 2000), contributions to the field of patient safety have grown rapidly. As with any emerging discipline, synthesizing the results of this research is challenging because studies vary in both their definitions of error and the methods they use to identify errors.
Significant confusion exists about the most fundamental issue in quantifying medication errors: what counts as an error. One broad definition of a medication error is any inappropriate use of a drug, regardless of whether that use resulted in harm (Nebeker et al., 2004). Other definitions include only medication errors that have the potential to produce harm, or “clinically significant medication errors” (Lesar et al., 1997). Under the latter definitions, a medication error that could never be executed, such as a prescription to give orally a medication that comes only in parenteral form, would be excluded. As discussed previously, medication use also involves multiple stages: selection and procurement of the drug by the pharmacy, prescribing and selection of the drug for the patient, preparation and dispensing of the drug, administration of the drug, and monitoring of the patient for effect. Many studies have focused on errors occurring during only one of these stages.
Contributing to the heterogeneity of the patient safety literature are the varying methodologies used to identify errors. The incidence rates found in the literature depend dramatically on the particular detection method used. Although many such methods exist, those most commonly employed include direct observation, chart review, computerized monitoring, and voluntary reporting (Murff et al., 2003; see Chapter 5 for more detail). Many studies have established that voluntary reporting results in marked underestimation of rates of medication errors and ADEs (Allan and Barker, 1990; Cullen et al., 1995; Jha et al., 1998; Flynn et al., 2002). Voluntary reporting rates are generally low because of such factors as time pressures, fear of punishment, and lack of a perceived benefit (Cullen et al., 1995). Improvements in internal reporting have been achieved in nonpunitive reporting environments (Rozich and Resar, 2001), but these rates still vastly underestimate the true incidence.
A large study comparing direct observation, chart review, and incident reporting found that direct observation identified the greatest number of errors (Flynn et al., 2002). Earlier work had established that automated surveillance could detect ADEs at a much higher rate than voluntary reporting. A comparison of automated surveillance, chart review, and voluntary reporting found that of the 617 ADEs detected, chart review identified 65 percent, automated surveillance 45 percent, and voluntary reporting 4 percent (Jha et al., 1998); the percentages sum to more than 100 because some ADEs were detected by more than one method. Even so, only 12 percent of all ADEs detected were identified by both chart review and computerized surveillance (Jha et al., 1998), indicating that the two methods captured largely different sets of events.
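The overlap reported by Jha et al. (1998) can be made concrete with a little arithmetic. The sketch below applies the reported percentages to the 617 detected ADEs (the resulting per-method counts are rounded and illustrative, not figures from the original study):

```python
# Illustrative arithmetic for the Jha et al. (1998) comparison:
# 617 ADEs in total, with the share detected by each method as reported.
total_ades = 617

detected = {
    "chart_review": round(0.65 * total_ades),               # ~401 ADEs
    "computerized_surveillance": round(0.45 * total_ades),  # ~278 ADEs
    "voluntary_reporting": round(0.04 * total_ades),        # ~25 ADEs
}

# Only 12 percent were identified by BOTH chart review and
# computerized surveillance.
both_chart_and_computer = round(0.12 * total_ades)          # ~74 ADEs

# Inclusion-exclusion for the two main methods: their union covers
# 65% + 45% - 12% = 98% of detected ADEs, so the methods are
# largely complementary rather than redundant.
union_pct = 0.65 + 0.45 - 0.12
print(f"Chart review or computerized surveillance: {union_pct:.0%}")
```

The small overlap is the key point: no single method, used alone, would have found most of the events.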
Several studies have noted that different methods of detection appear more suited to identifying different types of medication-related problems (O’Neil et al., 1993; Jha et al., 1998), suggesting that the method selected should depend on the area of interest (again, see Chapter 5 for more detail). In short, the incidence rates reported in the patient safety literature are largely a function of the detection method employed.
A further confounding factor is that medication error rates are reported using varying denominators: errors per order, per dose, or per opportunity; errors per 1,000 patient-days; and errors per 1,000 patient admissions. Rates of preventable ADEs are cited in a similar manner, as preventable ADEs per 1,000 patient-days and per 1,000 patient admissions.
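The choice of denominator alone can make the same underlying event count look very different. A minimal sketch, using entirely hypothetical figures (not drawn from any study cited here), assuming a hospital with an average length of stay of five days:

```python
# Hypothetical figures for illustration only: 50 preventable ADEs
# observed over a period covering 10,000 patient-days and
# 2,000 admissions (an average stay of 5 days).
ades = 50
patient_days = 10_000
admissions = 2_000

rate_per_1000_patient_days = ades / patient_days * 1000  # 5.0
rate_per_1000_admissions = ades / admissions * 1000      # 25.0

# The same 50 events appear five times "larger" when expressed per
# 1,000 admissions, because each admission contributes (on average)
# 5 patient-days to the denominator.
avg_length_of_stay = patient_days / admissions           # 5.0 days
assert rate_per_1000_admissions == rate_per_1000_patient_days * avg_length_of_stay
```

This is why rates quoted with different denominators cannot be compared directly without knowing, or assuming, the average length of stay.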