The contrasting systems view of errors and error prevention is based on research findings from a variety of fields, including studies of accidents and safety breaches across many industries, studies of “high-reliability organizations,” and research into effective organizational and managerial practices. In all of this work, the interdependent interaction of the multiple human and nonhuman elements (equipment, technologies, policies, and procedures) of any effort to achieve a stated purpose is regarded as a “production process” or “system.” These interrelated human and nonhuman system elements must operate in synchrony if a given goal is to be achieved. As the elements of the production process or system change, the likelihood of error also changes. This research has revealed that errors typically result from problems within the system in which people work, not from poor individual worker performance, and typically originate in multiple areas within and external to an organization. Error results when these multiple problems converge and impair an organization’s performance (Perrow, 1984; Reason, 2000). Not surprisingly, errors are increasingly attributed to the hypercomplex organizations that emerged in the last half of the twentieth century in response to technological and social changes (Perrow, 1984).

A fundamental principle of the systems approach to error reduction is the recognition that all humans make mistakes and that “errors are to be expected, even in the best organizations” (Reason, 2000:768). To Err Is Human endorses the systems approach to understanding and reducing errors and notes that failures in large systems, such as hospitals or their various patient care units, nursing homes, or ambulatory practice sites, are most often due to unanticipated events or factors occurring within multiple parts of the system. In most cases, the accumulation of these factors, rather than the actions of a single individual, is what leads to an error or accident. In the example above, these multiple factors include the inexperience of the nurse; the lack of available supervision; the unavailability of the tools needed to perform the task; and the nurse’s possible perception that she lacked the authority to call attention to and change the unsafe situation by, for example, sending the patient to the OR without a catheter and directing OR staff to catheterize the patient. Addressing any one of these factors might have prevented the urinary tract infection. Blaming the individual nurse would change none of these factors and would not make catheterization any safer for the next patient on the nursing unit. As Reason notes, when an error occurs, the question should not be “Who is at fault?” but rather “Why did our defenses fail?” (Reason, 2000).

At the same time, even though errors are understood to be the result of multiple factors within a system, the human component of systems in all industries has been identified as one of the largest contributors to accidents. Reason explains that since people design, manufacture, operate, maintain, and manage complex technological systems, it is hardly surprising that human decisions and actions are implicated in accidents.


