Nevertheless, complex systems often operate for extended periods of time without displaying catastrophic system-level failures. This is the result of good design as well as adaptation and intervention by those who operate and use the system on a routine basis. Extended failure-free operation is partially predicated on the ability of a complex system to adapt to unanticipated combinations of small failures (i.e., failures at the component level) and to prevent larger failures from occurring (i.e., failure of the entire system to perform its intended function). This adaptive capability is a product both of adherence to sound design principles and of skilled human operators who can react to avert system-level failure. In many cases, adaptations require human operators to select a well-practiced routine from a set of known and available responses. In other cases, adaptations require human operators to create on-the-fly novel combinations of known responses, or de novo approaches, to avert failure.
System safety is predicated on the affordances available to humans to monitor, evaluate, anticipate, and react to threats, as well as on the capabilities of the individuals themselves. It should be noted that human operators (both individually and collectively as part of a team) serve in two roles: (1) as causes of and defenders against failure and (2) as producers of output (e.g., health care, power, transportation services). Before a system-wide failure, organizations that do not have a strong safety- and reliability-based culture tend to focus on the operators' role as producers of output. However, when these same organizations investigate system-wide failures, they tend to focus on the operators' role as defenders against failure. In practice, human operators must serve both roles simultaneously and thus must find ways to appropriately balance them. For example, human operators may take actions to reduce exposure to the consequences of component-level failures, they may concentrate resources where they are most likely to be needed, and they may develop contingency plans for handling expected and unexpected failures.
When system-level failure does occur, it is almost always because the system lacked the capability to anticipate and adapt to unforeseen combinations of component failures, and because it could not detect unforeseen adverse events early enough to mitigate their impact. By most measures, systems involving health IT are complex systems.
One fundamental reason for the complexity of systems involving health IT is that modern medicine is increasingly dependent on information: patient-specific information (e.g., symptoms, genomic information), general biomedical knowledge (e.g., diagnoses, treatments), and information related to an increasingly complex delivery system (e.g., rules, regulations, policies). The information of modern medicine is both large in volume and highly heterogeneous.