to improve medication safety. But the generalizability of the literature across the health care system may be limited: while some studies suggest that improvements in patient safety can be made, others have found no effect. Instances of health IT-associated harm have been reported, but little published evidence quantifies the magnitude of the risk.

Health IT-related safety data are lacking for several reasons, including the absence of measures and of a central repository (or linkages among decentralized repositories) to collect, analyze, and act on information related to the safety of this technology. Another impediment to gathering safety data is contractual barriers (e.g., nondisclosure and confidentiality clauses) that can prevent users from sharing information about health IT-related adverse events. These barriers limit users' ability to share knowledge of risk-prone user interfaces, for instance through screenshots and descriptions of potentially unsafe processes. In addition, some vendors include language in their sales contracts that shields them from responsibility for errors or defects in their software (i.e., "hold-harmless" clauses). The committee believes these types of contractual restrictions limit transparency and contribute significantly to the gaps in knowledge of health IT-related patient safety risks. Together, these barriers to generating evidence pose unacceptable risks to safety.


Software-related safety issues are often ascribed to software coding errors or to human errors in using the software. It is rarely that simple. Many problems with health IT relate to usability, implementation, and how the software fits with clinical workflow. Focusing on coding or human errors often leads to neglect of other factors (e.g., usability, workflow, interoperability) that may increase the likelihood that a patient safety event will occur. Furthermore, software such as an EHR is neither inherently safe nor unsafe, because the safety of health IT cannot exist in isolation from its context of use. Safety is an emergent property of a larger system that takes into account not just the software but also how it is used by clinicians.

The larger system—often called a sociotechnical system—includes technology (e.g., software, hardware), people (e.g., clinicians, patients), processes (e.g., workflow), organization (e.g., capacity, decisions about how health IT is applied, incentives), and the external environment (e.g., regulations, public opinion). Adopting a sociotechnical perspective acknowledges that safety emerges from the interaction among various factors. Comprehensive safety analyses consider these factors taken as a whole and how they affect each other in an attempt to reduce the likelihood of an adverse event, rather than focusing on eliminating one “root cause” and ignoring other possible contributing factors.

Copyright © National Academy of Sciences. All rights reserved.