This chapter focuses on areas of future research that can, in the long term, lead to care transformation and development of safer health IT. The literature shows that with well-designed software and appropriate staff training, health IT can have a positive effect on safety; outside of those conditions, health IT can negatively impact safety. However, the literature is far from complete. A research agenda is needed to help improve patient safety via information technology. The committee discovered a number of research gaps during its information gathering and identified four broad areas: safe design and development of technologies, safe implementation and use of technologies, considerations for researchers, and policy issues. Research is needed to continue to build the evidence to determine how to most effectively and safely adopt health IT. A greater body of conclusive research is needed to fully describe the potential of health IT for ensuring patient safety. This discussion is a starting point and not a comprehensive list.
Patient safety depends on the sound design and development of health IT. However, the optimal design or development is unknown and may indeed be impossible to determine. In light of this, research is needed to identify characteristics of safe systems. Some properties of health IT integral to patient safety require further research, including usability, interoperability, understanding the complexity of health care delivery, and the balance between standardization and customization.
Usability is one characteristic of health IT design and development requiring further research. Maximizing usability will ensure clinicians’ needs are taken into account in the design of the relevant human-computer interactions. A variety of industry standards may apply to health IT, but compliance with standards serves only as a weak screen for design deficiencies. Although general principles of usability are well described and some work around usability is currently under way in health IT, additional research is needed specifically about its impact on patient safety.
Another characteristic of design and development important to safety is interoperability. Interoperability can allow data to be shared readily, for example, between an electronic health record (EHR) and a pharmacy system without loss of semantic content. Interoperability will require harmonization of standards, such as how data can best be formatted and stored. Consistent rules governing transmission of data and use of common terminologies are being developed through health information exchanges; their success will need further inquiry.
Research is needed on interfaces that support the cooperative nature of medical care, which requires multiple health professionals working across multiple institutions. The exchange of information between users, collaborative decision making, and the support of complex safety-critical processes will be critical to ensuring health IT operates as expected in health care settings. Unlike some other domains where such research is conducted (such as nuclear power plant operations and aviation), medical applications have an additional complexity: health professionals treat multiple patients over the same time period and do not have the opportunity to land and finish one flight before having to think about the next one. Interfaces that support this “context switching” are essential, and not enough is known about them.
Also important to safe design and development of technologies is a better understanding of the tradeoffs between standardization and customization of health IT. While many users would like to modify health IT to fit their specific needs and health care environments, customization can make systems difficult to analyze. Customization can also prevent development of widespread solutions. On the other hand, health IT products that are too standardized may not appropriately fit into an organization’s workflow. A similar argument is considered in our policy discussion about the tension between regulation and innovation. Rigorous scientific evidence ought to serve as the basis for striking a balance between standardization and customization. Similarly, research is needed to address the mismatch between the assumptions of health IT designers and the actual clinical work environment.
Evaluations of how safely health technologies are implemented and used will help build safer systems. There is a need to build a larger body of evidence that identifies the most successful implementation methods as well as to study and measure actual use of health IT. An area of particular concern, also underrepresented in the literature, is the use of patient engagement tools.
To identify successful implementation methods, sharing of common experiences can help create guidance specific to the acquisition and initial implementation of health IT. For example, is the best method of implementing health IT to take a “big bang” approach where all divisions of an organization adopt a health IT product at the same time, or is it to roll out the product incrementally? Evidence on the best methods for backing up an EHR during unforeseen downtime, and on other types of contingency plans, would help reduce the risk of errors and thereby improve overall system safety.
Further investigations will also be needed into how health IT products are actually being introduced and integrated into clinical workflows. Currently, data on the impact of health IT on workflows are sparse and largely anecdotal. Examining disruption of workflow can reveal where health IT design poorly matches the incentives and demands clinicians encounter during work, generating knowledge about both the generic and the specific nature of problems. Obstacles to sharing experiences gained during implementation include providers being too busy to document what happened to them and the need for experiences from both large and small medical service organizations. Disseminating lessons learned may require additional resources from a public source. Specific measures of usability that apply across clinicians and settings would help speed adoption. Assessments made after clinical implementation of health IT can evaluate whether it is working as designed and whether adverse events are occurring. Detailed measures will be needed to assess the actual performance of any life-critical technology. For example, measures of how well a technology has been implemented in the clinical setting could monitor whether it is being used safely and is not inadvertently introducing risks into the clinical workflow. Exploring the safety consequences of work-as-designed compared with work-as-practiced at the front lines of care delivery is crucial. For example, the Adverse Event Reporting System has been of great value in understanding the practical risks of drug administration.
Another critically important area for research is the effective flow of information to both providers and patients. In an age where the average patient record weighs seven pounds, research is needed on summarization, saliency, and understanding to capture the nonlinear nature of the health care work environment (ACHE-NJ, 2009). Designing information presentation to minimize safety risks with minimum effort is still an unsolved problem. Information visualization in parts of clinical medicine is not as advanced as in other scientific disciplines.
Finally, use of health IT by patients needs to be evaluated. Patients are now engaging in their own care using an increasing number of diverse methods and tools, particularly Internet-based applications. Learning how patients interact with these tools and what they expect of their care will be critical to achieving high levels of patient-clinician interaction as health care enters an era of ubiquitous computing. Understanding the impact of sharing electronic records and the effect of patient partnerships in owning and interacting with data, for example in a personal health record, can help improve safety. At the same time, patients often do not understand instructions given by medical or nursing staff, and do not follow them. To achieve greater safety, mechanisms will need to be developed between clinicians and patients to assess and verify patient- and caregiver-entered data and to develop a shared understanding of how such data will be used. Interfaces that can help both patients and clinicians access and assess a patient’s health data will become increasingly important. Any unintended effects of patient engagement tools also ought to be studied. Patient engagement tools might reduce health disparities and improve the health of populations. On the other hand, they might cause individuals to misinterpret their own results or to fret about insignificant changes in test results. Researchers will need to be cognizant of the array of patient engagement tools and monitor their effects on patient safety.
Some research questions about health IT and patient safety are best suited for academic research; manufacturers and health care organizations are unlikely to examine areas such as evaluation methods, considerations specific to small practices and hospitals, and the impact of health IT on population health.
Limitations in the quality of the literature arise in part from poor availability of high-quality data and of adequately powered research methods. Study methods generally considered the gold standard in health care, such as randomized controlled trials, are often inappropriate for evaluations of health IT because they are unable to account for the many exogenous factors facing complex systems. Research should exploit the methods of other disciplines, such as those prevalent in the social sciences. This is critical to studying the safety of health care systems and is particularly relevant to studying sociotechnical systems.
Further understanding of the various sociotechnical domains discussed in Chapter 3 will be essential, especially in the areas where domains overlap.
Research on the influence of each domain on the quality and safety of care will be needed to identify system vulnerabilities and ways to address them. Any investigations about sociotechnical systems will require collaboration and learning from a wide collection of disciplines and industries, including systems engineering, human factors, IT, and health care, among others.
Studies of implementation need to be conducted in situ to account for the many factors that affect health IT products as they are actually used. Current research methods are limited and mostly test health IT products in vitro. Methods for in situ testing, which will become increasingly valuable, need to be developed.
It will also be important to examine niche health IT products that are being developed for medical specialties such as anesthesia information systems, radiology information systems, and perioperative management systems. These systems and their interactions within EHRs are not yet widely reported in the literature but carry potentially great implications for patient safety.
Research today largely studies what happens in large hospitals. Additional study is needed of care delivered by small practices, small hospitals, and providers in rural areas. Most U.S. medical care is provided by these smaller providers, who face special challenges related to staffing, workflow, and building a safety culture without depending on local IT expertise. Examples of research efforts specific to small providers include identifying what type of staffing model best supports patient safety, the characteristics of optimal workflow, and how to promote a culture of safety in these smaller organizations in the presence of health IT.
Population health is an area of great promise for health IT to improve patient safety and highlights the transformative potential of health IT. Preliminary experiences have found that the data generated by the use of health IT affect population health. Specific to patient safety, EHR data might be used to identify close calls and adverse events at the community and population levels. Beyond patient safety, trends in health IT-generated data can create a pool of data for future research. For example, such data can lead to recognition that specific medications have previously unknown risks or that widespread use of health IT can actually create larger disparities in care. While such studies were outside the committee’s scope, inquiries at the population level ought to be considered as areas for further research.
To facilitate research, more data will be needed. All users and vendors of EHR technology could make records available (in anonymized form) to researchers. These records would be most useful if sufficiently complete to support decision making for safety and to permit comparison of the risks and rewards of different strategies for design, implementation, and use. These data ought not to be used for either liability or disciplinary action.
The committee encountered a number of specific policy questions for which evidence was lacking, such as “Is health IT safe?” and “How safe is safe?” To inform better policy decisions, the effectiveness of regional extension centers, health information exchanges, and regional health information organizations needs to be measured.
The impact of oversight and regulation in the context of health IT and patient safety will require continual monitoring so that future policy decisions can be made on an informed basis. This is especially important given the complexity of health IT. The intended and unintended consequences of policy decisions targeting health IT may have significant ramifications for the safety of care. For example, monetary incentives that encourage speed of installation above all else may cause inadequate and risky systems to be used. On the other hand, a monetary incentive for usability standards might produce safer patient care.
Another area for research is how best to achieve the maximum positive impact of health IT on safety. A better understanding of unintended consequences will help determine how to balance research investments between eliminating health IT-introduced errors and perfecting and broadly disseminating the features of health IT that lead to the greatest improvements in safety.
Understanding both the positive and negative unintended consequences will be critical to developing stronger, more effective policies. A summary of findings related to health IT policies ought to become part of an annual report submitted to the Secretary of Health and Human Services (HHS) on the safety of EHRs, EHR systems, and health IT capabilities in general.
The value proposition for health IT is beyond our scope, but it is poorly developed in the current literature. Costs of implementing and maintaining health IT can be extremely high and can deter adoption of technologies. On the other hand, health IT has been considered a tool with the potential to reduce health care costs in the long term. Clear evidence supporting one argument over the other does not yet exist, and the lack of evidence is troubling for a technology that is so expensive and so heavily privatized.
While many of the above suggested research areas are not necessarily limited to health IT and can also apply to the paper-based world, applying this research to health IT is needed because of the widespread presence of health IT products in health care delivery. More research can foster more rapid improvements in patient safety (examples of future research ideas are shown in Box 7-1).
HHS should support a research program to study patient safety and the use of information technology with the goal of addressing the issues raised throughout this report. This research program should be carefully developed to ensure scientific rigor and thoughtful inquiry into the complex relationship between patient safety and the use of health IT. Within the department, a number of agencies such as the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services, and the National Institutes of Health (through the National Library of Medicine) fund research on health IT and informatics. HHS could consider using demonstration projects to answer questions about the contribution of health IT to patient safety or using the Practice-based Research Networks to develop research and data about health IT implementation and use in primary care facilities. These should be part of a sustained, ongoing research program with substantial support for basic and applied research. It should be of a magnitude appropriate for such a large effort; comparable high-technology industries often spend 10 percent of their yearly revenue on research and development (NSF, 2011).
Many industries contribute to the research on improving technology safety and are supported by the government. In an effort to create a shared learning environment, a future research program should combine efforts from a cross-disciplinary set of organizations. For example, many state-based programs are driving innovation in health IT and should be leveraged. Additionally, agencies such as the Department of Defense, the Department of Energy, the Department of Veterans Affairs, the National Institute of Standards and Technology, and the National Science Foundation all support and/or conduct research in improving the safety of technology. These agencies are leaders in the area of technology safety research and their expertise should be leveraged for the development of safe health IT.
Recommendation 10: HHS, in collaboration with other research groups, should support cross-disciplinary research toward the use of health IT as part of a learning health care system. Products of this research should be used to inform the design, testing, and use of health IT. Specific areas of research include
a. User-centered design and human factors applied to health IT;
b. Safe implementation and use of health IT by all users;
c. Sociotechnical systems associated with health IT; and
d. Impact of policy decisions on health IT use in clinical practice.
Examples of Productive Areas for Further Research
The work organization problem: Health IT typically focuses on individual patient details with little support for actual clinical work. How can IT be designed to better support the clinical work activities of health professionals? For example, can IT be used to track and schedule work tasks for clinicians and triage these as emergencies and delays accumulate through the day? Clinical work involves many levels of interruptions; how can health IT be designed to support clinicians in resuming interrupted work and in switching contexts to deal with an interruption? What sorts of status displays or other methods can help clinicians “see” the state of their work and recognize changing priorities and opportunities?
The information structure problem: Health IT designs usually do not reflect clinical associations when organizing and presenting data. Related medications, vital signs, and laboratory studies are routinely presented separately rather than in relation to each other. For example, hypertension appears separately from the current blood pressure and current or past medications, requiring the clinician to track data across various screens in order to synthesize an understanding of a patient’s high blood pressure and its treatment. How can health IT be used to create meaningful representations of clinical data and knowledge?
The pick-list problem: Reports of wrong-patient and wrong-drug problems with health IT commonly arise from the pick-list problem. Health IT designs require practitioners to select single items from sometimes very long pick lists or menu lists, often containing similar terms presented in alphabetical order. There are lists of patients, lists of tests, lists of drugs, lists of results, and others. Health professionals struggle to find the desired entry in such lists and often select the wrong item, sometimes discovering this only much later. How can lists be presented so that their order and appearance make it easy to know what choices are available and easier to select the desired item?
The alarm/alert problem: Health professionals are drowning in data overload, and the current alarms and alerts within health IT often add to the problem. The “alarm problem” is generic and found across health IT and clinical practice. Each alert can be justified in isolation, but in combination these alerts can become a distraction. How can the use of alerts be managed at the system level so that clinicians receive useful alerts? For example, can the boundaries that trigger alerts be represented while orders are being entered so that clinicians do not have to “click through” multiple alerts after order entry? Can health IT track the presentation of alerts to specific clinicians so that alerts appear when medicines or conditions new to that clinician appear?
The cooperative work problem: Health IT typically treats activities as belonging to individual clinicians and as being accomplished serially, but clinicians often work in tandem or in small groups and communicate with each other about goals and task details. How can health IT be designed and configured to assist cooperative work?
The accountability and reimbursement problem: Health IT often incorporates features that serve accounting and reimbursement functions. Large parts of the clinical record are being generated to conform to billing requirements or to provide a stream of accountability information for later review. These functions are valuable but do not directly aid the clinical process and can make clinical care more difficult by demanding attention and hiding meaningful data with bureaucratic camouflage. What are the consequences for clinical care of including all these functions in health IT designs? Can health IT be configured to encourage recording of high-quality clinical observations rather than just the accumulation of clinically meaningless filler?
The availability problem: The benefits of health IT are often touted by vendors and chief information officers, but outages are nearly always accompanied by statements that “no patient was harmed” by the computer breakdown. These characterizations seem to conflict. What is the real impact of system outages? How often do they occur? How can their effects be determined?
The interoperability at the user level problem: Each health IT vendor has its own “look and feel,” and individual implementations are customized so that each facility has unique features. Many health professionals work in more than one facility and encounter these different products on a regular basis. Is it possible to make health IT interoperable at the user level so that clinicians moving from one facility to another do not have to learn a new way of doing things each time? Can systems be designed so that clinician profiles developed in one system can be used in another? What are the consequences of having every implementation be different from every other implementation?
ACHE-NJ (American College of Healthcare Executives of New Jersey). 2009. EMRgency medicine: The impact of EMR/EHR on healthcare, Keynotes and expert panel discussion. http://www.slideshare.net/sdorfman/emrgemcy-medicine-the-impact-of-emrehr-on-healthcare-keynotes-and-expert-panel-discussion-121009-in-nj (accessed July 9, 2011).
NSF (National Science Foundation). 2011. Research and development in industry: 2006-2007. http://www.nsf.gov/statistics/nsf11301/pdf/nsf11301.pdf (accessed July 25, 2011).