The idea behind evidence-based health care is the transfer of research results into practice. This means creating operational tools, procedures, and preappraisals of published results to enable practitioners to apply evidence to practice with confidence. But no matter how well we manage evidence or how compelling the evidence is, it has to fit into a framework; it is only one part of the decision process. Machines are not going to control medical care because the evidence has to fit the clinical circumstances of an individual. Best evidence must complement decision making, which must take into account a number of issues, including how severely a disease affects an individual, other diseases competing for the individual’s body space, allergies, financial constraints, and so on. Evidence-enhanced health care is perhaps a better term than evidence-based health care. We are not trying to replace the medical care process. We are trying to improve it by providing better access to evidence from research.
What are the barriers to the implementation and acceptance of evidence-based health care? First, our current standards for research claims are very loose, and false messages abound—messages such as “this will help you,” “this will be better for you,” and so on. These messages come from many sources, and their validity has not always been tested.
Although our resources for synthesizing evidence are generally inadequate, there is one worldwide organization, the Cochrane Collaboration (2002), that attempts to synthesize evidence from research. Because no single study can tell us very much about the value of an innovation, we need the results of several investigations to get a clear picture of an innovation’s effectiveness. Therefore, we should invest in synthesis processes to ensure that there is a hard-wired link between evidence and the final picture.
Support for practitioners and patients is also inadequate, which makes it difficult to provide the best evidence when and where it is needed. Consequently, procedures are sometimes done on the very patients who benefit least and are not done on the patients who would benefit most. How can we address the gap between research and practice? First and foremost, the evidence must be clearly understood and assessed so we can develop clinical policies based on the strengths and limitations of the evidence and the settings in which that evidence is going to be applied.
The extraction and synthesis of evidence is one aspect of the process at which we have been successful. Core medical journals put a massive number of published articles through a double filter: (1) the scientific validity of the research; and (2) the contribution of the research to practice. An article that meets both criteria may appear in one of the core journals of internal medicine (e.g., New England Journal of Medicine, Annals of Internal Medicine, Journal of the American Medical Association). If one concentrates on these core journals, the number of published articles to be read is somewhat reduced. Nevertheless, a practitioner trying to keep up to date by reading the medical literature hasn’t got a chance.
We can now make this mass of information much more tractable by centralizing the evidence-sorting process. The Evidence-Based Medicine Review (EBMR) Service on OVID, for example, provides integrated access to original and reviewed research evidence. Let’s say you find a clinical trial on Medline through EBMR and it has been included in a systematic review by the Cochrane Collaboration. You will then be routed right to that review so you can see all of the other studies on the same topic and how they play out when you put them together.
The next step is to develop clinical policies based on evidence. This is not simply a matter of taking the evidence as it stands and applying it. Practitioners must first determine how the evidence applies in their own settings. At a few institutions individual practice groups regularly sit down to evaluate evidence systematically, but this is still rare. The Hong Kong Hospital Authority runs about 65 hospitals in the Hong Kong region. Doctors from these hospitals who treat stroke, cancer, and heart disease get together regularly to examine the evidence. They then put together medical bulletins on evidence-based approaches to controversial areas of clinical practice, titled Evidence, which they circulate in print form and post on their institution’s website (available online at: <http://www.ha.org.hk/hesd/nsapi/>).
Now I’d like to comment on the status of continuing professional development. Doctors, and I think most other health professionals, prefer, and are most often offered, continuing professional development in ways that are not very effective—lectures, for example. More effective methods, such as preceptorships, in which a practicing physician returns to an educational institution for a period of supervised training, are expensive and time-consuming. In the future, practitioners will have to spend more time using hands-on models. The system will also need ongoing performance reports as feedback so a practitioner’s performance can be compared with the performance of his or her colleagues or against quality standards. Many practitioners are resistant to ongoing training, however, because they are reluctant to have anyone oversee their work. In addition, they do not want to spend unpaid time away from their practices, let alone pay for continuing education.
I’ll leave you with my wish list:
Scientists should be looking for treatments that cure. Unfortunately, the people who make money on illnesses will not fund the search for cures, so we must find different financing mechanisms.
We must develop centralized evidence processing. Despite the volume of research, we can have one central, high-quality, evidence-processing source that examines all of the evidence and evaluates it in terms of certain quality criteria. The next step would be to determine which evidence is relevant to particular practice groups and deliver it to them.
We must refine computerized decision-support systems and information services. We need a valid code that alerts us to the quality and currency of evidence on the Internet.
We must develop information-retrieval systems that are both sensitive and precise.
We must apply human-factors engineering to reduce errors.
We must develop decision-support systems that integrate clinical data with current, evidence-based, best-practice information and that provide information on when and why it may be appropriate to deviate from best practices.
We must develop learning systems for busy practitioners that provide them (and the system) with feedback on their performance.
Cochrane Collaboration. 2002. Online Library of Databases. Available online at: http://www.cochrane.org/cochrane/cc-broch.htm.
The Context of Care and the Patient Care Team: The Safety Attitudes Questionnaire
J. Bryan Sexton and Eric J. Thomas
University of Texas Center of Excellence for Patient Safety Research and Practice
The Johns Hopkins University School of Medicine
In the words of psychologist John Lauber, a former member of the National Transportation Safety Board, “Human performance doesn’t take place in a vacuum, it takes place in an environment engendered and maintained by management, government, and frontline personnel” (Lauber, 1995). Taking context into consideration is critical for understanding the complexities of human performance. Our task as climate researchers in quality of care is to identify (with methodological rigor) the systems and cultural influences that affect the safe delivery of care.
In the wake of recent reports from the Institute of Medicine and National Health Service, interest in patient safety research has grown substantially (IOM, 1999; Department of Health, 2000). Experience in other safety-critical industries suggests that measuring attitudes toward teamwork and the overall context of work is an important step in improving safety (Maurino et al., 1995; Reason, 1997). In health care, quality of care must also be investigated within the framework of the systems and contextual factors that provide the environments in which errors and adverse events occur (Cook and Woods, 1994; Leape, 1994; Reason, 1995; Vincent et al., 1998). For example, Charles Vincent and his colleagues identify several factors that influence clinical practice: organizational factors (e.g., safety climate and morale), work environment factors (e.g., staffing levels and managerial support), team factors (e.g., teamwork and supervision), and staff factors (e.g., overconfidence) (Vincent et al., 1998). These factors are believed to influence the safe delivery of care, but to date, the attitudes of caregivers about these key factors remain largely unexplored (Pronovost et al., 2001; Vella et al., 2000).
Influential organizations in health care agree that caregivers’ attitudes about these issues should be examined. Research agencies (Agency for Healthcare Research and Quality, National Patient Safety Foundation, and National Patient Safety Agency), regulators (Joint Commission on Accreditation of Healthcare Organizations [JCAHO]), health maintenance organizations (e.g., Kaiser Permanente), professional organizations (e.g., American Hospital Association), and quality improvement experts (e.g., Institute for Healthcare Improvement) are encouraging the measurement of caregiver attitudes about the context of work. Despite this interest, there is no commonly used metric to measure these attitudes. The lack of a common metric led our research team at the University of Texas Center of Excellence for Patient Safety Research and Practice to develop and validate a tool that can be used across different types of clinical areas, different types of health care providers, and in different national cultures.
THE SAFETY ATTITUDES QUESTIONNAIRE
The Safety Attitudes Questionnaire (SAQ) is a refinement of the Intensive Care Unit Management Attitudes Questionnaire (Sexton et al., 2000; Thomas et al., 2003), which was derived from a questionnaire widely used in commercial aviation, the Flight Management Attitudes Questionnaire (FMAQ) (Helmreich et al., 1993; Merritt, 1996). The SAQ differs from other medical attitudinal surveys (Shortell et al., 1991) in that it maintains continuity with its predecessor (FMAQ), a traditional human factors survey with a 20-year history (Gregorich et al., 1990; Helmreich, 1984). Preserving this continuity allows for comparisons between professions and assists with the search for universal human factors issues. There is a 25 percent overlap in item content between the SAQ and the FMAQ. The new (non-overlapping) SAQ items were generated by focus groups of health care providers, literature review, and roundtable discussions with subject matter experts. More than 100 items were initially generated, but the number was reduced through pilot testing. The SAQ has been adapted for use in intensive care units (ICUs), operating rooms (ORs), general inpatient settings (medical wards, surgical wards), ambulatory clinics, pharmacies, and labor and delivery units. All versions of the SAQ have the same item content, with minor modifications to reflect the clinical area. For example, “In this ICU, it is difficult to discuss mistakes” would be changed to “In the ORs here, it is difficult to discuss mistakes.”
The SAQ elicits caregiver attitudes through six factor-analytically derived scales: teamwork climate, job satisfaction, perceptions of management, safety climate, working conditions, and stress recognition. These six scales are based on prior research in the aviation industry and in medicine (Helmreich and Merritt, 1998; Sexton, 2002; Sexton and Klinect, 2001; Sexton et al., 2000; Thomas et al., 2003). The SAQ is a single-page (double-sided) questionnaire with 60 items plus demographic information (age, sex, experience, and nationality). The questionnaire takes approximately 10 to 15 minutes to complete. Each of the 60 items is answered on a five-point Likert scale (Disagree Strongly, Disagree Slightly, Neutral, Agree Slightly, Agree Strongly).
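Scoring an instrument of this kind can be sketched in a few lines. The conventions below — rescaling the 1–5 Likert responses to a 0–100 scale, averaging a scale's items, and counting a respondent as "positive" at or above 75 — are assumptions for illustration; the text does not specify the SAQ's actual scoring rule.

```python
# Sketch of Likert-scale scoring for an SAQ-style survey.
# Assumed conventions (not from the text): responses 1-5 are rescaled to
# 0-100, a scale score is the mean of its items, and a respondent counts
# as "positive" when the scale score reaches a 75-point threshold.

def scale_score(responses):
    """Mean of 1-5 Likert responses for one scale, rescaled to 0-100."""
    rescaled = [(r - 1) * 25 for r in responses]
    return sum(rescaled) / len(rescaled)

def percent_positive(respondents, threshold=75.0):
    """Share of respondents whose scale score meets the threshold."""
    positive = sum(1 for resp in respondents if scale_score(resp) >= threshold)
    return 100.0 * positive / len(respondents)

# Example: three respondents answering a hypothetical 4-item
# teamwork-climate scale in one clinical area.
unit = [[5, 5, 4, 5], [2, 3, 2, 1], [4, 4, 5, 4]]
print(percent_positive(unit))
```

A "percent positive" figure of this kind, computed per clinical area, is one way to produce the organization-by-organization bars discussed below.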
To date, we have administered the survey in more than 300 organizations in the United States, the United Kingdom, and New Zealand. Our rule of thumb is that all personnel in a clinical area who influence, or are influenced by, the working environment in that area are invited to participate (e.g., attending/staff physicians, resident physicians, registered nurses, charge nurses, pharmacists, respiratory therapists, technicians, ward clerks, and others). Participation is voluntary, and administration techniques have included hand delivery, distribution at meetings, and in-house mailings.
The SAQ is a psychometrically valid instrument for assessing the safety-related attitudes and perceptions of frontline health care providers. The SAQ factor structure was replicated in ICUs, ORs, ambulatory clinics, and inpatient settings, as well as in three national cultures.
The SAQ results reported here demonstrate the substantial variability in teamwork climate and safety climate across 50 organizations (Figures 1 and 2). Each bar represents the percentage of respondents who reported positive attitudes in each of 50 organizations.
In Figure 1, the right side of the distribution corresponds to organizations with a positive teamwork climate. These organizations are information rich, with good collaboration, effective conflict resolution, and decision making based on input from the team. The left side represents organizations with a negative teamwork climate. These organizations are information poor; the quality of collaboration is abysmal; nurses do not feel comfortable speaking up if they perceive a problem with patient care; conflicts often go unresolved; and decision making does not integrate input from the team. Organizations on the left have problems with turnover and absenteeism, whereas organizations on the right enjoy high levels of retention, good participation, and better working conditions.
In Figure 2, the right side of the distribution shows organizations with a positive safety climate. These organizations have a proactive, rather than reactive, patient-safety posture. Individuals are encouraged to report safety concerns; medical errors are handled appropriately; rules and guidelines are followed; and it is easy to learn from the mistakes of others.
It is noteworthy that the answers of senior leadership were substantially more positive than the answers of health care providers working at the front line. In fact, senior leadership was four times as positive about teamwork climate as front line personnel and two-and-a-half times as positive about safety climate.
We have established a large archive of SAQ administrations to use as benchmarks for comparisons in future research. We hope the SAQ can be used to meet some of the demand for survey assessments of climate and culture in medicine.
The SAQ was designed for organizational diagnoses and interventions relevant to patient safety. Hospitals, federal regulators, quality improvement organizations, and JCAHO could use the SAQ as an economical and efficient means of collecting safety-relevant data proactively, rather than waiting for problems to manifest themselves through adverse and sentinel events. The SAQ can be used to assess strengths and weaknesses in a given organization and to provide a basis for suggesting interventions. Examples of interventions include: briefings, checklists, executive walk-rounds, human factors training, multidisciplinary rounds, and the Comprehensive Unit-Based Safety Program (CUSP).
For example, a poor teamwork climate in the OR may indicate a need for preoperative, multidisciplinary surgical briefings, with participation by anesthetists, surgeons, and nurses. More than 90 percent of OR personnel report that briefings are important for patient safety, but only 23 percent report that briefings are routinely held. On average, surgical briefings require less than two minutes; they cover the plan for contingencies for “this patient, this procedure, this equipment, and this team today,” including who is responsible for tasks and what the expectations are. Surgical briefings have been shown to improve nurse retention rates and to have a positive impact on teamwork climate as shown in the higher percentage of respondents reporting that nurse input is well received, that they know the names of the personnel they work with, and that they feel comfortable speaking up if they perceive a problem with patient care.
Poor teamwork climate in the ICU might suggest a need for multidisciplinary rounds (Uhlig et al., 2001), whereas a poor safety climate might suggest a need for executive walk-rounds (Frankel et al., 2003) or CUSP (Pronovost et al., unpublished). CUSP is an eight-step program developed by the Johns Hopkins Hospital Patient Safety Committee and implemented in hospital work units, beginning in ICUs. Improvement teams were identified in each unit; outcome variables included changes in safety climate from pre-implementation to six months post-implementation, as well as medication errors, length of stay, and nursing turnover rates. CUSP was carried out in the Weinberg Intensive Care Unit; a second ICU (the Surgical Intensive Care Unit) was used as a control (see Figure 3). The evidence from Johns Hopkins Hospital demonstrates that safety climate can be improved and that these improvements are associated with decreases in medication errors, lower nurse turnover rates, and shorter ICU lengths of stay (Pronovost et al., unpublished).
To date, more than 150,000 copies of the SAQ are in circulation, many being used in longitudinal quality-of-care investigations. As our understanding of health care climates and contextual factors evolves, we are becoming better equipped to improve quality of care. Current research at the University of Texas Center of Excellence for Patient Safety Research and Practice is focused on the relationships between provider attitudes and patient, provider, and organizational outcomes. Some preliminary evidence shows that SAQ factors are related to annual rates of nurse turnover (Roberts, 2002; Sexton, 2002), medication errors, and ICU length of stay (Pronovost et al., unpublished). Additional links to outcomes have been found outside of medicine, where predecessors of the SAQ have been linked to pilot performance (Helmreich, 1984), pilot error management (Sexton and Klinect, 2001), and incident rates among night train conductors in Japan (Itoh et al., 2000). Taken together, these relationships suggest that the SAQ can shed light on important clinical, economical, and administrative issues in medicine and beyond.
Cook, R.I., and D.D. Woods. 1994. Operating at the Sharp End: The Complexity of Human Error. Pp. 255–310 in Human Error in Medicine, M.S. Bogner, ed. Hillside, N.J.: Lawrence Erlbaum and Associates.
Department of Health. 2000. Organisation with a Memory. London: The Stationery Office, National Health Service.
Frankel, A., E. Graydon-Baker, C. Neppl, T. Simmonds, M. Gustafson, and T.K. Gandhi. 2003. Patient safety leadership walkrounds. Joint Commission Journal on Quality Improvement 29(1): 16–26.
Gregorich, S.E., R.L. Helmreich, and J.A. Wilhelm. 1990. The structure of cockpit management attitudes. Journal of Applied Psychology 75(6): 682–690.
Helmreich, R.L. 1984. Cockpit management attitudes. Human Factors 26(5): 583–589.
Helmreich, R.L., A.C. Merritt, P.J. Sherman, S.E. Gregorich, and E.L. Wiener. 1993. The Flight Management Attitudes Questionnaire (FMAQ). NASA/UT/FAA Technical Report 93-4. Austin, Texas: University of Texas Press.
Helmreich, R.L., and A.C. Merritt. 1998. Culture at Work in Aviation and Medicine: National, Organizational, and Professional Influences. Aldershot, U.K.: Ashgate Publishing.
IOM (Institute of Medicine). 1999. To Err Is Human: Building a Safer Health System, L.T. Kohn, J.M. Corrigan, and M.S. Donaldson, eds. Washington, D.C.: National Academy Press.
Itoh, K., H.B. Andersen, H. Tanaka, and M. Seki. 2000. Attitudinal factors of night train operators and their correlation with accident/incident statistics. Pp. 87–96 in Proceedings of the 19th European Annual Conference on Human Decision Making and Manual Control, Ispra, Italy, June 26–28, 2000.
Lauber, J. 1995. Putting Professionalism in the Cockpit. CRM Advocate 95.1. Available online at: http://users2.ev1.net/~neilkrey/crmdevel/resources/crmadvocate/95_1/95_1.htm#2.
Leape, L.L. 1994. Error in medicine. Journal of the American Medical Association 272(23): 1851–1857.
Maurino, D.E., J. Reason, N. Johnston, and R.B. Lee. 1995. Beyond Aviation Human Factors. Aldershot, U.K.: Ashgate Publishing.
Merritt, A.C. 1996. National Culture and Work Attitudes in Commercial Aviation: A Cross-Cultural Investigation. Unpublished doctoral dissertation, University of Texas at Austin.
Pronovost, P.J., L. Morlock, and T. Dorman. 2001. Creating Safe Systems of ICU Care. Pp. 695–708 in Year Book of Intensive Care and Emergency Medicine, J.L. Vincent, ed. Heidelberg: Springer Verlag.
Pronovost, P.J., B. Weast, C. Holzmueller, B.J. Rosenstein, K.B. Haller, E.R. Feroli, J.B. Sexton, and H.R. Rubin. Unpublished. Evaluating a culture of safety. Submitted to Quality and Safety in Healthcare.
Reason, J.T. 1995. Understanding Adverse Events: Human Factors. Pp. 31–54 in Clinical Risk Management, C.A. Vincent, ed. London: British Medical Journal Publications.
Reason, J.T. 1997. Managing the Risks of Organizational Accidents. Aldershot, U.K.: Ashgate Publishing.
Roberts, P.R. 2002. In Pursuit of a Safety Culture in New Zealand Public Hospitals. Masters Thesis, Victoria University of Wellington, New Zealand.
Sexton, J.B. 2002. A Matter of Life or Death: Social Psychological and Organizational Factors Related to Patient Outcomes in the Intensive Care Unit. Unpublished doctoral dissertation, University of Texas at Austin.
Sexton, J.B., E.J. Thomas, and R.L. Helmreich. 2000. Error, stress, and teamwork in medicine and aviation: cross sectional surveys. British Medical Journal 320(7237): 745–749.
Sexton, J.B., and J.R. Klinect. 2001. The Link between Safety Attitudes and Observed Performance in Flight Operations. Pp. 7–13 in Proceedings of the 11th International Symposium on Aviation Psychology. Columbus, Ohio: Ohio State University Press.
Shortell, S.M., D.M. Rousseau, R.R. Gillies, K.J. Devers, and T.L. Simons. 1991. Organizational assessment in intensive care units (ICUs): construct development, reliability, and validity of the ICU Nurse-Physician Questionnaire. Medical Care 29(8): 709–726.
Thomas, E.J., J.B. Sexton, and R.L. Helmreich. 2003. Discrepant attitudes about teamwork among critical care nurses and physicians. Critical Care Medicine 31(3): 956–959.
Uhlig, P.N., C.K. Haan, A.K. Nason, P.L. Niemann, A. Camelio, and J. Brown. 2001. Improving Patient Care by the Application of Theory and Practice from the Aviation Safety Community. Pp. 1–9 in Proceedings of the 11th International Symposium on Aviation Psychology. Columbus, Ohio: Ohio State University Press.
Vella, K., C. Goldfrad, K. Rowan, J. Bion, and N. Black. 2000. Use of consensus development to establish national research priorities in critical care. British Medical Journal 320(7240): 976–980.
Vincent, C., S. Taylor-Adams, and N. Stanhope. 1998. Framework for analysing risk and safety in clinical medicine. British Medical Journal 316(7138): 1154–1157.
Engineering the Patient and Family into the Patient Care Team
University of Wisconsin-Madison
The basic message I want to convey is that investing engineering efforts in enabling patients and families to participate more fully in their own care is as important as investing in improving the health care system per se. My focus is on patients and families more than on improving the health delivery system.
In January of this year, my mother died. She had Alzheimer’s for about seven years, and when she developed shortness of breath, my sister-in-law took her to the hospital, thinking that she had pneumonia. She may have been right, but within a day that pneumonia, in some way or other, changed to congestive heart failure. The next day she had a heart attack, and on the third day she died. Setting aside the grieving, in a sense, it was fascinating to watch what happened during those three days. I watched a wonderful woman, whose mind had begun to fail but who was still capable of recognizing people and having conversations and who was still physically quite fit, change into a person connected to oxygen tubes, antibiotic IVs, and catheters, surrounded by monitors, hands and body restrained, unable to move, unable to twist, unable to lie on her side. My mother, who never slept on her back, was expected to endure these conditions as part of her “care.” All my mother wanted to do was to go home.
It was nice that she died pretty quickly, but she died after more than $20,000 worth of treatment had been given in her last three days. My mom hated health care systems and never went to doctors; she didn’t like them, didn’t trust them, and didn’t think they were very useful. As a result, that $20,000 was a lot more than had ever been spent on her health care during her entire life. And it was spent to see her die in a very unpleasant situation. As a matter of fact, my mom was not unique. In 1997, the Institute of Medicine concluded that “the care of the dying does not even approach the norms of decency” (IOM, 1997). I concluded the same thing from this experience.
One of the fascinating things to me about that experience was that there were so many decisions that could’ve, should’ve, maybe would’ve been made if I and the rest of the family had had access to information and had known what our rights were. Could we have had the tubes removed? All she wanted was to remove the tubes and go home. Did we have the right to do that? Could we have done that? Could we have just taken her home and said, “Look, this is silly. Let’s let her die.” Would the assisted-living facility have taken her back? How do you manage 24-hour care, if you can find it? Could I have asked her if she wanted to die, and if so what words should I have used to ask her? Should I have called a hospice, and how would I have found a good one? How about a nursing home? I know that nursing homes vary enormously in quality. Which one should I have put her in? How could I have gotten her into the right one, and how could I have ensured that she was getting the right care?
Physicians were not helpful. They are very much “into” curing patients, and I can’t blame them. I would be too. But they headed for the hills. In three days, we didn’t talk to a single physician! At the same time, the nurses kept telling us they could not answer our questions and that we should talk to the doctors. What we really needed was access to information that we could get on our own. We needed help in making decisions. Techniques like decision analysis might have helped us through this tough time. We couldn’t get that help from the health care system.
Tom Ferguson uses a triangle with a waterline dividing the tip of the iceberg of professional care from the rest of health care (Figure 1) (Ferguson, 1987). He argues that the vast majority of health care is self-care, not care from the traditional health care delivery system. Above the waterline is the care delivered by professionals; below it is the 95 percent of health care that is delivered by individuals and their families. One of the key things engineering can and should do is find a way to raise the waterline, to help the patient and the family provide more care themselves. This will require a multidisciplinary effort that goes beyond industrial engineering to include other disciplines, such as biomedical engineering; other professionals, such as psychologists and communications experts; and, beyond professionals, patients and families. If we can raise the waterline and increase the proportion of care delivered by patients and families themselves, we will dramatically cut the cost of care and greatly improve its quality.
This goal is not just for the dying, of course. The same kind of problems occur with drug addicts, kids with severe asthma, women with breast cancer, men with prostate cancer, stroke victims, etc. In all of these situations, the family or patients can make a difference, can be involved, can influence outcomes, possibly more dramatically than the health care system itself can. We need to find ways of making that happen, and engineering is uniquely positioned, with the kinds of tools and techniques it offers (and in collaboration with other fields and the patients and family), to do that. I will give you three examples.
One of them is e-health (interactive health communications, consumer informatics). The basic idea is that vehicles such as computers and the Internet can make a huge difference in the capability of patients and families to care for themselves. The system I will describe is CHESS (the comprehensive health enhancement support system) (Figure 2), which we developed at the University of Wisconsin in the late 1980s (Gustafson et al., 2002). Since then, it has gone through many evolutions, and it now addresses many topics (e.g., breast cancer, asthma, heart disease, depression) and offers 17 different services, such as answers to frequently asked questions, an action plan to determine how likely people are to implement changes in their lives, and decision analysis to help people better understand their options and their values. Hidden beneath some of these tools, although patients would never notice, are Bayesian models, multi-attribute utility models, statistical process controls, human-computer interface designs, and several other engineering tools.
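To make the multi-attribute utility idea concrete, here is a minimal sketch of the technique. The attributes, weights, and treatment options are entirely hypothetical; CHESS's actual models are not described in the text.

```python
# Minimal multi-attribute utility sketch. Everything here is hypothetical
# for illustration; CHESS's real models are not specified in the text.
# Each option is scored on several attributes (0 = worst, 1 = best); a
# patient's weights encode how much each attribute matters to them, and
# the weighted sum ranks the options.

def utility(option_scores, weights):
    """Additive multi-attribute utility; weights assumed to sum to 1."""
    return sum(weights[a] * option_scores[a] for a in weights)

# A patient who values survival most, but also cares about side effects.
weights = {"survival": 0.5, "side_effects": 0.3, "recovery_time": 0.2}

options = {
    "surgery":   {"survival": 0.9, "side_effects": 0.4, "recovery_time": 0.3},
    "radiation": {"survival": 0.8, "side_effects": 0.7, "recovery_time": 0.6},
}

ranked = sorted(options, key=lambda o: utility(options[o], weights), reverse=True)
print(ranked)
```

The point of such a tool is not to dictate a choice but to make the patient's own values explicit: with different weights, the ranking can flip.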
Although CHESS has proven to be quite powerful, it has only scratched the surface of what industrial engineering, operations research, and other kinds of engineering could do. CHESS does not even take full advantage of statistical process control as a way of helping patients monitor their own care and helping families monitor the status of their loved ones. We don’t use embedded chips to feed information to CHESS about the health status of a patient. And our interface still has a long way to go before it is truly easy to use. We could give families of patients approaching the end of life an opportunity to see how pain and distress have changed over time. Patients could use that information as a vehicle to communicate with clinicians. We are working on that now.
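One concrete form this monitoring could take is an individuals control chart over self-reported symptom scores. The sketch below uses the standard individuals-chart convention (limits at the baseline mean ± 2.66 × the average moving range); the pain scores are invented, and nothing here is drawn from CHESS itself.

```python
# Individuals (X) control chart sketch for a patient's daily pain scores.
# Standard SPC convention: limits = baseline mean +/- 2.66 * average
# moving range (2.66 = 3/1.128). The data are invented for illustration;
# this is not CHESS's actual method.

def control_limits(baseline):
    """Lower and upper control limits from a stable baseline period."""
    center = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return center - 2.66 * mr_bar, center + 2.66 * mr_bar

def flag(values, baseline):
    """Indices of new readings outside the baseline control limits."""
    lo, hi = control_limits(baseline)
    return [i for i, v in enumerate(values) if v < lo or v > hi]

baseline = [3, 4, 3, 2, 3, 4, 3, 4]   # a stable week of 0-10 pain scores
recent = [3, 4, 8, 3]                 # the third new reading spikes
print(flag(recent, baseline))
```

A family member watching such a chart sees not just the numbers but whether a change is within the patient's ordinary variation or a signal worth raising with a clinician.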
There are many other ways interactive health communication systems can make a difference in patients’ lives. In the last 10 years, we’ve done a lot of research on CHESS, and the results of our work and other people’s work suggest that these kinds of tools are extensively used, especially by the elderly and the underserved (Gustafson et al., 1998, 2001). These two groups use them differently from the rest of us, and in fact they are the ones who seem to benefit the most—in terms of improved quality of life and less expensive health care services.
The Internet has great potential. The problem is that not enough of the skills and tools of engineering have been applied. We just completed a study of 300 breast cancer patients: one-third of them got usual care; one-third were given computers with access to the Internet, were trained to use the Internet, and were given a list of high-quality breast cancer sites; and one-third got CHESS (Gustafson et al., 2003). The straight line in Figure 2 (where it says zero) is the impact of having usual care; this was our control group. Notice how many things are below the line; those are Internet results.
The results of this study are preliminary but make some important points. First, if you give people access to the Internet, teach them how to use it, and give them high-quality Internet sites, they become, if anything, more confused, more worried, and more depressed. If the results continue to hold, this suggests that, although the Internet has tremendous potential, that potential won’t be reached until we can also take full advantage of decision analysis, statistical process control, and other kinds of industrial engineering and operations research tools and integrate them into interactive delivery systems.
I think engineering will be the key to making the Internet a truly useful health intervention. Moreover, this will be the result not only of engineering tools, but also of engineering insight. For instance, decision analysis offers not only tools, such as utility models, but also an understanding of how people make decisions and of how to communicate uncertainty effectively. These are critical issues in sharing information with patients and families. Other issues relate to the way information is displayed—the appropriate combination of audio, video, and text, for instance. Engineering tools, such as cost analyses, can help us determine the cost-effectiveness of interventions. We also need to address the design and relative roles of PCs, PDAs, cell phones, and monitoring chips embedded in the body.
We ought to convene a panel of experts to address the following problem. Suppose we are caring for people (e.g., with severe asthma), and there is only one rule: no health care professionals can be involved, no doctors, nurses, or any other health care provider—just the patient and family. Let’s design a system that’s completely technologically based, recognizing, of course, that this could, and should, never happen. Then let’s back off the assumption of involving no health professionals, but only involve them when it is absolutely essential to do so. What would that system look like? What tools, techniques, and resources would we need? I think if we took that kind of approach, we would get an idea of the potential of engineering to make a difference that we couldn’t make otherwise.
My point is that the National Academy of Engineering and Institute of Medicine should engage in blue-sky work where we assume that we can do without the health care system as we know it, and then back into the current health care system only when it’s absolutely necessary. Our job should not be to improve the existing system but to develop systems to help patients and families play a more central role in their own care. We can’t afford the system we have, and the sooner we get away from the idea of improving it and on to the idea of replacing it, the more likely our work is to make a substantial difference.
In addition, we should not limit ourselves to the acute health care delivery system. There are many other problem areas. I am the national program director for a Robert Wood Johnson Foundation program called Paths to Recovery, which aims to improve access to and retention in substance abuse treatment. When I first got into this area, I didn’t know anything about heroin or other addictive drugs. So I took on a persona and got myself admitted for heroin addiction. Everybody knew I was a fake, that I’d never seen heroin in my life; I still don’t know what it looks like. But I adopted a persona, and I walked in and said I wanted to get help. And they said, “OK, we need to collect some information from you.” They spent two hours collecting information from me, and then they said, “Yes, you need to be admitted, but we don’t have a bed. Call back once a week, and tell us if you’re still interested.” Now heroin addiction is a chronic disease, where timing means everything. A heroin addict can desperately want help one hour and the next hour can give up and desperately look for the next packet of heroin. But they told me to call back. When I called back, I got an answering machine, “leave a message,” first week. “Leave a message,” second week. “Leave a message,” the third week, fourth week, fifth week, sixth week, “leave a message.” The loneliness and hopelessness that I felt (even though I don’t have the problem) was incredible.
Then I went to my “staffing” where they decide how to treat me, to find out how the process worked. Several professionals were at this meeting talking about what to do with me and other potential patients. Remember this was after I had been interviewed for over two hours to collect data on my condition. The staffing team (which did not include the person who had interviewed me) had one small paragraph of information about me, and that is what they based their decision on. All of the other information collected from me was paperwork compliance, simply satisfying a regulatory body. The inefficiencies and duplications of effort and waste in that system were terrible! I had to travel on a bus route for over an hour to reach the location where my interview took place. Would it have been possible to develop a computer system to interview me that could have saved staff time and allowed me to be interviewed at any public library? Would it be possible to develop an Internet-based system to help the family help heroin addicts? This organization did not have a bed for me, but another one might have. Would it be possible to have an inventory system that could have placed me in an open bed immediately? Did I really need to be placed in an inpatient facility? Could outpatient care have been at least partially effective while I was waiting for a bed? Would it be possible to develop a computer-based protocol that could have made these decisions immediately without my waiting for seven weeks to get an opportunity for treatment? There are so many opportunities beyond the traditional physical health system in areas such as substance abuse and mental health, areas where engineering can make a huge difference.
We also need to ask how engineering can contribute to the diffusion of innovation or the implementation of change. Often changes in the health care field simply disappear, and the system regresses to its previous condition. One of the things we’ve got to figure out is how to make changes that stick. That’s going to take a lot of work. One way might be to use decision analytic models to predict and explain whether changes will be made and sustained. The problems are much too complex for us to try to solve them alone. We must work with communications scientists, organizational development scientists, psychologists, educators, economists, and others. It’s going to take all of us working together to solve them.
Finally, I think we should be trying to put ourselves out of business. Our tools are so powerful. They have so much potential. But too often, we focus on developing more sophisticated tools rather than on asking how we can spread the application of the tools we have. We engineers ought to assume that with the kinds of information technology out there today, we can design systems that will allow a patient or family to do their own simulations and optimization. There’s no reason we can’t make the technology we have so easy to use and so automated that the assumptions are protected and the data collection mechanisms are developed. Our tools could then be used by the average citizen. We ought to be engaged in developing technologies that automate our field so that everyone can be an industrial engineer. By trying to put ourselves out of business, engineering will find a future that is more dynamic and useful than we can even imagine.
Ferguson, T. 1995. Consumer health informatics. Healthcare Forum Journal 38(1): 28–33.
Gustafson, D.H., F. McTavish, R. Hawkins, S. Pingree, N. Arora, J. Mendenhall, and G.E. Simmons. 1998. Computer support for elderly women with breast cancer: results of a population-based intervention (letter). Journal of the American Medical Association 280(15): 1305.
Gustafson, D.H., R. Hawkins, S. Pingree, F. McTavish, N.K. Arora, J. Mendenhall, D.F. Cella, R.C. Serlin, F.M. Apantaku, J. Stewart, and A. Salner. 2001. Effect of computer support on younger women with breast cancer. Journal of General Internal Medicine 16(5): 435–445.
Gustafson, D.H., R. Hawkins, E. Boberg, F. McTavish, B. Owens, M. Wise, H. Berhe, and S. Pingree. 2002. CHESS: 10 years of research and development in consumer health informatics for broad populations, including the underserved. International Journal of Medical Informatics 65(3): 169–177.
Gustafson, D.H., R. Hawkins, S. Pingree, F. McTavish, W. Chen, K. Volrathongchai, W. Stengle, and J. Stewart. 2003. The Internet as Source of Health Information and Support: Less than Meets the Eye? The Center for Health Systems Research and Analysis, University of Wisconsin, Madison.
IOM (Institute of Medicine). 1997. Approaching Death: Improving Care at the End of Life, M.J. Field and C.K. Cassel, eds. Washington, D.C.: National Academy Press.
Connecting Patients, Providers, and Payers
John D. Halamka
CareGroup Health System
Harvard Medical School
Harvard has moved all of its clinical, financial, and administrative applications to the Web. The changeover began in 1998, when all clinical information was put online. Now one can have complete, ubiquitous, transparent, seamless access to all aspects of the clinical care process. The technologies are robust and secure, and all work flow processes take place on the Web. From an engineering standpoint, the change was made by taking all of the legacy systems that already existed for Harvard’s patients and employees and wrapping them, using XML Web services, to provide standards-based information exchanges. Today, I will present this new system.
Everything in the system is secure, and everything is audited. When the system is accessed by a provider, the first screen provides access to the nine million patients in the CareGroup master patient index, which covers six hospitals (Beth Israel-Deaconess, Mt. Auburn, New England Baptist, and three community hospitals).
One serious impediment to using computerized medical records is that there is no universal health identifier in the United States. To search for a patient’s record, therefore, the Harvard system uses a statistical, probabilistic match based on demographic information. For example, using this model to gather information about a Martha Ford, one can see (Figure 1) that patients with that name have visited the East Campus, the Mt. Auburn Campus, and the West Campus, with different medical record numbers for each of those visits; but, with consolidated access, it is possible to see that all three Martha Fords are the same person (this patient has given permission for access to her medical records).
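The probabilistic-matching idea can be sketched with a simple agreement-weight score of the kind used in record linkage. The fields, weights, and threshold below are invented for illustration; the actual CareGroup matcher is not described in this talk.

```python
# Illustrative sketch of probabilistic demographic matching: agreeing
# fields add weight, disagreeing or missing fields subtract a penalty,
# and a score above a threshold means "treat as the same person."
# All weights and the threshold are hypothetical.

def match_score(a, b):
    """Sum field-agreement weights; higher means more likely a match."""
    weights = {"name": 4.0, "dob": 6.0, "zip": 2.0, "phone": 3.0}
    score = 0.0
    for field, w in weights.items():
        if a.get(field) and a.get(field) == b.get(field):
            score += w
        else:
            score -= 1.0           # disagreement (or missing) penalty
    return score

east = {"name": "martha ford", "dob": "1948-02-11", "zip": "02139"}
west = {"name": "martha ford", "dob": "1948-02-11", "zip": "02139",
        "phone": "617-555-0100"}

THRESHOLD = 8.0
print(match_score(east, west) >= THRESHOLD)  # True: consolidate the records
```

In practice such systems also handle near-matches (nicknames, transposed digits), but the core idea is the same: a weighted evidence score in place of a universal identifier.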
Next, one can assemble her entire medical record in real time from all of the places she sought care. Information is stored in very different ways, but, because the legacy systems have been wrapped in a standards-based package, it is possible to click on, for instance, medications, and get a complete look at the medications she is using. That information can be forwarded, for instance, to a drug interaction engine that lists, in order of severity, the interactions of all the medications she has taken in the entire continuum of her care. It is possible to look at text data, such as her last echocardiogram, to obtain all of the echo parameters. It is also possible to pull out telemetry data, even data stored in an old, non-standards-based system, convert it to a standards-based display, and deliver it. This is truly a time-series, scalable object that can be measured, printed, and manipulated.
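The "wrapping" step can be illustrated with a toy example: a legacy pipe-delimited medication record converted into a standards-based XML envelope. The legacy layout and the tag names here are invented; the real system uses XML Web services over its actual legacy formats.

```python
# Minimal sketch of wrapping a legacy record in an XML envelope so that
# downstream consumers (displays, interaction engines) see one format.
import xml.etree.ElementTree as ET

def wrap_legacy_medication(raw):
    """Convert a hypothetical 'DRUG|DOSE|FREQ' record into XML."""
    drug, dose, freq = raw.split("|")
    med = ET.Element("medication")
    ET.SubElement(med, "drug").text = drug
    ET.SubElement(med, "dose").text = dose
    ET.SubElement(med, "frequency").text = freq
    return ET.tostring(med, encoding="unicode")

print(wrap_legacy_medication("lisinopril|10 mg|daily"))
# <medication><drug>lisinopril</drug><dose>10 mg</dose><frequency>daily</frequency></medication>
```

Once every legacy source emits the same envelope, a single interaction engine or display component can consume them all, which is the point of the wrapping strategy described above.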
All medical record images are also online and are DICOM based. Because Martha’s chest x-ray is an object, it can be examined in many ways, and old films can be pulled up for comparison. Moreover, with this system one can also look at laboratory results over time, such as her CBCs trended over time for the last 10 years. This is the organizational context of information ubiquity, pulling together all of the data from wherever it is stored.
Provider order entry is also complete; there is 100 percent compliance throughout the entire organization. To achieve this, many processes had to be changed. Now, no voice or handwritten orders are done anywhere at Beth Israel-Deaconess Medical Center.
Here is an overview of how provider order entry works. Farr 2 is a typical medical ward. From a dashboard of all patients in that ward, one can click on any patient and get the patient record and a snapshot of the patient. All the standard orders that would appear on paper are available on the Web. When orders are entered, the system responds with queries in the form of rules or reminders. One can, for instance, set a rule that a flu vaccine has to be given to a patient per the standard protocol. One can then click a button to document that the patient has received the flu vaccine or click a button to order the vaccine. The system offers a quick pick list of all medications the clinician has ordered in the past for the patient and the most common orders, as well as overlays of some pharmacy and therapeutics standard formulary medications.
The physician order entry system has some fail-safe mechanisms. If a clinician orders something that might be bad for the patient—for instance, cefazolin for a patient allergic to penicillin—the system notifies the clinician of a potential drug/allergy interaction (Figure 2 shows a typical flagged interaction). If the clinician overrides the warning and continues ordering, the system queries the order. Let’s say the clinician thinks the allergy history is questionable and wants to monitor the patient. The system then immediately fires off another set of care pathways or rules that advise about the drug. Based on the newest information about the patient’s size and test results, it calculates the recommended dose and dose frequency of the drug and suggests body parameters that a clinician may want to follow while administering this drug. Thus, this system helps reduce adverse drug events.
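The two checks just described can be sketched as simple rules. Everything here is illustrative only: the cross-reactivity table, the mg/kg formula, and the cap are invented for the example and are not clinical guidance or the actual CareGroup rule base.

```python
# Hedged sketch of two order-entry fail-safes: an allergy
# cross-reactivity warning and a weight-based dose suggestion.
# Drug lists and numbers are made up for illustration.

CROSS_REACTIVE = {"penicillin": {"cefazolin", "amoxicillin"}}

def allergy_warning(order, allergies):
    """Return a warning string if the ordered drug may cross-react."""
    for allergen in allergies:
        if order in CROSS_REACTIVE.get(allergen, set()):
            return f"WARNING: {order} may cross-react with {allergen} allergy"
    return None

def suggested_dose_mg(weight_kg, mg_per_kg=25, max_mg=2000):
    """Weight-based dose, capped at a maximum single dose."""
    return min(weight_kg * mg_per_kg, max_mg)

print(allergy_warning("cefazolin", ["penicillin"]))  # fires a warning
print(suggested_dose_mg(70))                         # 1750
```

The real system layers many such rules, but each one is conceptually this small: a lookup against the patient's record at the moment of ordering.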
If the clinician orders a different drug that may be restricted, an infectious disease (ID) fellow may have to approve it. This is a form of consult. The system presents a built-in work flow: click on the button, and the system pages the ID fellow via the Web to get approval for the drug. By the way, disapproval happens less than 10 percent of the time. The clinician may choose to default to the standard dose and standard route, which also requires only one click.
The system covers all aspects of ordering care plans and processes and contains standard order sets for diseases such as congestive heart failure and asthma. The system has also proven to be very helpful in the emergency room, where it has resulted in a 30-minute decrease in patients’ length of stay. At the same time, patient and provider satisfaction have risen substantially—before implementation, 60 percent rated the ER experience as excellent; after implementation, 85 percent rated it as excellent.
The system also gives the care team the organizational context to enable patients to take part in their care. Here is an example. In 1999, Harvard created a patient site, working with people in the patient-centered movement, that allows patients to have ubiquitous access to their own medical records, with secure, encrypted, doctor–patient e-mail and convenient transactions, such as appointment scheduling, referrals, and prescription renewal.
The patient site begins at patientsite.caregroup.org. (A typical screen shot is shown in Figure 3). For an overview, you just click on the “Take a Tour” button. This is how it works. A patient enters the system. The patient has a unique portal with access to messages of the day from the doctor; links to providers and websites; and life events such as appointments, a flu vaccine, or a colonoscopy. The patient can send secure—not standard—e-mails to doctors at any time. The patient sites are backed up with behind-the-scenes triage rules so that the medical staff can observe patient transactions and route messages appropriately, for instance, to a doctor, the appointments desk, or a nurse practitioner. The information on the transactions goes into the permanent medical record, where it is retained for 30 years. All of this happens in a secure, audited way. Patients can see the same medical records their doctors see, with certain limitations; patients can access information about their medications, visits, reports, x-rays, allergies, and problems, but cannot access laboratory reports, microbiology, or DICOM imagery. That happens across all institutions and outpatient facilities. CT, MRI, pathology, and psychology results are delayed for 14 days, to ensure, for example, that patients do not first read about a cancer diagnosis online; bad news does not transmit well electronically. The idea is to get information to patients as soon as possible without compromising the doctor–patient relationship.
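The triage rules and the 14-day embargo on sensitive results can be sketched as follows. The message categories, routing targets, and date logic are assumptions for illustration; only the routing destinations, the 14-day delay, and the sensitive result types come from the description above.

```python
# Sketch of patient-site logic: route each incoming message, and hold
# CT/MRI/pathology/psychology results from patient view for 14 days.
# Categories and the routing table are hypothetical.
from datetime import date, timedelta

ROUTES = {
    "appointment": "appointments desk",
    "prescription": "nurse practitioner",
    "clinical": "physician",
}

def route_message(category):
    """Send a patient message to the right member of the care team."""
    return ROUTES.get(category, "physician")   # default to the doctor

def visible_to_patient(result_date, kind, today):
    """Sensitive results are embargoed for 14 days after they are filed."""
    if kind in {"ct", "mri", "pathology", "psychology"}:
        return today >= result_date + timedelta(days=14)
    return True

print(route_message("appointment"))                                    # appointments desk
print(visible_to_patient(date(2003, 5, 1), "mri", date(2003, 5, 10)))  # False
```

The design intent is visible even in this toy version: routine traffic never reaches the physician unnecessarily, and a patient cannot stumble onto a cancer diagnosis before a clinician has had a chance to deliver it.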
One question about such a system is “cyberchondria.” It’s midnight, say, and a patient decides to type in a complete 27-page medical history, including every brand-new symptom, such as sudden chest pains. What is the liability issue? Harvard has not encountered this problem. First of all, the site is full of disclaimers and warnings, and patients are repeatedly told to call 911 in case of emergency. Patients have been extraordinarily reasonable when it comes to interplay with their doctors. They have been using the system correctly, even adding in their over-the-counter medications. For example, a patient came in with refractory hypertension. He was treated with ACE inhibitors, beta blockers, and calcium channel blockers, but nothing helped. At the patient site, the patient documented that he was taking ephedra five times a day for energy. That was good to know because that is like drinking 40 gallons of coffee a day. Of course, this exacerbated his hypertension.
In another example, a post-liver transplant patient who was feeling depressed took St. John’s wort for the depression. One thing St. John’s wort does is wrap up the liver’s cytochrome P-450 system so immune rejection drugs are processed extraordinarily rapidly, leading to subtherapeutic levels and rejection of the transplant. This problem was picked up entirely because of the shared medical record, amendable by the patient and seen by the care staff.
The patient portal also makes available standard services, such as medication renewal. Patients go to the list of their medications, click on the one that needs renewing, and a prescription renewal request appears, querying dose, quantity, and pharmacy. After review by a doctor, the renewal is autofaxed to the pharmacy. The portal also has an appointments feature. Twenty percent of our doctors allow patients to self-schedule into their calendars. There is also compliance with the Health Insurance Portability and Accountability Act, so that patients can find out who has been looking at their medical records over time.
About 10,000 patients a month use this system, and 2.5 million transactions have been carried out. The average patient sends 1.2 e-mails to the provider every month. Ninety percent of those are triageable to extenders, such as nurse practitioners. Even in a busy practice, the doctor does not see more than five or ten clinical messages a day, which, moreover, usually replace phone calls. The system has become an asynchronous communication medium, allowing doctors to answer e-mails at will instead of having to place phone calls that break up the day. This makes the work flow much more efficient. As long as there is a framework with good engineering principles giving the patient and the doctor shared information and a mechanism for questions and answers, problems with excessive volumes of e-mail do not arise.
To ensure that the system is improving quality and using resources appropriately, the performance of the system is evaluated with metrics. This can be done because all data are warehoused; there are about 40 terabytes of health care data. Metrics based on good data, patient involvement, and control systems give doctors an understanding of how well and how appropriately they are performing. It is also possible to assess performance at the organization level.
The entire enterprise has really helped Harvard, as an organization, meet some of the challenges of the last few years. The Web is an ideal technology for connecting payers, providers, and patients. Creating this system did involve some challenges, which were mostly adaptive and organizational. The important thing about the system is that patients can access their information and participate more often in their own care. Consumer empowerment is a reality that is already redefining the practice of medicine.
New Paradigms for Working and Learning
Harvard Business School
Most of us think of the health care process as the tasks and activities we see performed by doctors and nurses and the technologies and settings they use. However, behind these tasks and technologies are problem-solving activities, predominantly information gathering and decision making, undertaken by members of the patient’s caregiving team. These decisions are based on a huge body of medical knowledge developed over centuries. Hence, at its heart the practice of medicine is the application of a general body of medical knowledge to a specific patient for the purpose of resolving the health-related problem for which the patient sought treatment.
Learning (the development and dissemination of the knowledge underlying medical care and medical decisions) plays a pivotal role in this endeavor. Learning is the mechanism by which we advance the practice of medicine and ensure that these advances are widely applied. Learning occurs at many levels in the health care industry. The most prominent levels are the industry level (learning that derives from funded basic research and new technology development) and the individual practitioner level (learning that occurs in medical school and through continuing medical education). At the industry level, learning means the creation of new knowledge; on the individual practitioner level, it refers to the dissemination of existing knowledge. Learning occurs at other levels, too. Patients learn through their experiences of medical problems, and whole organizations learn as they develop experience with particular classes of problems or as they implement new technologies.
One particularly important setting in which learning takes place is inside a delivery organization during the adoption of innovations, such as new services or technologies. Some innovations are, by their nature, nettlesome, difficult to adopt, and pose significant learning challenges. I call these innovations “interactive,” to distinguish them from what I call “component” innovations, which cause relatively little disruption to the processes and systems in which they are used. Consider a “me-too” drug, a typical component innovation: we have an existing process of care for treating patients with congestive heart failure, and we simply substitute one drug for another (e.g., we replace ACE inhibitor “A” with ACE inhibitor “B”). The adoption of an interactive innovation, by contrast, occasions a redesign of a process of care, a redistribution of tasks, a change in sequencing—in effect, a disruption of organizational routines.
Interactive innovations are technologies that disrupt processes. Medical devices and new information systems are frequently interactive innovations. A new generation of biopharmaceuticals targeted more specifically based on a genetic profile may also disrupt processes of care, organizational routines, and team configurations by requiring members of the care delivery team to work together in new ways. As interactive innovations occasion rearrangements of organizational roles and routines, care delivery teams have to learn to use new technologies, perform new tasks, and develop new relationships and new ways of working together. Hence, organizational learning, as well as learning at the individual practitioner level and the health care industry level, is an important aspect of the adoption of a new technology.
Both organizational and individual learning are correlated with experience. In cardiac surgery, for example, where much of the volume-outcome debate has taken place, this insight has motivated a requirement for mandatory minimum case numbers to credential an individual surgeon or surgical unit. The underlying assumption of the so-called volume-outcome hypothesis is that if you do something often enough, you will become good at it. Practice makes perfect.
When we mandate minimum volumes to ensure competency, we assume that all individuals learn at the same rate and that all institutions learn at the same rate. In effect, we assume that for a given aliquot of experience all surgeons and all institutions will abstract the same amount of learning. This might not be the case, however. Organizational learning is not simply the accumulation of individual learning experiences in one organization. It also requires that teams learn new ways of working together.
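The flaw in the volume-outcome assumption can be made concrete with a standard power-law learning curve: two units with identical case volume but different learning rates end up in very different places. All numbers below are invented for illustration.

```python
# Illustrative power-law learning curves: performance improves as a
# power of cumulative experience, time = initial * n^(-b). Two units
# with the same volume but different rates b diverge widely.

def procedure_time(case_number, initial_minutes, learning_rate):
    """Predicted procedure time after a given number of cases."""
    return initial_minutes * case_number ** (-learning_rate)

volume = 50                               # identical experience for both units
fast = procedure_time(volume, 300, 0.30)  # team that extracted a lot of learning
slow = procedure_time(volume, 300, 0.05)  # team that extracted little
print(round(fast), round(slow))           # same volume, very different outcomes
```

Mandating a minimum volume certifies only the x-axis of this curve; it says nothing about the exponent, which is exactly what varied across the surgical teams in the study described next.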
We recently undertook a research project to examine the learning rates of 16 surgical units in the adoption of a new surgical procedure for minimally invasive cardiac surgery. With this technique, the surgeon places the patient on femoral bypass and uses long-shafted instruments to operate through a small chest incision (Edmondson et al., 2001). This seemingly simple modification to a well-understood operation involves a substantial change in the traditional activities of each member of the surgical team and in the way team members interact with each other. The change occurs because the direct visualization of the heart in the conventional open method is replaced with remote monitoring via pressure traces and transesophageal echocardiogram images displayed on various screens in the operating room. The result has been that the new technology—a good example of an interactive innovation—has been difficult for many teams to learn to use, which has slowed its adoption.
The learning rates of the teams in our study varied significantly (a finding that was not predicted by the volume-outcome hypothesis) as did their success in adopting the new technology. Even more intriguing were the factors associated with rapid learning and successful adoption. Type of institution (academic or community) and seniority of the adopting surgeon were not particularly important factors. What mattered was whether the process of adoption of the new technology was managed as a “project.” This meant careful selection of the team members and adequate preparations, such as practice sessions before the first case, the selection of simple cases to operate on early, and debriefings after every early case to reflect on what went well and what did not. For successful adoption, these learning activities took place in an environment conducive to team-based learning—a “psychologically safe” environment. Learning as a team is made easier if team members can fail publicly—make an error or be criticized or warned of an impending error by another team member—and not be disadvantaged.
In short, team learning cannot be left to chance. As the example illustrates, although experience is clearly necessary for learning, experience alone is not enough. Learning takes place at both the individual level and the team level, but unlike individuals, teams do not learn naturally. Team learning requires an environment that is deliberately structured and managed to be conducive to learning. The role of team leader and project manager was new to the surgeons in our study. In the context of new technology adoption, they had to be not only clinical decision makers and practitioners but also team leaders and project managers.
We are becoming increasingly aware of the importance of organization in care delivery. Solo practice is giving way to group practice, and care that was once delivered by an individual is now delivered by a team. In addition, the size and complexity of the health care team have increased dramatically in the last century. Many current innovations in health care (e.g., new services, processes, and technologies) are interactive and thus have the potential to disrupt routines and processes. The successful introduction of these innovations into day-to-day care will require that team members learn to work together in new ways. And as we have seen, team learning depends on leadership more than anything else—a very different role for the rank and file physician.
Engineers have already undergone the change from solo professionals practicing their craft to members and leaders of teams of professionals who collaborate to realize difficult goals. An engineer used to be a technologist who functioned in a tightly defined engineering specialty; now engineers are project managers who use their knowledge base to manage multidisciplinary teams to complete complex projects. Health care practitioners are just beginning to undergo a similar transition. So, we can learn a great deal from engineers, not just about modeling—the subject of many of these presentations—but also about leadership and about restructuring the role of professionals.
Edmondson, A.C., R.M.J. Bohmer, and G.P. Pisano. 2001. Speeding up team learning. Harvard Business Review 79(9): 125–132.
Designing Caregiver- and Patient-Centered Health Care Systems
H. Kent Bowen
Harvard Business School
The engineering discipline, with its proclivity for seeing the world as it really is and then designing systems to make things better, offers a good perspective for addressing the dilemma facing our health care systems. Like many people, I was not aware of the chaos on the front lines of the health care system until my 14-year-old son suddenly became gravely ill. Because of a brain aneurysm, he went from an active, vibrant young man to a paralyzed boy within minutes. My wife and I essentially lived at Massachusetts General Hospital for three weeks while a team of “the best of the best” worked to save his life. During that time, I observed how the actual practice of medicine affects patients, and it became clear to me that the system was not designed to prevent errors and defects. At one point, because important information was not communicated, a grievous mistake (not directly related to the aneurysm) nearly cost my son his leg. Even though the medical team corrected the error, after I caught it, I wondered how such a mistake could have occurred in the first place. After much thought, I came to the conclusion that the nurses, physicians, and technicians were not at fault. Our ad hoc system for delivering health care conspires against the best intentions of care providers, making it extremely difficult for them to provide patient-centered, defect-free care.
Many industries have revolutionized their approaches to deliver products and services that are more customer centered, high quality, and cost effective. The automotive industry, for example, has made dramatic improvements to avoid both design and production failures. Toyota, in particular, has an operating system that delivers award-winning quality year after year. Toyota’s system is designed to bring problems to light, resolve them, and improve the system to ensure that the problems are not repeated and that the organization learns. Toyota’s approach helps frontline workers (as well as all others) be successful, as defined by the customer’s (or patient’s) needs. The goal is “defect-free operations” and learning (Spear and Bowen, 1999).
Based on examples from industry, a young colleague of mine, Professor Steven Spear, developed a case study to determine the applicability of systems thinking to health care. He engaged a former medical administrator and surgeon, Dr. John Kenagy, to work with leaders of a small community hospital in the Boston area. Like most people in the medical profession, the dedicated hospital staff wanted to provide the best care. Spear initially focused on a system for the administration of medications, using the Toyota production system (TPS) as a model for defect-free operations. First, he taught Dr. Kenagy to look at the hospital through the TPS lens. Early on, he discovered not only that the medical staff did not fully understand its own system for providing care, but also that the staff was not equipped with the tools, processes, or organizational structure to solve problems (Spear, 2001; Spear and Kenagy, 2000a,b).
Anita Tucker, a doctoral student at the time (now an assistant professor at the Wharton School, University of Pennsylvania), expanded the initial findings with studies of nursing care in 20 additional hospitals. Her studies revealed that nurses’ care of patients was constantly interrupted by system failures (Tucker, 2003). Nurses are trained to evaluate and diagnose patients and to administer a care plan based on a physician’s recommendation. Over the course of a shift, however, nurses spent only 33 to 50 percent of their time caring for patients. The rest of the time, they were searching for information, equipment, or materials or correcting mistakes. Thus, they spent most of their time compensating for the faulty system, becoming frustrated with, and cynical about, management’s work designs and rules.
The current design of most hospital work systems is disrespectful to both patients and frontline caregivers, as evidenced by the high turnover of nurses and the complaints of patients. Think about the service you receive at the best commercial establishments and compare it with the service you receive when you are admitted to a hospital. One reason for the difference is the constant and conflicting demands on hospital service personnel and caregivers. In a single hour, for example, a typical nurse works in eight different physical locations, makes 22 location changes among those eight places, and holds conversations with 15 partners on 25 different topics, all while caring for five patients in three rooms (Spear and Kenagy, 2000a). If one of those patients requires critical care, which means following strict care guidelines, it is nearly impossible for the nurse to follow the care plan. Critical-care routines are constantly interrupted by wrong medications, faulty equipment, poor information, or requests to assist colleagues.
Observations of the flow of information necessary for patient care revealed other problems. Information that originates with the patient (e.g., the patient’s insurance provider, family history, medications, medical history, symptoms) flows along many pathways to physicians, nurses, and pharmacies. In spite of large investments in information technology, getting the correct information to the people who need it, when they need it, is very problematic. Any of the pathways over which critical information flows can be blocked, and the probability that at least one will be blocked on a given day is high. If the medication-administration pathway breaks down, for example, the medication will not be administered in the right dose at the right time under the right conditions. The medication error rate has been shown to be in the parts-per-hundred range (Bates et al., 1995). The most frequent failures occur between shifts.
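To see why daily breakdowns are almost guaranteed, it helps to work the arithmetic of a multi-step pathway: if each hand-off succeeds with probability p, the whole chain works only with probability p raised to the number of hand-offs. The sketch below uses hypothetical step counts and a hypothetical per-step reliability, not figures from the studies cited in this chapter.

```python
# Illustrative sketch of how hand-off chains degrade. The step counts and
# the per-step reliability (99 percent) are hypothetical assumptions,
# not data from the studies cited in this chapter.

def pathway_failure_probability(steps: int, step_reliability: float) -> float:
    """Probability that at least one of `steps` independent hand-offs fails."""
    return 1.0 - step_reliability ** steps

for steps in (1, 5, 10, 20):
    p_fail = pathway_failure_probability(steps, 0.99)
    print(f"{steps:2d} hand-offs -> {p_fail:.1%} chance of a defect")
```

Even at 99 percent reliability per hand-off, a ten-step pathway fails nearly one time in ten, which lands in the same parts-per-hundred range noted above.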
Most hospitals do not have defect-free standards for exchanges of information. Anita Tucker identified the best hospitals from her pool of 20 for a more detailed analysis of this problem. Her study showed that even at facilities renowned for the high quality of their nursing care, the work of a frontline caregiver is filled with interruptions and poor information flow. When she asked why health care workers “live this way,” she found that most of them actually expect the work system to be defective. Because problems often cross organizational boundaries or are so complex that no single person can hope to eliminate the root cause, workers expect to have to “work around” problems (Tucker et al., 2002).
In hospital after hospital, because no resources have been allocated for solving problems, health care workers confront the same problems every day. At this point, the health care system is incapable of fixing itself. This stands in stark contrast to a Toyota factory, where improvements are made continuously in the course of daily work, crossing organizational boundaries when necessary and escalating problems to the appropriate management level (Spear and Schmidhofer, 2005).
We did find some medical facilities that have designed systems to reduce defects, improve the work systems of frontline caregivers, and improve the patient experience. For example, we studied an eye surgery clinic in Boston with 18 top ophthalmic surgeons (Miguel and Bowen, 1997). One of the surgeons, Dr. Bradford Shingleton, was three times as productive as the other surgeons in terms of time spent performing similar surgeries. When Dr. Shingleton was designing his diagnostic and surgical procedures, he turned to the business literature for guidance. His service model is centered on the patient experience, from the first encounter through post-surgical follow-ups. In addition, he collects outcomes data much more rigorously than his colleagues do, as feedback for improving procedures and processes. He developed his own patient-scheduling algorithm to improve service and efficiency, scheduling simpler procedures earlier in the day to minimize disruptions and delays. He also eliminated unnecessary variability during surgery by standardizing procedures. For example, to reduce changeover time between surgeries, he maintains contact with the anesthesiologist prepping the next patient; in this way, he has been able to reduce the time between the administration of the drug and the beginning of surgery by as much as 50 percent. More important, as a result of his efficiency, his patients experience less surgical trauma, which speeds the healing process.
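The text does not spell out Dr. Shingleton’s actual scheduling algorithm. One plausible reading of “simpler procedures earlier in the day” is a least-variable-first heuristic, sketched below with invented case names, durations, and variability scores; it is an illustration of the idea, not the clinic’s real method.

```python
# Hypothetical sketch of a "simpler procedures first" scheduling heuristic.
# All case data here is invented for illustration.

from dataclasses import dataclass

@dataclass
class Case:
    patient: str
    expected_minutes: int  # estimated procedure time
    variability: int       # 1 (routine) .. 5 (highly unpredictable)

def schedule(cases: list[Case]) -> list[Case]:
    # Delays in early cases push back every later case, so running the
    # most predictable, shortest work first bounds how far a disruption
    # can propagate through the day.
    return sorted(cases, key=lambda c: (c.variability, c.expected_minutes))

day = [
    Case("A", 45, 4),  # complex, unpredictable
    Case("B", 20, 1),  # short, routine
    Case("C", 30, 1),
    Case("D", 60, 3),
]
for slot, case in enumerate(schedule(day), start=1):
    print(f"slot {slot}: patient {case.patient} ({case.expected_minutes} min)")
```

Under this heuristic the routine cases B and C run first and the complex case A runs last, so any overrun in A delays no one else.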
In a more recent study, we looked at Intermountain Health Care, where doctors, under the leadership of Dr. Brent James, have applied the entire quality-management concept to the hospital’s functions (Bohmer et al., 2002). The study focused on two intensive care units (ICUs) located next to each other in LDS Hospital in Salt Lake City (Tucker et al., in progress). We found that, even though the hospital had developed an overarching quality system, frontline care was administered differently in the two units. In addition to some structural differences, the medical directors of the two ICUs had different design models for operating their units. In one ICU, problem solving was more prevalent, especially root-cause elimination (much like Toyota’s TPS). This ICU also stressed patient-centered care: the number of admitting physicians was small; interns spent more time on the rotation; a nurse manager was available to assist in problem solving and problem prevention; and the unit developed and used more medical protocols. In the second ICU, the quality of care was also very high, but operations were more physician centered: because the unit served a different set of patients, there were more admitting physicians; by design, interns spent less time on this rotation; no nurse manager was available for problem solving; and the unit had fewer protocols and did not generate any of its own. To further learning at LDS Hospital, the two ICU medical directors have now exchanged positions, which should provide a wonderful natural test of how much the differences relate to design choices and how much to differences in patient mix, structure, and so on.
A recent study at the Pittsburgh Regional Health Initiative demonstrates what can be achieved with a systematic approach to redesigning work systems. In one study, the goal was to eliminate central-line-associated bloodstream infections using techniques like those practiced at Toyota. The implementation of simple but elegant tools and devices reduced the transmission of infections dramatically. In 2003, 37 patients in Allegheny General Hospital’s medical intensive care and cardiac critical care units suffered central-line-associated bloodstream infections, and 19 of them died. In 2004, there were six infected patients, one of whom died (Shannon et al., in progress).
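The scale of that improvement is straightforward to compute from the figures just cited:

```python
# Year-over-year change in central-line-associated bloodstream infections
# at Allegheny General Hospital, using the figures cited in the text
# (37 infections and 19 deaths in 2003; 6 infections and 1 death in 2004).

def percent_reduction(before: float, after: float) -> float:
    """Percentage drop from `before` to `after`."""
    return 100.0 * (before - after) / before

print(f"infections down {percent_reduction(37, 6):.0f}%")  # prints: infections down 84%
print(f"deaths down {percent_reduction(19, 1):.0f}%")      # prints: deaths down 95%
```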
Solutions to the health care problem are being offered from many directions. Our own suggestions are based on the perspectives of the patient and frontline caregiver. We can summarize what we learned through direct observation of how frontline caregivers do their work:
Most hospitals have evolved complex work systems that conspire against defect-free health care.
Caregivers have come up with “work-arounds” and other ineffective approaches to solving problems. Frontline workers spend a significant fraction of their time doing non-value-added work caused by fundamental failures in the design of work systems.
The delivery of patient-centered care by nurses and other frontline caregivers is limited under current work-system designs.
Systems approaches perfected by industrial corporations (e.g., Toyota’s TPS) appear to provide useful models for improving health care work systems.
The challenge for engineers and managers outside the health care system is to bring the lessons learned in other settings to clinics and hospitals.
Bates, D.W., D.L. Boyle, M.B. Vander Vliet, J. Schneider, and L. Leape. 1995. Relationship between medication errors and adverse drug events. Journal of General Internal Medicine 10(4): 199–205.
Bohmer, R., A.C. Edmondson, and L.R. Feldman. 2002. Intermountain Health Care. HBS Case No. 603-066. Cambridge, Mass.: Harvard Business School Publishing.
Miguel, M.F., and H.K. Bowen. 1997. Ophthalmic Consultants of Boston and Dr. Bradford J. Shingleton. HBS Case No. 697-080. Cambridge, Mass.: Harvard Business School Publishing.
Shannon, R.P., et al. In progress. Eliminating Central Line Infections in Two Intensive Care Units: Results of Real-time Investigation of Individual Problems. Harvard Business School Working Paper. Cambridge, Mass.: Harvard Business School Publishing.
Spear, S. 2001. Deaconess-Glover Hospital (C). HBS Case No. 602-028. Cambridge, Mass.: Harvard Business School Publishing.
Spear, S.J., and H.K. Bowen. 1999. Decoding the DNA of the Toyota Production System. Harvard Business Review (September–October): 96–106.
Spear, S., and J. Kenagy. 2000a. Deaconess-Glover Hospital (A). HBS Case No. 601-022. Cambridge, Mass.: Harvard Business School Publishing.
Spear, S., and J. Kenagy. 2000b. Deaconess-Glover Hospital (B). HBS Case No. 601-023. Cambridge, Mass.: Harvard Business School Publishing.
Spear, S.J., and M. Schmidhofer. 2005. Ambiguity and workarounds as contributors to medical error. Annals of Internal Medicine 142(8): 627–630.
Tucker, A.L. 2003. Organizational Learning from Operational Failures. Unpublished dissertation, Harvard University, Cambridge, Massachusetts.
Tucker, A., H.K. Bowen, and B.C. LaPierre. In progress. Quality Improvement in Intensive Care at LDS Hospital. HBS Case No. 604-071. Cambridge, Mass.: Harvard Business School Publishing.
Tucker, A.L., A.C. Edmondson, and S.J. Spear. 2002. When problem solving prevents organizational learning. Journal of Organizational Change Management 15(2): 122–137.