Creating and Sustaining a Culture of Safety
Employing a nursing workforce strong in numbers and capabilities and designing the work of nursing to prevent errors are critical patient safety defenses. Regardless of how strong and how well designed such measures may be, however, they will not by themselves fully safeguard patients. The largest and most capable workforce is still fallible, and the best-designed work processes are still designed by fallible individuals. Moreover, as discussed earlier, each introduction of new health care technology brings a host of unanticipated opportunities for errors. Thus, improving patient safety requires more than relying on the workforce and well-designed work processes; it requires an organizational commitment to vigilance for potential errors and the detection, analysis, and redressing of errors when they occur.
A variety of safety-conscious industries have made such a commitment and achieved lower rates of errors by doing so. These organizations place as high a priority on safety as they do on production; all employees are fully engaged in the process of detecting high-risk situations before an error occurs. Management is so responsive to employees’ detection of risk that it dedicates organizational resources—time, personnel, budget, and training—to bring about needed changes, often recommended by staff, to make work processes safer. Employees also are empowered to act in dangerous situations to reduce the likelihood of adverse events. The environment is fair and just: it appropriately recognizes the relative contributions of individuals and of systemic organizational features to errors, supports staff, and fosters continuous learning by the organization as a whole and its employees. These attitudes and employee engagement are so pervasive and observable in the
behaviors of such organizations and their employees that an actual culture of safety exists within the organization.
The Institute of Medicine (IOM) report To Err Is Human calls attention to the need to create such safety cultures within all health care organizations (HCOs) (IOM, 2000). The committee finds that while some progress has been made to this end, a safety culture is unlikely to reach its full potential without years of substantial commitment. The committee reaffirms the importance of the creation and maintenance of cultures of safety and recommends ongoing action by all HCOs to achieve this goal. Action also is needed from state boards of nursing and Congress to enable strong and effective cultures of safety to exist.
This chapter begins by reviewing the essential elements of an effective safety culture, and then addresses the need for a long-term commitment to create such a culture. Barriers to safety cultures found in nursing and external sources are examined next. The chapter then presents examples of the progress being made by some organizations in creating cultures of safety. The final section addresses the need for all HCOs to measure their progress in the creation of such cultures.
ESSENTIAL ELEMENTS OF AN EFFECTIVE SAFETY CULTURE
Conceptual models of organizational safety and empirical studies of organizations widely noted for low levels of errors and accidents (high safety) identify a number of structures and processes essential to effective cultures of safety. Cultures of safety result from the effective interplay of three organizational elements: (1) environmental structures and processes within the organization, (2) the attitudes and perceptions of workers, and (3) the safety-related behaviors of individuals (Cooper, 2000). Chapters 4 through 6 address the contributions of three major environmental structures and processes (i.e., managerial personnel practices, workforce capability, and work design) to patient safety. The focus here is on the safety management systems and psychological and behavioral readiness and ability of all workers necessary for the creation and maintenance of safety cultures.
Commitment of Leadership to Safety
The commitment of leadership to safety is critical to the development of a culture of safety within an organization (Carnino, undated; Manasse et al., 2002; Spath, 2000). Although management has the strongest ability to influence and unite all groups in the organization (by articulating values, reinforcing norms, and providing incentives for desired behaviors), this commitment is needed from all organizational leaders—governing boards and clinical leaders as well as management.
Words alone are an ineffective leadership tool. Leadership commitment must be expressed through actions observable to employees (Carnino, undated; Spath, 2000). Boards of directors can demonstrate this commitment by regular and close oversight of patient safety in the institutions they oversee (IOM, 2000). Leadership actions that management can take include the following:
Undergoing formal training to gain an understanding of safety culture concepts and practices (Carnino, undated).
Ensuring that safety is addressed as a priority in the strategic plans of the organization (Carnino, undated; Shrivastava, 1992).
Having facility-wide patient safety policies and procedures that delineate clear plans for supervisor responsibility and accountability and enable each employee to explain how his or her performance affects patient safety (Spath, 2000).
Regularly reviewing the safety policies of the organization to ensure their adequacy for current and anticipated circumstances (Carnino, undated).
Including safety as a priority item on the agenda for meetings (Carnino, undated).
Encouraging employees to have a questioning attitude on safety issues (Carnino, undated).
Having personal objectives for directly improving aspects of safety in managers’ areas of responsibility (Carnino, undated).
Monitoring safety trends to ensure that safety objectives are being achieved (Carnino, undated; Spath, 2000).
Taking a genuine interest in safety improvements and recognizing those who achieve them—not restricting interest to situations in which there is a safety problem (Carnino, undated).
Reviewing the safety status of the organization on a periodic (e.g., yearly) basis and identifying short- and long-term safety objectives (Pizzi et al., 2001; Spath, 2000).
Finally, leadership’s commitment to safety is evidenced by a willingness to direct resources for improved safety, as reflected in the organization’s budget (Pizzi et al., 2001; Shrivastava, 1992).
All Employees Empowered and Engaged in Ongoing Vigilance
Organizations with higher rates of accidents tend to believe that managers and system designers will anticipate potential problems in production
systems and to assume that workers will always perform in accordance with performance expectations. In contrast, high-reliability organizations and other organizations committed to a safety culture know that system designers, managers, and organizational planners, as well as workers “at the sharp end” (see Chapter 1), are fallible. They know that system designers and managers cannot plan for the infinite variations that can occur within work systems, and that bad things sometimes happen in spite of best efforts to design a “fail-safe” system. Consequently, organizations with a strong safety culture encourage all employees to be on the lookout for any odd or unusual events instead of assuming that the odd or unusual is insignificant (Roberts and Bea, 2001). While management may set the tone, responsibility for safety is acknowledged as the responsibility of all employees. In a safety culture, all who work within the organization are actively involved in identifying and resolving safety concerns and are empowered to take appropriate action to prevent an adverse event (Spath, 2000).
Creating such attitudes and behaviors in workers requires many of the same practices recommended in the preceding chapters—ongoing, effective, multidirectional communication; the adoption of nonhierarchical decision-making practices; empowerment of employees to adopt innovative practices to enhance patient safety; and a substantial commitment to employee training—as well as alignment of employee incentives and rewards to promote safety.
Communication must accomplish multiple goals. First, leadership needs to convince employees of the organization’s commitment to ensuring patient safety and to building a culture of safety. It can do so by the actions described above, but first and foremost by openly acknowledging to employees the high-risk, error-prone nature of the organization’s activities (Pizzi et al., 2001) and the need to make fundamental changes in organizational policies and procedures to reduce errors and risks to safety. On an ongoing basis, management must be open to problems and warnings detected by staff that indicate possible degradation of quality (Carnino, undated).
Moreover, in effective safety cultures, patterns of communication are not hierarchical. Hierarchical communication typically reflects an organization’s “authority gradient”—the interpersonal dynamics present in situations of real or perceived power (Manasse et al., 2002). Hierarchical lines of communication with steep authority gradients can negatively affect a safety culture. They often involve waiting for orders, unquestioning compliance with directives, and disincentives to questioning or relaying “bad news” up the chain of command. In contrast, in organizations with a strong
safety culture, communication is free and open up and down the chain of command and across organizational divisions. Regardless of rank or level of authority, staff are encouraged to speak up if they identify a risk or uncover an error. Workers feel empowered to report observed system or process vulnerabilities that could lead to an accident (Manasse et al., 2002).
Nonhierarchical Decision Making
Like communication patterns, decision making in organizations with a strong safety culture occurs at the lowest appropriate level. High-reliability organizations have malleable structures that allow them to expand and contract given the complexity and volatility of the task at hand. Within these expanding or contracting structures, authority migrates to the point in the organization at which specific expertise about the decision exists. Decision makers either work with or are the people who implement the decision. Portions of decisions often come together across groups or individuals within a group. This principle also is essential to the creation of “learning organizations” as described in Chapter 4.
In organizations with strong safety cultures, employees have permission and indeed are encouraged to engage in “constrained improvisation” (Moorman and Miner, 1998; Weick, 1993) when doing so furthers the goals of the organization. Employees typically improvise three things: tools, rules, and routines. Tools can be and often are used for doing things they were not designed to do; rules are bent in the interest of safety; and routines are altered when they do not work (Bigley and Roberts, 2001). There is an expectation of collaboration across ranks to seek solutions for risks and vulnerabilities as they arise. All employees believe they have the necessary authority and resources to rectify safety hazards as they are identified (Pizzi et al., 2001). It is important to note that, for an organization to be nimble enough to engage in this process appropriately, employees must have a great deal of training and experience.
Safety orientation and recurrent training are essential. Organizations that have fewer accidents teach their people how to recognize and respond to a variety of problems and empower them to act to this end. Staff are trained in safety practices, and education is used to motivate them to anticipate all types of adverse events, eradicate them when possible, and mitigate their effects if they cannot be prevented. When problems are identified,
retraining is available without penalty or stigma if safety is involved. Staff who operate equipment or new technology are trained in its use and can recognize maintenance problems and request timely maintenance (Pizzi et al., 2001).
Research on high-reliability organizations shows that they are better than other organizations at training their employees to look for anomalies and potential problems and, most important, to intervene when problems are detected. They also spend more money on training workers to recognize and respond to problems. For example, operators at Diablo Canyon Nuclear Power Plant work their regular shifts 3 weeks of every month. During the fourth week, they train for a wide range of unusual and potentially dangerous reactions. This training keeps them alert to all the things that can go wrong, and reinforces the idea that the organization is taking the likelihood of errors seriously and needs the ongoing vigilance and action of employees to detect errors before they can result in adverse events (Roberts and Bea, 2001).
Rewards and Incentives
In a culture of safety, people are rewarded for their involvement in safety improvements, whether as individuals or as members of safety improvement teams, safety committees, or participants in safety meetings (Carnino, undated; Spath, 2000). Recognition can be formal (e.g., salary increases and promotions based on staff performance criteria related to safety) or informal, but the value of safety permeates the organization’s reward system. Safety results are clearly displayed and rewarded at all levels (Pizzi et al., 2001).
Pay and reward systems have received a great deal of attention in the psychological and organizational literatures. It is well known, for example, that rewards and punishment function differently. Rewards convey information about performance the organization wants repeated. Punishment, on the other hand, conveys only information about what the organization does not want. Thus, the use of rewards is a powerful learning mechanism, whereas the use of punishment is less powerful unless it is followed up with information about what the organization desires.
The problem with rewards is that they often are not aligned with desired behavior. Attempts to improve the organization’s performance by modifying individual or group incentives often end up rewarding outcomes that actually worsen performance. Such misaligned incentives can undermine important behaviors. For example, rewarding increased productivity can reduce product or service quality, a phenomenon known as the “folly of rewarding A while hoping for B” (Kerr, 1975).
Organizational Learning from Errors and Near Misses
In organizations with strong safety cultures, all errors are considered learning opportunities. Any event related to safety, especially a human or organizational error, is viewed as a valuable opportunity to improve the safety of operations through feedback. High-reliability organizations use accident analysis to:
Build organizational memory of what happened and why.
Develop an understanding of accidents that can happen in that particular organization.
Communicate organizational concern about accidents to reinforce the cultural values of safety.
Identify parts of the system that should have redundancies (Roberts and Bea, 2001).
This attitude toward safety is one of the hallmarks of knowledge management and is possessed by “learning organizations” (DeLong and Fahey, 2000) as described in Chapter 4. Learning in this way requires a fair and just system for reporting near misses as well as errors, analysis of reported events, and feedback.
Confidential Error Reporting and Fair and Just Responses to Reported Errors
Trust is a critical factor in developing an effective error-reporting system (Manasse et al., 2002). Evidence indicates that approximately three of every four errors are detected by those committing them, as opposed to being detected by an environmental cue or another person (Reason, 1990). Therefore, employees need to be able to trust that they can fully report errors—particularly human errors—without fear of being wrongfully blamed, thereby providing the opportunity to learn how to further improve the process (Spath, 2000). This point cannot be overemphasized. When such reporting has been introduced in health care work environments, reporting of errors and near misses has increased dramatically, and improvements in the safety of care delivery have been enabled (Tracy, 1999). Just 16 months after the piloting of the Veterans Administration’s (VA) Patient Safety Improvement Initiative, the VA observed a 30-fold increase in reported events and a 900-fold increase in reported near misses for events designated as “high priority.” This increase was attributed, in part, to the VA’s emphasis on a nonpunitive approach to error reporting (Bagian et al., 2001). Examination of error-reporting systems in 25 nonmedical industries found immunity from reprisals to be one of three factors important in determining the quality of incident reports and the success of incident-reporting systems. The other two were confidentiality or data deidentification on reports of errors—making the reported data untraceable to caregivers, patients, or institutions—and ease of reporting (Barach and Small, 2000).
Without an understanding and acceptance of human fallibility, personal shame also is a disincentive to error reporting. In a pretest of a survey of employee attitudes at five VA facilities prior to implementation of a near-miss reporting system, 49 percent of the 87 respondents admitted “they are ashamed when they make a mistake in front of others.” Those who reported feeling ashamed also were more likely to report that they did not tell others about their mistakes (Augustine et al., 1999).
To counteract “blame and shame,” the personnel or team involved in an error should be encouraged to propose corrective and preventive measures (Carnino, undated). A fair and just environment extends beyond the attitudes and behaviors of management to those of coworkers. Training in the underlying concepts and principles of human error also can help counteract judgmental attitudes about peers who report errors, which is essential to prepare the work environment for the more fundamental, critical relationship changes that must be employed to ensure long-term safety (Jones, 2002; Pizzi et al., 2001).
The most obvious way to ensure the confidentiality of data on reported errors is to have reports filed anonymously. This approach has drawbacks as well as benefits. In some situations, it may be difficult to guarantee anonymity. When reports are filed anonymously, analysts cannot contact the reporters for more information. Anonymous reports also may be unreliable. Moreover, anonymity is susceptible to criticism that it threatens accountability and the transparency of health care. Despite these drawbacks, some experts studying reporting systems in a variety of industries conclude that it may be important to provide anonymity early in the evolution of an incident-reporting system, at least until trust has been built and reporters see practical results (Barach and Small, 2000). Another strategy, employed by the Federal Aviation Administration’s error-reporting system, is to deidentify the data after they have been reported.
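The deidentification strategy described above can be sketched in a few lines of code. This is an illustrative assumption, not a description of any actual reporting system: the field names, the report structure, and the rule that narrative text is retained while identifiers are dropped are all hypothetical, and a real system would also need to scrub identifying details embedded in the narrative itself.

```python
# Illustrative sketch of post-hoc deidentification of an incident report.
# All field names are hypothetical.

IDENTIFYING_FIELDS = {"reporter_name", "patient_id", "unit", "facility"}

def deidentify(report: dict) -> dict:
    """Return a copy of the report with identifying fields removed."""
    return {k: v for k, v in report.items() if k not in IDENTIFYING_FIELDS}

report = {
    "reporter_name": "J. Smith",
    "patient_id": "12345",
    "unit": "ICU-3",
    "event_type": "near miss",
    "narrative": "Wrong infusion rate programmed; caught at shift change.",
}

print(deidentify(report))
```

The point of the sketch is the design choice it embodies: the analytically useful content (event type, narrative) survives, while the data become untraceable to the caregiver, patient, or institution.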
Reporting Near Misses as Well as Errors
Experts who have studied accident- and error-reporting systems also assert the benefits of reporting not just errors and accidents that have occurred, but near misses as well (Bagian and Gosbee, 2000; Barach and Small, 2000). A near miss is an event that could have had adverse consequences but did not; it is indistinguishable from a full-fledged adverse event in all but outcome. Examples of near misses are a nurse giving a patient an incorrect medication from which the patient suffered no adverse consequences, and a nurse programming the wrong rate of flow for an intravenous infusion, with the error detected by the nurse taking over care of the patient so that again the patient suffers no adverse consequences.
Near misses offer effective reminders of system hazards and help counteract the tendency to forget to be afraid. For example, data on near misses in aviation have been used effectively to redesign aircraft, air traffic control systems, airports, and pilot training programs (Barach and Small, 2000). Reports of near misses are also likely to be more candid than error reports, and provide the opportunity to learn without first having to experience an adverse event (Bagian and Gosbee, 2000). Collecting and analyzing data on near misses offers several other advantages: (1) near misses occur 3–300 times more often than adverse events, offering the opportunity for more powerful quantitative analysis; (2) there are fewer barriers (e.g., shame and fear of reprisals) to data collection; (3) recovery strategies can be studied to assist in developing effective defense mechanisms to prevent future errors; and (4) hindsight bias is reduced (Barach and Small, 2000).
Data Analysis and Feedback
Once errors and near misses have been reported, the organization needs to have procedures for analyzing the data and feeding back the results to reporters. The use of root-cause analysis (described in Chapter 6) and the existence of a corrective action program are positive indications of a good safety culture (Carnino, undated; Spath, 2000). Injury-producing incidents and significant near misses are investigated for their root causes, and effective preventive actions are taken (Pizzi et al., 2001). Such research and analysis should not be considered luxuries, but essential to the effective design of safe systems of care because analysis provides the information needed for preventive measures (IOM, 2000).
An examination of error-reporting systems in 25 nonmedical industries found that independent outsourcing of report collection and analysis to peer experts and the provision of rapid, meaningful feedback to reporters and all interested parties are important in determining the quality of error reports and the success of error-reporting systems (Barach and Small, 2000). Staff should be given timely feedback on the results of analysis of their reports, and told how the data were used to improve systems and prevent future errors.
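As a rough illustration of the kind of aggregate feedback such analysis can produce, the sketch below tallies reports by the contributing-factor category assigned during analysis and prints a summary that could be fed back to reporters. The report records, category labels, and counts are invented for illustration; they do not represent any actual reporting system or data set.

```python
from collections import Counter

# Hypothetical reported events, each tagged during analysis with a
# contributing-factor category. Data are invented for illustration.
reports = [
    {"type": "near miss", "factor": "look-alike packaging"},
    {"type": "near miss", "factor": "interruption during preparation"},
    {"type": "adverse event", "factor": "look-alike packaging"},
    {"type": "near miss", "factor": "look-alike packaging"},
]

# Tally reports by contributing factor, most frequent first.
by_factor = Counter(r["factor"] for r in reports)

# Feedback summary for staff: which system vulnerabilities recur most.
for factor, count in by_factor.most_common():
    print(f"{factor}: {count} report(s)")
```

Even this trivial aggregation reflects the principle in the text: the purpose of analysis is to surface recurring system vulnerabilities and return that information to the people who reported them, not to count events for their own sake.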
Overall Features of an Effective Error-Reporting System
The above characteristics underscore the recommendations of an Expert Advisory Panel on Patient Safety System Design convened by the Veterans Administration to identify and examine alternative procedures for internal reporting and reviewing of adverse events (Bagian and Gosbee, 2000). These experts conclude that, to be effective, any internal organizational reporting system needs to possess the following features:
First and foremost, it should not be perceived as part of a punitive system; people will not openly report to a system if they believe doing so could result in punitive action.
Confidentiality mechanisms should be in place so people will be confident that they will not be placing themselves in jeopardy by reporting.
The reporting mechanism should stress capturing a description of what happened through the use of narratives, not just “checking off boxes” in a structured format.
The reports need to be analyzed by people who have practical, hands-on knowledge of the subject matter under consideration. Moreover, that analysis should be performed by more than one individual, since a fresh eye often produces more meaningful results.
Voluntary rather than mandatory reporting systems are more likely to uncover events because they reduce the disincentive of fear of punishment.
The reporting system should not be a “counting effort” because there will always be underreporting for a host of reasons. The experts note that the real purpose of a reporting system is to ferret out and correct vulnerabilities, not to count them. This point is particularly important because of the potential misconception that increased reporting represents increased danger.
Timely and appropriate feedback to reporters is essential to the ongoing trust and effectiveness of the reporting system.
To Err Is Human reiterates that reporting systems within organizations should be voluntary and confidential, have minimal restrictions on acceptable content, include descriptive accounts and stories, and be accessible for contributions from all clinical and administrative staff (IOM, 2000).
NEED FOR A LONG-TERM COMMITMENT TO CREATE A CULTURE OF SAFETY
Instituting the structures and processes described above requires changes in attitudes, beliefs, and behaviors. It is not easily accomplished. Some have estimated that it can take 5 years to develop a culture of safety that permeates the entire organization (Manasse et al., 2002).
The International Atomic Energy Agency, which has monitored and
studied safety and cultures of safety across many countries’ nuclear energy installations, likewise observes that cultures of safety develop over time. Their development occurs in three stages (Carnino, undated):
Stage 1—Safety management is based on rules and regulations.
Stage 2—Good safety performance becomes an organizational goal.
Stage 3—Safety performance is seen as dynamic and continuously improving.
In Stage 1, the organization sees safety as an external requirement imposed by governmental or other regulatory bodies. There is little awareness of the behavioral and attitudinal aspects of safety; safety is viewed primarily as a technical issue. Mere compliance with rules and regulations is considered adequate, and the following characteristics may be observed:
Problems are not anticipated; the organization reacts to them as they occur.
Communication between departments is poor.
Departments and functions behave as semiautonomous units, evidencing little collaboration and shared decision making.
The decisions taken by departments and functions focus on little more than the need to comply with rules.
People who make mistakes are simply blamed for their failure to comply with the rules.
Conflicts are not resolved; departments and functions compete with one another.
The role of management is perceived as endorsing the rules, pushing employees, and expecting results.
Little listening or learning occurs within or outside of the organization, which adopts a defensive posture when criticized.
Safety is viewed as a required nuisance.
Regulators, customers, suppliers, or contractors are treated cautiously or in an adversarial manner.
Short-term profits are regarded as all-important.
People are viewed as “system components”; they are defined and valued solely in terms of what they do.
An adversarial relationship exists between management and employees.
There is little or no awareness of work processes.
People are rewarded for obedience and results, regardless of long-term consequences.
In Stage 2, good safety performance becomes an organizational goal,
perceived by management as important even in the absence of regulatory pressure. Although there is a growing awareness of behavioral issues, this aspect is largely missing from the safety management methods employed, which comprise technical and procedural solutions. Safety performance is addressed, like other aspects of the business, in terms of targets or goals. The organization begins to examine the reasons why safety performance reaches a plateau, and is willing to seek the advice of other organizations. In this stage, the following characteristics may be observed:
The organization concentrates primarily on day-to-day matters; there is little in the way of strategy.
Management encourages cross-departmental and cross-functional teams and communication.
Senior managers function as a team and begin to coordinate departmental and functional decisions.
Decisions are often centered on cost and function.
Management’s response to mistakes is to institute more controls through procedures and retraining; somewhat less blaming occurs.
Conflict is disturbing and discouraged in the name of teamwork.
The role of management is perceived as applying management techniques, such as management by objectives.
The organization is somewhat open to learning from other companies, especially with regard to techniques and best practices.
Safety and productivity are viewed as detracting from one another; safety is perceived as increasing costs and reducing production.
The organization’s relationships with regulators, customers, suppliers, and contractors are distant rather than close, reflecting a cautious approach whereby trust must be earned.
It is important to meet or exceed short-term profit goals. People are rewarded for exceeding goals regardless of the long-term results or consequences.
The relationship between employees and management is still adversarial, with little trust or respect demonstrated.
There is growing awareness of the impact of cultural issues in the workplace. People do not understand why added controls fail to yield the expected results in safety performance.
In this stage, the organization establishes a vision of the desired safety culture and communicates it throughout the organization. A systematic effort is made to gather input regarding the culture’s strengths and weaknesses. The organization develops a strategy for realizing desired changes by allocating budgetary resources, personnel, training, and time to the program; implementing the strategy; and holding people accountable for meeting objectives.
In Stage 3, safety performance is viewed as dynamic and always amenable to improvement. The organization has adopted the idea of continuous improvement and has applied the concept to safety. There is a strong emphasis on communication, training, management style, and improving efficiency and effectiveness. Everyone in the organization can contribute. Some behaviors and attitudes are understood to either enable or obstruct safety. Consequently, the level of awareness of behavioral and attitudinal issues is high, and measures are taken to effect improvements in these areas. Progress is made one step at a time and never stops. The organization also asks how it might help other companies. In this stage, the following characteristics may be observed:
The organization begins to act strategically, with a focus on the longer term as well as an awareness of the present. It anticipates problems and deals with their causes before they occur.
People recognize and state the need for collaboration among departments and functions. They receive management support, recognition, and the resources they need for collaborative work.
People are aware of work or business processes in the company and help managers manage them.
Decisions are made with full knowledge of their safety impact on work or business processes, as well as on departments and functions.
There is no goal conflict between safety and production performance, so safety is not jeopardized in pursuit of production targets.
Almost all mistakes are viewed in terms of variability in work processes. The important thing is to understand what has happened rather than to find someone to blame. This understanding is used to modify the processes as necessary to avoid similar errors in the future.
The existence of conflict is recognized and addressed through an attempt to create mutually beneficial solutions.
Management’s role is perceived as coaching people to improve business performance.
Learning from other sources both within and outside of the organization is valued. Time is made available for the purpose and devoted to adapting such knowledge to improve business performance.
Safety and production are viewed as interdependent.
Collaborative relationships are developed between the organization and regulators, suppliers, customers, and contractors.
Short-term performance is measured and analyzed so changes can be made to improve long-term performance.
People are respected and valued for their contributions.
The relationship between management and employees is respectful and supportive.
Awareness of the impact of cultural issues is reflected in key decisions. The organization rewards not just those who produce, but also those who support the work of others. People are rewarded for improving processes as well as results.
The characteristics of all three stages can serve organizations as a basis for self-diagnosis. They can also be used by an organization to give direction to its development of a safety culture by identifying its current position and the position to which it aspires. It should be noted that an organization at any given point in time may exhibit a combination of the characteristics listed under each stage and that different departments or other components of an organization may be at different stages.
The time required for an organization to progress through the three stages cannot be predicted. Much will depend on the circumstances of an individual organization and the commitment and effort it is prepared to devote to effecting change. However, sufficient time must be taken in each stage to allow the benefits from changed practices to be realized and to mature. People must be prepared for such change. Too many new initiatives in a relatively short period of time can be organizationally destabilizing. The important point to note is that any organization interested in improving its safety culture should start and not be deterred by the fact that progress will be gradual (Carnino, undated). HCOs should also expect to face a number of barriers unique to health care and the work environment of nurses.
BARRIERS TO EFFECTIVE SAFETY CULTURES FROM NURSING AND EXTERNAL SOURCES
As HCOs undertake the creation of a culture of safety, they must dedicate the internal personnel and other resources required to effect the needed changes. They must also overcome two barriers if they are to achieve the maximum benefit from their efforts: one that originates in the nursing profession (and also is found among other health professionals) and one that is found in the external legal/regulatory environment.
A Nursing Culture That Fosters Unrealistic Expectations of Clinical Perfection
Nurses are trained to believe that clinical perfection is an attainable goal (Jones, 2002) and that “good” nurses do not make errors (Banister et
al., 1996). Like the general public, they perceive errors to be due to carelessness, inattention, indifference, or uninformed decisions. Requiring high standards of performance for nurses is both appropriate and desirable, but becomes counterproductive when it creates an expectation of perfection. Because they regard clinical perfection as a professional goal, nurses feel shame when they make an error (Leape, 1994), which in turn creates pressure to hide or cover up errors (Osborne et al., 1999; Wakefield et al., 1996) (see the example presented at the beginning of Chapter 1).
It is difficult to transform thinking associated with the blame and shame mentality (Banister et al., 1996; Manasse et al., 2002). In a study conducted to assess safety culture transformation over time at six VA medical centers, the first change noted was the realization that errors are the result of a systemic rather than an individual problem. Within a year, health care providers were reporting that they would not think less of coworkers who made errors. One of the last changes to occur was that providers did not think worse of themselves when an error occurred.1 Such a transformation requires extensive education and training, as well as support at all levels of the organization.
Litigation and Regulatory Barriers
Unfortunately, regulatory boards and litigation practices reinforce the myth of clinical perfection, as illustrated by the two cases presented in Box 7-1.
These two cases (Cook et al., 2000; Grant, 1999; Knox, 2000; Schneider, 1999; Senders, 1999) illustrate a persisting focus on individuals rather than systems as the sources of error among licensing boards in medicine, nursing, and pharmacy; regulatory bodies, such as health departments; and sometimes the judicial system (Grant, 1999; Manasse et al., 2002). Malpractice litigation reinforces this perception. One result is that the consequences for the nurses involved in these and similar adverse events, who were fined, fired, sued, or otherwise punished (Serembus et al., 2001; Sexton, 1995), create serious disincentives to the disclosure of errors or near misses by nurses and other health professionals. The threat of legal liability is a strong barrier both to the voluntary reporting of errors (Schneider, 1999) and to the design of measures to prevent additional errors in the future.
The IOM report To Err Is Human speaks directly to these disincentives and identifies two steps HCOs can take to counteract them when designing
their internal error-reporting systems: pledging the confidentiality of the reporter and the information contained in the report, and obtaining and maintaining data in a manner that prevents identification of the reporter or the specific event even if access to the report is obtained. This latter strategy can be pursued by adopting anonymous reporting of adverse events and near misses, or by deidentifying information once reported, as discussed earlier (IOM, 2000).
To Err Is Human also cites the need for federal legislation to extend peer review protections to data collected and analyzed by HCOs for purposes of improving safety and quality. It notes that all but one of the states have passed such legislation, but that these state laws vary in scope and strength. Federal legislation could remedy this situation, providing uniform national protection for the creation of cultures of safety in HCOs (IOM, 2000). This concept also has been endorsed by the Medicare Payment Advisory Commission, which has recommended that Congress enact legislation to protect the confidentiality of individually identifiable information relating to errors in health care delivery when that information is reported for quality improvement purposes (Medicare Payment Advisory Commission, 1999). Australia and New Zealand currently offer such legal protection for reporters of health care errors (Barach and Small, 2000).
To Err Is Human suggests that a combination of federal legislation and internal protections for the reporter and reported data is best. Each alone is imperfect, but together they offer stronger assurance of confidentiality (IOM, 2000).
Another strategy is being tested in the United Kingdom to guide decisions regarding culpability in unsafe acts in which health care professionals are involved. An “incident decision tree,” based on the decision tree in Reason’s (1997) publication Managing the Risks of Organizational Accidents (see Figure 7-1), has been developed to help health care organizations to discriminate fairly and justly among willful acts of wrongdoing, inadvertent human error, and system contributions to error. Doing so can help a health care organization determine the appropriate response to an error in which an individual “at the sharp end” is involved.
The incident decision tree is based on the premise that while the vast majority of unsafe acts involve “honest” or nonculpable errors, a small minority of individuals commit reckless unsafe acts, and will continue to do so if left unchecked. To create a just culture, it is necessary to reach a collective agreement on where the line should be drawn between acceptable and unacceptable errors. To assist in differentiating between blameworthy and blameless acts, the model incorporates the following “substitution” test:
Substitute the individual involved in the adverse event or near miss with another individual possessing comparable qualifications and experience. Then ask the following question: “In light of how events unfolded and were perceived by those involved in real time, is it likely that this new individual would have behaved any differently?”

If the answer is “probably not,” blaming the individual at the sharp end of the error is an inappropriate response. Similarly, blame is not assigned to the individual if his/her peers respond “probably not” to the question “Given the circumstances that prevailed at that time, could you be sure that you would not have committed the same or similar type of unsafe act?” (Reason, 1997:208).

Ensuring confidential reporting of errors, using fair and just procedures for assessing causation, and extending peer review protections to data collected by HCOs together can reduce the disincentives to error reporting that thwart the detection and prevention of error-producing situations.

BOX 7-1

In the investigation of a widely publicized fatal chemotherapy overdose in 1994 of a health columnist for the Boston Globe, the state department of public health, the Joint Commission on Accreditation of Healthcare Organizations, and the National Institutes of Health found no fault on the part of any of the nurses involved in the administration of the medication. The nurses had checked the patient’s name, medication, dosage and route, frequency of administration, and specific directions for administration against the physician’s order. The investigation of the incident revealed that the error had been caused by a number of system problems, including that the patient was being treated according to an investigational protocol and that the only reference source for confirming the drug, dosage, frequency, and route (apart from the physician’s order) was the protocol document itself. Moreover, the research protocol document was found to be flawed and confirmed the physician’s mistaken order. Finally, there was no computerized pharmacy check system. Despite these findings, 4 years after the overdose, the state board of nursing proposed sanctions against the nurses involved (Grant, 1999) and subsequently reprimanded or placed on probation 16 nurses involved in the administration of the medication (Knox, 2000).

In 1996, a woman with a prenatal history of syphilis gave birth to a baby boy. Because of the absence of documentation of treatment for syphilis, neonatology and pediatric infectious disease experts agreed that the baby should be treated with a dose of penicillin G, 150,000 units/kg by intramuscular (IM) injection. In preparing the medication, the pharmacist misread the dose and, because the hospital had no unit dose system, sent the medication to the nursing unit in two syringes containing more medication than the (miscalculated) dose.

Because the volume of the medication would have required the baby to receive five IM injections, the nurses investigated the possibility of giving the drug intravenously. The three nurses (one who cared for both mother and baby, one providing more intensive care to the baby, and a neonatal nurse practitioner) consulted a drug reference book, which indicated that the drug could be given by slow intravenous push. The nurses did not know that there were two different forms of the penicillin: aqueous and viscous. Aqueous penicillin can be given intravenously; viscous penicillin must be given IM. When the dose was administered, the baby suffered a cardiac arrest and died (Schneider, 1999).

This event resulted in indictments against all three nurses on charges of criminal negligence. At trial, the Institute for Safe Medication Practices presented its analysis of the incident showing more than 50 different system failures contributing to the error, including that the drug was seldom used; the pharmacist had miscalculated the dose; the hospital lacked a unit dose medication dispensing system, which required the pharmacist to dispense an amount even greater than the miscalculated dose; the manufacturer’s warning on the syringe that it was for IM administration only was difficult to see; and available reference materials were ambiguous about acceptable routes of administration (Cook et al., 2000).
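The sequence of culpability questions described above can be made concrete in code. The sketch below is a highly simplified illustration of the logic behind the substitution test (willful act, knowing violation, then the substitution question), not the actual U.K. incident decision tree, which contains many more questions and is applied by trained reviewers; the function name and return labels are invented for this example.

```python
# Simplified sketch of the culpability logic behind the incident
# decision tree; all names and labels are illustrative, not the NHS tool.

def assess_culpability(intentional_harm: bool,
                       knowing_violation: bool,
                       peer_would_act_same: bool) -> str:
    """Classify an unsafe act along the lines of Reason's questions.

    intentional_harm: was harm deliberately intended (sabotage)?
    knowing_violation: did the person knowingly break a safe procedure?
    peer_would_act_same: would a comparably qualified peer, seeing events
        as they unfolded in real time, likely have behaved the same way?
    """
    if intentional_harm:
        return "culpable: willful act"
    if knowing_violation:
        return "possibly culpable: review circumstances"
    if peer_would_act_same:
        # Substitution test passed: the problem lies in the system.
        return "blameless: system-induced error"
    # Inadvertent error a peer would likely have avoided: look at
    # selection, training, and supervision before assigning blame.
    return "review individual factors (training, supervision)"
```

In the two cases in Box 7-1, the nurses' peers would almost certainly have answered "probably not" to the substitution question, which is why a just-culture analysis points to the system rather than the individuals.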
PROGRESS IN CREATING CULTURES OF SAFETY
JCAHO provided a key stimulus for the creation of cultures of safety in HCOs when in 2001 it adopted new patient safety accreditation standards for health care facilities. These standards encourage the development of cultures of safety by, in part, requiring HCO leaders to ensure implementation of an integrated patient safety program throughout the organization (Standard LD.5) (JCAHO, 2003a). In 2003, JCAHO also began requiring accredited organizations to meet annually specified patient safety goals. Each year the goals and associated recommendations will be reevaluated to determine whether they should be continued or replaced (JCAHO, 2003b).
Some HCOs have made great strides in creating cultures of safety. Two examples are described below.
Good Samaritan Hospital, Dayton, Ohio
In early 2000, Good Samaritan Hospital’s (GSH) Vice President of Clinical Effectiveness and Performance Improvement, an early champion of patient safety, began an initiative to create a culture of safety within the hospital. This leader, as well as the director of the hospital’s Center of Outcomes Research and Clinical Effectiveness, presented to the quality committee of the hospital’s board of trustees a summary of the significance of patient safety and recommendations to institutionalize the vision that safety is essential to the hospital’s mission. The vice president also engaged in one-on-one and group discussions with hospital leaders on the importance and benefits of being a safety-reliable organization. Through this consensus process, key hospital leaders committed their support to and agreed to guide the initiative.
To implement this initiative, GSH modified organizational structures and committed resources to the effort. Rather than assigning the project to a preexisting committee, it added a new Safety Board to its administrative infrastructure. This board is composed of physician, nursing, and administrative leaders, including the chief executive officer (CEO) and hospital communications staff. It serves as an oversight body to ensure the advancement of the safety program and to create the policies and procedures needed to implement the program. The Safety Board is also responsible for medical management, risk management, and quality management.
GSH adopted three early aims for its initiative:
Demonstrate that patient safety is a top leadership priority.
Promote a nonpunitive culture for sharing information and lessons learned.
Implement an integrated patient safety program throughout the organization.
GSH evaluates its progress in meeting these aims on a bimonthly basis using a self-assessment tool adapted from one developed as part of a Voluntary Hospital Association collaborative. The Safety Board formulated criteria for each aim. Specific actions undertaken by GSH to achieve these aims have included the following:
Educational programming for all hospital staff on sentinel events, root-cause analysis, incident reporting, the hospital’s safety initiative, and the roles of all employees in patient safety.
Initiation of an incident and near miss reporting system, supported by automated database software to facilitate the tracking, aggregation, and analysis of incident data.
Creation of a policy to forego corrective action against an employee if an error is reported within 48 hours.
Initiation of specific safety improvement projects in such areas as medication safety, blood transfusions, and the transport of critically ill patients.
Participation in state and local initiatives in patient safety, which has led to the exchange of ideas for improved practice and research.
As a result of this initiative, the hospital has experienced a significant increase in reported errors that has guided system improvements. It further reports a growing awareness that “committing resources to support patient safety initiatives is not in conflict with cost-effective practices. Eliminating rework and errors reduces the cost of providing care and the costs of resolving litigation…” (Wong et al., 2002:372).
Kaiser Permanente
Kaiser Permanente, the largest not-for-profit health maintenance organization (HMO) in the United States, undertook the creation of a culture of safety throughout the organization as part of a Patient Safety Plan initiated in 2001. This initiative is aimed at:
Creating a strong patient safety culture, with patient safety embraced as a shared value.
Creating an environment that encourages responsible reporting of near misses and errors and that focuses on fixing systems and not assigning blame.
Implementing strategies for improvement in patient safety performance.
Identifying, sharing, and implementing best practices from other parts of the organization and other industries.
Providing routine patient safety and error prevention training and education for individuals and groups.
Developing new knowledge and understanding of safety in the delivery system.
Identifying, assessing, and implementing indicators and measures of safety.
The above activities are focused on instituting the following six strategic themes:
Safe culture—Creating and maintaining a strong patient safety culture, with patient safety and error reduction embraced as shared organizational values.
Safe care—Ensuring that the actual and potential hazards associated with high-risk procedures, processes, and patient care populations are identified, assessed, and controlled in a way that demonstrates continuous improvement and ultimately ensures that patients are free from accidental injury or illness.
Safe staff—Ensuring that staff possess the knowledge and competence to perform required duties safely and improve system safety performance.
Safe support systems—Identifying, implementing, and maintaining support systems—including knowledge-sharing networks and systems for responsible reporting—that provide the right information to the right people at the right time.
Safe place—Designing, constructing, operating, and maintaining the environment of health care to enhance its efficiency and effectiveness.
Safe patients—Engaging patients and their families, as appropriate, in reducing medical errors, improving overall system safety performance, and maintaining trust and respect.
Kaiser Permanente formed an internal National Patient Safety Advisory Board to guide this initiative, provide a forum for information sharing, and help integrate safety into the fabric of the organization. Membership includes a representative of Kaiser Permanente’s labor–management partnership with the Coalition of Kaiser Permanente Unions. Kaiser Permanente has engaged and educated its labor partners in patient safety through a number of mechanisms, including their participation in patient safety executive walkarounds. In a survey of unit personnel at one facility 6 months following visits from senior executives, 90 percent of respondents stated that things related to patient safety were being done differently, 44 percent indicated that their reporting or discussion of errors and near misses had increased, and 90 percent indicated that they had a better understanding of patient safety. Human factors training and projects have also been launched in the medical center operating room, neonatal intensive care unit, perinatal units, and emergency department to integrate human factors into the provision of care. A National Patient Safety website is available to all Kaiser Permanente employees to increase their knowledge about patient safety.
NEED FOR ALL HCOS TO MEASURE THEIR PROGRESS IN CREATING CULTURES OF SAFETY
As discussed in Chapter 4, achieving any systemic organizational change is not easy. Objective measurement and feedback is needed to manage planned change successfully, and efforts to create cultures of safety are no exception. To this end, initial baseline assessment of each organization’s
safety culture and ongoing measurement of its progress in achieving the desired cultural shift are required.
Benchmarking Organizational Safety Culture
A number of health care organizations have surveyed themselves to benchmark their culture-of-safety status (Pizzi et al., 2001), using a variety of surveys and checklists that assess the attitudes and perceptions of workers (Cooper, 2000; Pizzi et al., 2001; Spath, 2000). The Agency for Healthcare Research and Quality and the federal government’s Quality Interagency Coordination Task Force are developing a public-domain instrument for assessing issues of patient safety, medical error, and event reporting as they relate to an organization’s safety culture. This instrument—the Hospital Survey on Patient Safety—is in the final stages of testing and validation and is scheduled to be available in the public domain early in 2004. It will allow health care institutions to understand the varying safety cultures within their own institutions, how staff view the commission of errors and error reporting, and the extent to which staff perceive the institution to be a safe place for patients.2
By themselves, however, surveys of the safety climate (i.e., the aggregation of individuals’ attitudes and perceptions about safety) within organizations are believed to be inadequate for evaluating the extent to which a culture of safety has been created (Cooper, 2000). While appraisal of the products or outcomes of the safety culture in operating organizations is a challenge, measurable indicators of the culture’s effectiveness are viewed as essential (Cooper, 2000; Spath, 2000).
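The distinction between climate and culture drawn here can be made concrete: a climate survey simply aggregates individual attitude scores, for example rolling Likert-scale responses up to the unit level. The sketch below illustrates that aggregation with entirely hypothetical data; it does not reflect the items or scoring of the AHRQ instrument.

```python
# Sketch: aggregating individual safety-climate responses (e.g., 1-5
# Likert scores) into unit-level means. Units and scores are hypothetical.
from collections import defaultdict
from statistics import mean

def climate_by_unit(responses):
    """responses: iterable of (unit, score) pairs -> {unit: mean score}."""
    by_unit = defaultdict(list)
    for unit, score in responses:
        by_unit[unit].append(score)
    return {unit: mean(scores) for unit, scores in by_unit.items()}

sample = [("ICU", 4), ("ICU", 3), ("Med-Surg", 2), ("Med-Surg", 3)]
# climate_by_unit(sample) -> {"ICU": 3.5, "Med-Surg": 2.5}
```

Such unit-level averages capture perceptions at one point in time, which is precisely why they need to be paired with the outcome-oriented indicators discussed next.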
Although measuring the incidence rate of accidents and other adverse safety-related events as patient safety indicators may appear straightforward, it has serious drawbacks. Negative indicators can be demoralizing to employees, as well as misleading. Reported numbers of errors can decline for reasons having little to do with safety, such as underreporting resulting from other organizational incentives (e.g., production incentives) (Cooper, 2000).
In contrast, positive measures of the observable degree of effort expended by organizational members have been identified as a more effective approach to measuring the degree to which an organization has implemented a safety culture. These measures include the degree to which organization members report unsafe conditions and the speed with which the organization initiates remedial actions (Cooper, 2000). Other possible indicators include the following (Carnino, undated):
Percentage of employees who have received safety refresher training during the previous month/quarter.
Percentage of safety improvement proposals implemented during the previous month/quarter.
Percentage of improvement teams involved in determining solutions to safety-related problems.
Percentage of employee communication briefs that include safety information.
Number of safety inspections conducted by senior managers during the previous week/month.
Percentage of employee suggestions that relate to safety improvement.
Percentage of routine organizational meetings with safety as an agenda item.
The value of positive safety indicators is that they serve as a mechanism for recognizing employees who are endeavoring to improve safety by their thoughts, actions, or commitment. Recognition of achievement is a powerful motivating force to encourage continued improvement (Carnino, undated).
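The positive indicators listed above are simple ratios over routine operational records. A minimal sketch, with all counts and field names hypothetical, might compute a few of them as follows:

```python
# Sketch computing a few of the positive safety indicators listed above
# from hypothetical operational records; all values are illustrative.

def pct(part: int, whole: int) -> float:
    """Percentage, guarding against an empty denominator."""
    return 100.0 * part / whole if whole else 0.0

employees_total = 120
employees_refreshed = 90          # received safety refresher training this quarter
proposals_made = 40
proposals_implemented = 28        # safety improvement proposals acted on
meetings_total = 50
meetings_with_safety_item = 45    # routine meetings with safety on the agenda

indicators = {
    "refresher_training_pct": pct(employees_refreshed, employees_total),
    "proposals_implemented_pct": pct(proposals_implemented, proposals_made),
    "meetings_with_safety_pct": pct(meetings_with_safety_item, meetings_total),
}
# indicators -> {"refresher_training_pct": 75.0,
#                "proposals_implemented_pct": 70.0,
#                "meetings_with_safety_pct": 90.0}
```

Tracking such ratios over successive months or quarters gives the organization the trend data it needs to recognize and reward improvement effort, rather than relying on error counts alone.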
In light of the findings and principles set forth in this chapter, the committee makes the following recommendations:
Recommendation 7-1. HCO boards of directors, managerial leadership, and labor partners should create and sustain cultures of safety by implementing the recommendations presented previously and by:
Specifying short- and long-term safety objectives.
Continuously reviewing success in meeting these objectives and providing feedback at all levels.
Conducting an annual, confidential survey of nursing and other health care workers to assess the extent to which a culture of safety exists.
Instituting a deidentified, fair, and just reporting system for errors and near misses.
Engaging in ongoing employee training in error detection, analysis, and reduction.
Implementing procedures for analyzing errors and providing feedback to direct-care workers.
Instituting rewards and incentives for error reduction.
Recommendation 7-2. The National Council of State Boards of Nursing, in consultation with patient safety experts and health care leaders, should undertake an initiative to design uniform processes across states for better distinguishing human errors from willful negligence and intentional misconduct, along with guidelines for their application by state boards of nursing and other state regulatory bodies having authority over nursing.
Recommendation 7-3. Congress should pass legislation to extend peer review protections to data related to patient safety and quality improvement that are collected and analyzed by HCOs for internal use or shared with others solely for purposes of improving safety and quality.
REFERENCES
Augustine C, Weick K, Bagian J, Lee C. 1999. Predispositions toward a culture of safety in a large multi-facility health system. Proceedings of Enhancing Patient Safety and Reducing Errors in Health Care Conference held at Rancho Mirage, CA. Chicago, IL: National Patient Safety Foundation. Pp. 138–141.
Bagian JP, Gosbee JW. 2000. Developing a culture of patient safety at the VA. Ambulatory Outreach Spring:25–29.
Bagian JP, Lee C, Gosbee J, Derosier J, Stalhandske E, Eldridge N, Williams R, Burkhardt M. 2001. Developing and deploying a patient safety program in a large health care delivery system: You can’t fix what you don’t know about. The Joint Commission Journal on Quality Improvement 27(10):522–532.
Banister G, Butt L, Hackel R. 1996. How nurses perceive medication errors. Nursing Management 27(1):31–34.
Barach P, Small S. 2000. Reporting and preventing medical mishaps: Lessons from non-medical near miss reporting systems. British Medical Journal 320:759–763.
Bigley G, Roberts K. 2001. Structuring temporary systems for high reliability. Academy of Management Journal 44:1281–1300.
Carnino A, Director, Division of Nuclear Installation Safety, International Atomic Energy Agency. Undated. Management of Safety, Safety Culture and Self Assessment. [Online]. Available: http://www.iaea.org/ns/nusafe/publish/papers/mng_safe.htm [accessed January 15, 2003].
Cook R, Render M, Woods D. 2000. Gaps in the continuity of care and progress on patient safety. British Medical Journal 320(7237):791–794.
Cooper M. 2000. Towards a model of safety culture. Safety Science 36:111–136.
DeLong D, Fahey L. 2000. Diagnosing cultural barriers to knowledge management. Academy of Management Executive 14(4):113–127.
Grant S. 1999. Who’s to blame for tragic error? American Journal of Nursing 99(9):9.
IOM (Institute of Medicine). 2000. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press.
JCAHO (Joint Commission on Accreditation of Healthcare Organizations). 2003a. 2003 Hospital Accreditation Standards. Oakbrook Terrace, IL: JCAHO.
JCAHO. 2003b. 2003 National Patient Safety Goals. [Online]. Available: http://www.jcaho.org/accredited+organizations/patient+safety/npsg/index.htm [accessed July 19, 2003].
Jones B. 2002. Nurses and the Code of Silence. Medical Error. San Francisco, CA: Jossey-Bass.
Kerr S. 1975. On the folly of rewarding A while hoping for B. Academy of Management Journal 18:769–783.
Knox R. 2000, March 16. State board clears two nurses in 1994 chemotherapy overdose, a third nurse receives reprimand for role. The Boston Globe. Metro/Region. P. B2.
Leape L. 1994. Error in medicine. Journal of the American Medical Association 272:1851–1857.
Manasse H, Turnbull J, Diamond L. 2002. Patient safety: A review of the contemporary American experience. Singapore Medical Journal 43(5):254–262.
Medicare Payment Advisory Commission. 1999. Report to the Congress: Selected Medicare Issues. Washington, DC: Medicare Payment Advisory Commission.
Moorman C, Miner A. 1998. Organizational improvisation and organizational memory. Academy of Management Review 23:698–723.
Osborne J, Blais K, Hayes J. 1999. Nurses’ perceptions: When is it a medication error? Journal of Nursing Administration 29(4):33–38.
Pizzi L, Goldfarb N, Nash D. 2001. Promoting a culture of safety. In: Shojania K, Duncan B, McDonald K, Wachter R, eds. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Rockville, MD: AHRQ.
Reason J. 1990. Human Error. Cambridge, UK: Cambridge University Press.
Reason J. 1997. Managing the Risks of Organizational Accidents. Aldershot, England: Ashgate Publishing Company.
Roberts K, Bea R. 2001. Must accidents happen? Lessons from high-reliability organizations. Academy of Management Executive 15(3):70–78.
Schneider P. 1999. Part one: The anatomy of an event. Proceedings of Enhancing Patient Safety and Reducing Errors in Health Care Conference held at Rancho Mirage, CA. Chicago, IL: National Patient Safety Foundation.
Senders J. 1999. Part two: Theory and remedy. Proceedings of Enhancing Patient Safety and Reducing Errors in Health Care Conference held at Rancho Mirage, CA. Chicago, IL: National Patient Safety Foundation.
Serembus J, Wolf Z, Youngblood N. 2001. Consequences of fatal medication errors for health care providers: A secondary analysis study. MEDSURG Nursing 10(4):193–201.
Sexton J. 1995 (July 28). Three resign after an error in transfusion. New York Times. P. B4.
Shrivastava P. 1992. Preventing and coping with industrial crises. In: Shrivastava P, ed. Bhopal: Anatomy of a Crisis. London: Paul Chapman Publishing Ltd.
Spath P. 2000. Does your facility have a “patient-safe” climate? Hospital Peer Review 25:80–82.
Tracy E. 1999. Evolving practice and a culture of safety. QRC Advisor 15(10):10–12.
Wakefield D, Wakefield B, Uden-Holman T, Blegen M. 1996. Perceived barriers in reporting medication administration errors. Best Practices & Benchmarking in Healthcare 1(4): 191–197.
Weick K. 1993. Organization redesign as improvisation. In: Huber G, Glick W, eds. Organizational Change and Redesign. New York, NY: Oxford University Press.
Wong P, Helsinger D, Petry J. 2002. Providing the right infrastructure to lead the culture change for patient safety. Joint Commission Journal on Quality Improvement 28(7):363–372.