Error Reporting Systems
Whereas the previous chapter discussed creating and disseminating new knowledge to prevent errors from ever happening, this chapter looks at what happens after an error occurs: how to learn from errors and prevent their recurrence. One way to learn from errors is to establish a reporting system. Reporting systems have the potential to serve two important functions. They can hold providers accountable for performance or, alternatively, they can provide information that leads to improved safety. Conceptually, these purposes are not incompatible, but in reality, they can prove difficult to satisfy simultaneously.
Reporting systems whose primary purpose is to hold providers accountable are "mandatory reporting systems." Reporting focuses on errors associated with serious injuries or death. Most mandatory reporting systems are operated by state regulatory programs that have the authority to investigate specific cases and issue penalties or fines for wrongdoing. These systems serve three purposes. First, they provide the public with a minimum level of protection by assuring that the most serious errors are reported and investigated and that appropriate follow-up action is taken. Second, they provide an incentive to health care organizations to improve patient safety in order to avoid the potential penalties and public exposure. Third, they require all health care organizations to make some level of investment in patient safety, thus creating a more level playing field. Although errors resulting in serious harm are only the "tip of the iceberg," safety experts recognize that they represent the small subset of errors that signal major system breakdowns with grave consequences for patients.
Reporting systems that focus on safety improvement are "voluntary reporting systems." The focus of voluntary systems is usually on errors that resulted in no harm (sometimes referred to as "near misses") or in very minimal patient harm. Reports are usually submitted in confidence, outside the public arena, and no penalties or fines are issued around a specific case. When voluntary systems focus on the analysis of "near misses," their aim is to identify and remedy vulnerabilities in systems before harm occurs. Voluntary reporting systems are particularly useful for identifying types of errors that occur too infrequently for an individual health care organization to readily detect from its own data, and patterns of errors that point to systemic issues affecting all health care organizations.
The committee believes that there is a need for both mandatory and voluntary reporting systems and that they should be operated separately. Mandatory reporting systems should focus on detection of errors that result in serious patient harm or death (i.e., preventable adverse events). Adequate attention and resources must be devoted to analyzing reports and taking appropriate follow-up action to hold health care organizations accountable. The results of analyses of individual reports should be made available to the public.
The continued development of voluntary reporting efforts should also be encouraged. As discussed in Chapter 6, reports submitted to voluntary reporting systems should be afforded legal protections from data discoverability. Health care organizations should be encouraged to participate in voluntary reporting systems as an important component of their patient safety programs.
For either type of reporting program, implementation without adequate resources for analysis and follow-up will not be useful. Receiving reports is only the first step in the process of reducing errors. Sufficient attention must be devoted to analyzing and understanding the causes of errors in order to make improvements.
Recommendation 5.1 A nationwide mandatory reporting system should be established that provides for the collection of standardized information by state governments about adverse events that result in death or serious harm. Reporting should initially be required
of hospitals and eventually be required of other institutional and ambulatory care delivery settings. Congress should
• designate the National Forum for Health Care Quality Measurement and Reporting as the entity responsible for promulgating and maintaining a core set of reporting standards to be used by states, including a nomenclature and taxonomy for reporting;
• require all health care organizations to report standardized information on a defined list of adverse events;
• provide funds and technical expertise for state governments to establish or adapt their current error reporting systems to collect the standardized information, analyze it, and conduct follow-up action as needed with health care organizations. Should a state choose not to implement the mandatory reporting system, the Department of Health and Human Services should be designated as the responsible entity; and
• designate the Center for Patient Safety to:
(1) convene states to share information and expertise, and to evaluate alternative approaches taken for implementing reporting programs, identify best practices for implementation, and assess the impact of state programs; and
(2) receive and analyze aggregate reports from states to identify persistent safety issues that require more intensive analysis and/or a broader-based response (e.g., designing prototype systems or requesting a response by agencies, manufacturers or others).
Mandatory reporting systems should focus on the identification of serious adverse events attributable to error. Adverse events are deaths or serious injuries resulting from a medical intervention.1 Not all, but many, adverse events result from errors. Mandatory reporting systems generally require health care organizations to submit reports on all serious adverse events for two reasons: they are easy to identify and hard to conceal. But it is only after careful analysis that the subset of reports of particular interest, namely those attributable to error, are identified and follow-up action can be taken.
The committee also believes that the focus of the mandatory reporting system should be narrowly defined. There are significant costs associated with reporting systems, both costs to health care organizations and the cost of operating the oversight program. Furthermore, reporting is useful only if it includes analysis and follow-up of reported events. A more narrowly defined program has a better chance of being successful.
A standardized reporting format is needed to define what ought to be
reported and how it should be reported. There are three purposes to having a standardized format. First, a standardized format permits data to be combined and tracked over time. Unless there are consistent definitions and methods for data collection across organizations, the data cannot be aggregated. Second, a standardized format lessens the burden on health care organizations that operate in multiple states or are subject to reporting requirements of multiple agencies and/or private oversight processes and group purchasers. Third, a standardized format facilitates communication with consumers and purchasers about patient safety.
The recently established National Forum for Health Care Quality Measurement and Reporting is well positioned to play a lead role in promulgating standardized reporting formats, including a nomenclature and taxonomy for reporting. The Forum is a public/private partnership charged with developing a comprehensive quality measurement and public reporting strategy. The existing reporting systems (i.e., national and state programs, public and private sector programs) also represent a growing body of expertise on how to collect and analyze information about errors, and should be consulted during this process.2
Recommendation 5.2 The development of voluntary reporting efforts should be encouraged. The Center for Patient Safety should
• describe and disseminate information on existing voluntary reporting programs to encourage greater participation in them and track the development of new reporting systems as they form;
• convene sponsors and users of external reporting systems to evaluate what works and what does not work well in the programs, and ways to make them more effective;
• periodically assess whether additional efforts are needed to address gaps in information to improve patient safety and to encourage health care organizations to participate in voluntary reporting programs; and
• fund and evaluate pilot projects for reporting systems, both within individual health care organizations and collaborative efforts among health care organizations.
Voluntary reporting systems are an important part of an overall program for improving patient safety and should be encouraged. Accrediting bodies and group purchasers should recognize and reward health care organizations that participate in voluntary reporting systems.
The existing voluntary systems vary in scope, type of information collected, confidentiality provisions, how feedback to reporters is fashioned, and what is done with the information received in the reports. Although one of the voluntary medication error reporting systems has been in operation for 25 years, others have evolved in just the past six years. A concerted analysis should assess which features make the reporting systems most useful, and how the systems can be made more effective and complementary.
The remainder of this chapter contains a discussion of existing error reporting systems, both within health care and other industries, and a discussion of the committee's recommendations.
Review of Existing Reporting Systems in Health Care
There are a number of reporting systems in health care and other industries. The existing programs vary according to a number of design features. Some programs mandate reporting, whereas others are voluntary. Some programs receive reports from individuals, while others receive reports from organizations. The advantage of receiving reports from organizations is that it signifies that the institution has some commitment to making corrective system changes. The advantage of receiving reports from individuals is the opportunity for input from frontline practitioners. Reporting systems can also vary in their scope. Those that currently exist in health care tend to be narrower in focus (e.g., medication-related error), but there are examples outside health care of very comprehensive systems.
There appear to be three general approaches taken in the existing reporting systems. One approach involves mandatory reporting to an external entity. This approach is typically employed by states that require reporting by health care organizations for purposes of accountability. A second approach is voluntary, confidential reporting to an external group for purposes of quality improvement (the first model may also use the information for quality improvement, but that is not its main purpose). There are medication reporting programs that fall into this category. Voluntary reporting systems are also used extensively in other industries such as aviation. The third approach is mandatory internal reporting with audit. For example, the Occupational Safety and Health Administration (OSHA) requires organizations to keep data internally according to a standardized format and to make the data available during on-site inspections. The data maintained internally are not routinely submitted, but may be submitted if the organization is selected in the sample of an annual survey.
The following sections provide an overview of existing health care reporting systems in these categories. They also include two examples from areas outside health care. The Aviation Safety Reporting System is discussed because it represents the most sophisticated and long-standing voluntary external reporting system. It differs from the voluntary external reporting systems in health care because of its comprehensive scope. Since there are currently no examples in health care of mandatory internal reporting with audit, the characteristics of the OSHA approach are described.
Mandatory External Reporting
State Adverse Event Tracking
In a recent survey of states conducted by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), it was found that at least one-third of states have some form of adverse event reporting system.3 It is likely that the actual percentage is higher because not all states responded to the survey and some of the nonrespondents may have reporting requirements. During the development of this report, the Institute of Medicine (IOM) interviewed 13 states with reporting systems to learn more about the scope and operation of their programs. The remainder of this section relates to information provided to the IOM. Appendix D summarizes selected characteristics of the reporting systems in these states, and includes information on what is reported to the state, who is required to submit reports, the number of reports received in the most recent year available, when the program began, who has access to the information collected and how the state uses the information that is obtained. This is not intended as a comprehensive review, but rather, as an overview of how some state reporting systems are designed.
States have generally focused their reporting systems on patient injuries or facility issues (e.g., fire, structural issues). Reports are submitted by health care organizations, mostly hospitals and/or nursing homes, although some states also include ambulatory care centers and other licensed facilities. Although the programs may require reporting from a variety of licensed facilities, nursing homes often consume a great deal of state regulatory attention. In Connecticut, 14,000 of almost 15,000 reports received in 1996 were from nursing homes.
Several of the programs have been in place for ten years or longer, although they have undergone revisions since their inception. For example,
New York State's program has been in place since 1985, but it has been reworked three times, the most recent version having been implemented in 1998 after a three-year pilot test.
Underreporting is believed to plague all programs, especially in their early years of operation. Colorado's program received 17 reports in its first two years of operation,4 but ten years later received more than 1,000 reports. On the other hand, New York's program receives approximately 20,000 reports annually.
The state programs reported that they protected the confidentiality of certain data, but policies varied. Patient identifiers were never released; practitioners' identities were rarely available. States varied in whether or not the hospital's name was released. For example, Florida is barred from releasing any information with hospital or patient identification; it releases only a statewide summary.
The submission of a report itself did not trigger any public release of information. Some states posted information on the Internet, but only after the health department took official action against the facility. New York has plans to release hospital-specific aggregate information (e.g., how many reports were submitted), but no information on any specific report.
Few states aggregate the data or analyze them to identify general trends. For the most part, analysis and follow-up occur on a case-by-case basis. For example, in some states, the report alerted the health department to a problem; the department would assess whether or not to conduct a follow-up inspection of the facility. If an inspection was conducted, the department might require corrective action and/or issue a deficiency notice for review during application for relicensure.
Two major impediments to making greater use of the reported data were identified: lack of resources and limitations in the data. Many states cited a lack of resources as a reason for conducting only limited analysis of data. Several states had constructed, or were planning to construct, a database so that information could be tracked over time, but had difficulty obtaining the resources or expertise to do so. Additionally, several states indicated that the information they received in reports from health care organizations was inadequate and variable. The need for more standardized reporting formats was noted.
A focus group was convened with representatives from approximately 20 states at the 12th Annual Conference of the National Academy for State Health Policy (August 2, 1999). This discussion reinforced the concerns heard in IOM's telephone interviews. Resource constraints were identified, as well as the need for tools, methods, and protocols to constructively address the issue. The group also identified the need for mechanisms to improve the flow of information among the state, consumers, and providers to encourage safety and quality improvements. The need for collaboration across states to identify and promote best practices was also highlighted. Finally, the group emphasized the need to create greater awareness of the problem of patient safety and errors in health care among the general public and health care professionals alike.
In summary, the state programs appear to provide a public response for investigation of specific events,5 but are less successful in synthesizing information to analyze where broad system improvements might take place or in communicating alerts and concerns to other institutions. Resource constraints and, in some cases, poorly specified reporting requirements contribute to the inability to have as great an impact as desired.
Food and Drug Administration (FDA)
Reports submitted to FDA are one part of the surveillance system for monitoring adverse events associated with medical products after their approval (referred to as postmarketing surveillance).6 Reports may be submitted directly to FDA or through MedWatch, FDA's reporting program. For medical devices, manufacturers are required to report deaths, serious injuries, and malfunctions to FDA. User facilities (hospitals, nursing homes) are required to report deaths to the manufacturer and FDA and to report serious injuries to the manufacturer. For suspected adverse events associated with drugs, reporting is mandatory for manufacturers and voluntary for physicians, consumers, and others. FDA activities are discussed in greater detail in Chapter 7.
Voluntary External Reporting
Joint Commission on Accreditation of Healthcare Organizations (JCAHO)
JCAHO initiated a sentinel event reporting system for hospitals in 1996 (see Chapter 7 for a discussion on JCAHO activities related to accreditation). For its program, a sentinel event is defined as an "unexpected occurrence or variation involving death or serious physical or psychological injury or the risk thereof." Sentinel events subject to reporting are those that have resulted in an unanticipated death or major permanent loss of function not related to the natural course of the patient's illness or underlying condition, or an event that meets one of the following criteria (even if the outcome was
not death or major permanent loss of function): suicide of a patient in a setting where the patient receives around-the-clock care; infant abduction or discharge to the wrong facility; rape; hemolytic transfusion reaction involving administration of blood or blood products having major blood group incompatibilities; or surgery on the wrong patient or wrong body part.7
The Joint Commission requires that an organization experiencing a sentinel event conduct a root cause analysis, a process for identifying the basic or causal factors of the event. A hospital may voluntarily report an incident to JCAHO and submit its root cause analysis (including actions for improvement). If an organization experiences a sentinel event but does not voluntarily report it and JCAHO discovers the event (e.g., from the media, a patient report, or an employee report), the organization is still required to prepare an acceptable root cause analysis and action plan. If the root cause analysis and action plan are not acceptable, the organization may be placed on accreditation watch until an acceptable plan is prepared. Root cause analyses and action plans are confidential; they are destroyed after required data elements have been entered into a JCAHO database used for tracking and sharing risk reduction strategies.
JCAHO encountered some resistance from hospitals when it introduced the sentinel event reporting program and is still working through the issues today. Since the initiation of the program in 1996, JCAHO has changed the definition of a sentinel event to add more detail, instituted procedural revisions on reporting, authorized on-site review of root cause analyses to minimize risk of additional liability exposure, and altered the procedures for affecting a facility's accreditation status (and disclosing this change to the public) while an event is being investigated.8 However, concerns remain regarding the confidentiality of data reported to JCAHO and the extent to which the information on a sentinel event is no longer protected under peer review if it is shared with JCAHO (these issues are discussed in Chapter 6).
There is the potential for cooperation between the JCAHO sentinel event program and state adverse event tracking programs. For example, JCAHO is currently working with New York State so that hospitals that report to the state's program are considered to be in compliance with JCAHO's sentinel events program.9 This will reduce the need for hospitals to report to multiple groups with different requirements for each. The state and JCAHO are also seeking to improve communications between the two organizations before and after hospitals are surveyed for accreditation.
Medication Errors Reporting (MER) Program
The MER program is a voluntary medication error reporting system originated by the Institute for Safe Medication Practices (ISMP) in 1975 and administered today by U.S. Pharmacopeia (USP). The MER program receives reports from frontline practitioners via mail, telephone, or the Internet. Information is also shared with FDA and the pharmaceutical companies mentioned in the reports. ISMP also publishes error reports received from USP in 16 publications every month and produces a biweekly publication and periodic special alerts that go to all hospitals in the United States. The MER program has received approximately 3,000 reports since 1993, primarily identifying new and emerging problems based on reports from people on the frontline.
MedMARx from the U.S. Pharmacopeia
In August 1998, U.S. Pharmacopeia initiated the MedMARx program, an Internet-based, anonymous, voluntary system for hospitals to report medication errors. Hospitals subscribe to the program. Hospital employees may then report a medication error anonymously to MedMARx by completing a standardized report. Hospital management is then able to retrieve compiled data on its own facility and also obtain nonidentified comparative information on other participating hospitals. All information reported to MedMARx remains anonymous. All data and correspondence are tied to a confidential facility identification number. Information is not shared with FDA at this time. The JCAHO framework for conducting a root cause analysis is on the system for the convenience of reporters to download the forms, but the programs are not integrated.
Aviation Safety Reporting System at NASA
The three voluntary reporting systems described above represent focused initiatives that apply to a particular type of organization (e.g., hospital) or particular type of error (e.g., medication error). The Aviation Safety Reporting System (ASRS) is a voluntary, confidential incident reporting system used to identify hazards and latent system deficiencies in order to eliminate or mitigate them.10 ASRS is described as an example of a comprehensive voluntary reporting system.
ASRS receives "incident" reports, defined as an occurrence associated
with the operation of an aircraft that affects or could affect the safety of operations. Reports into ASRS are submitted by individuals confidentially. After any additional information is obtained through follow-up with reporters, the information is maintained anonymously in a database (reports submitted anonymously are not accepted). ASRS is designed to capture near misses, which are seen as fruitful areas for designing solutions to prevent future accidents.
The National Transportation Safety Board (NTSB) investigates aviation accidents. An "accident" is defined as an occurrence that results in death or serious injury or in which the aircraft receives substantial damage. NTSB was formed in 1967 and ASRS in 1976. The investigation of accidents thus preceded attention to near misses.
ASRS operates independently from the Federal Aviation Administration (FAA). It was originally formed under FAA, but operations were shifted to the National Aeronautics and Space Administration (NASA) because of the reluctance of pilots to report incidents (as differentiated from accidents) to a regulatory authority. FAA funds the ASRS, but NASA administers and manages the program independently. ASRS has no regulatory or enforcement powers over civil aviation.
ASRS issues alerts to the industry on hazards it identifies as needed; it does not go through a regulatory agency to issue an alert or other communication (Linda Connell, Director of ASRS, personal communication, May 20, 1999). If a situation is very serious, ASRS may issue an alert after only one incident; often, it has received multiple reports and noted a pattern. The purpose of ASRS alerts and other communications is to notify others of problems. Alerts may be disseminated throughout the industry and may also be communicated to the FAA to flag areas that may require action. ASRS does not propose or advocate specific solutions because it believes this would interfere with its role as an "honest broker" for reporters. As a result, although some reported problems may be acted upon, others are not. For example, ASRS has been notifying FAA and the industry about problems that have persisted throughout its 23-year history, such as problems with call signs. To date, no agency has been able to find a permanent solution. However, ASRS continues to issue alerts about the problem to remind people that it has not been solved.
ASRS maintains a database on reported incidents, identifies hazards and patterns in the data, conducts analyses on types of incidents, and interviews reporters when indicated. It sends out alert messages, publishes a monthly safety bulletin that is distributed to 85,000 readers, and produces a semiannual safety topics publication targeted to the operators and flight crews of complex aircraft. Quick-response studies may be conducted for NTSB and FAA as needed (e.g., if an accident occurred, they may look for similar incidents). ASRS receives over 30,000 reports annually and has an operating budget of approximately $2 million.11
A more recent initiative is the Aviation Safety Action Programs (ASAP). The de-identification of reports submitted to ASRS means that organizations do not have access to reports that identify problems in their own operations. To address this gap, FAA established a demonstration program in 1997 for the creation of Aviation Safety Action Programs.12 Under ASAP, an employee may submit a report on a serious incident that does not meet the threshold of an accident to the airline and the FAA, with pilot and flight identification. Reports are reviewed at a regular meeting of an event review committee that includes representatives from the employee group, FAA, and the airline. Corrective actions are identified as needed.
Mandatory Internal Reporting with Audit
Occupational Safety and Health Administration
OSHA uses a different approach to reporting than the systems already described. It requires companies to keep internal records of injury and illness but does not require that the data be routinely submitted. The records must be made available during on-site inspections and may be required if the company is included in an annual survey of a sample of companies.13 OSHA and the Bureau of Labor Statistics both conduct sample surveys that collect the routine data maintained by the companies. These agencies conduct surveys to construct incidence rates of worksite illness and injury that are tracked over time, or to examine particular issues of concern, such as injuries associated with a certain type of activity.
Employers with 11 or more employees must routinely maintain records of occupational injury and illness as they occur. Employees have access to a summary log of the injury and illness reports, and to copies of any citations issued by OSHA. Citations must be posted for three days or until the problem is corrected, whichever is longer. Companies with ten or fewer employees are exempt from keeping such records unless they are selected for an annual survey, in which case they are required to report for that period. Some industries, although required to comply with OSHA rules, are not subject to record-keeping requirements (including some retail, trade, insurance, real estate, and service industries). However, they must still report the most serious accidents (defined as an accident that results in at least one death or five or more hospitalizations).
Key Points from Existing Reporting Systems
There are a number of ways that reporting systems can contribute to improving patient safety. Good reporting systems are a tool for gathering sufficient information about errors from multiple reporters to understand the factors that contribute to them and subsequently prevent their recurrence throughout the health care system. Feedback and dissemination of information can create an awareness of problems that have been encountered elsewhere and an expectation that errors will be fixed and that safety matters. Finally, a larger-scale effort may improve analytic power by increasing the number of "rare" events reported. A serious error may not occur frequently enough in a single entity to be detected as a systematic problem; it is perceived as a random occurrence. On a larger scale, a trend may be easier to detect.
Reporting systems are particularly useful in their ability to detect unusual events or emerging problems.14 Unusual events are easier to detect and report because they are rare, whereas common events are viewed as part of the "normal" course. For example, a poorly designed medical device that malfunctions routinely becomes viewed as a normal risk and one that practitioners typically find ways to work around. Some common errors may be recognized and reported, but many are not. Reporting systems also potentially allow for a fast response to a problem since reports come in spontaneously as an event occurs and can be reacted to quickly.
Two challenges that confront reporting systems are getting sufficient participation in the programs and building an adequate response system. All reporting programs, whether mandatory or voluntary, are perceived to suffer from underreporting. Indeed, some experts assert that all reporting is fundamentally voluntary since even mandated reporting can be avoided.15 However, some mandatory programs receive many reports and some voluntary programs receive fewer reports. New York's mandatory program receives an average of 20,000 reports annually, while a leading voluntary program, the MER Program, has received approximately 3,000 reports since 1993. Reporting adverse reactions to medications to FDA is voluntary for practitioners, and they are not subject to FDA regulation (so the report is not going to an authority that can take action against them). Yet, underreporting is still perceived.16 Of the approximately 235,000 reports received
annually at FDA, 90 percent come from manufacturers (although practitioners may report to the manufacturers who report to FDA). Only about 10 percent are reported directly through MedWatch, mainly from practitioners.
The volume of reporting is influenced by more factors than simply whether reporting is mandatory or voluntary. Several reasons have been suggested for underreporting. One factor is confidentiality. As already described, many of the states contacted faced concerns about confidentiality and about what information should be released and when. Although patients were never identified, states varied on whether to release the identity of organizations. They had to balance health care organizations' desire for confidentiality, which encourages participation in the program, against the importance of making information available to protect and inform consumers. Voluntary programs often set up special procedures to protect the confidentiality of the information they receive. The issue of data protection and discoverability is discussed in greater detail in Chapter 6.
Another set of factors that affects the volume of reports relates to reporter perceptions and abilities. Feedback to reporters is believed to influence participation levels.17 Belief that the information is actually used assures reporters that the time taken to file a report is worthwhile. Reporters need to perceive a benefit for reporting. This is true for all reporting systems, whether mandatory or voluntary. Health care organizations that are trained and educated in event recognition are also more likely to report events.18 Clear standards, definitions, and tools are also believed to influence reporting levels; clarity and ease help reporters know what is expected to be reported and when. One experiment tried paying for reporting: this increased reporting while payments were provided, but the volume was not sustained after payments stopped.19
Although some reporting systems that focus on adverse events, such as hospital patients experiencing nosocomial infections, are used to develop incidence rates and track changes in these rates over time, caution must be exercised when calculating rates from adverse event reporting systems for several reasons. Many reporting systems are considered to be "passive" in that they rely on a report being submitted by someone who has observed the event.20 "Active" systems work with participating health care organizations to collect complete data on an issue being tracked to determine rates of an adverse event21 (e.g., the CDC conducted an active surveillance study of vaccine events with four HMOs linking vaccination records with hospital admission records22).
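The practical difference between the two designs is that a rate needs a denominator. As a rough sketch (all numbers below are hypothetical, not drawn from any actual surveillance program), an active system that knows how many patients were exposed can compute an incidence rate, while a passive system yields only a count of reports:

```python
# Hypothetical illustration: why active surveillance supports rate
# calculation and passive reporting does not.

# Passive system: spontaneous reports arrive, but the number of
# patients actually at risk is unknown -- a numerator with no denominator.
passive_reports = 42

# Active system: participating organizations supply complete data,
# so both the event count and the exposed population are known.
active_events = 42        # adverse events detected
active_exposed = 28_000   # patients exposed (the denominator)

rate_per_1000 = 1000 * active_events / active_exposed
print(f"Active-surveillance rate: {rate_per_1000:.1f} per 1,000 exposures")

# A passive count of 42 reports, by contrast, could reflect 42 events
# among 1,000 exposures or among 1,000,000 -- there is no way to tell.
```

The same count of reports is therefore interpretable in the active design and uninterpretable as a rate in the passive one.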
The low occurrence of serious errors can also produce wide variations in frequency from year to year. Some organizations and individuals may routinely report more than others, either because they are more safety conscious or because they have better internal systems.23 Certain characteristics of medical processes may make it difficult to identify an adverse event, which can also lead to variation in reporting. For example, adverse drug events are difficult to detect when they are widely separated in time from the original use of the drug or when the reaction occurs commonly in an unexposed population.24 These reasons make it difficult to develop reliable rates from reporting systems, although it may be possible to do so in selected cases. However, even without a rate, repetitive reports flag areas of concern that require attention.
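The instability of rare-event counts can be illustrated with a small simulation (hypothetical numbers; a sketch of the statistical point, not a model of any real reporting system). Even if a serious error occurs at a truly constant low rate, one organization's annual counts swing widely, while a rate pooled across many organizations is far steadier:

```python
import math
import random

random.seed(1)

TRUE_RATE = 2.0   # expected serious events per hospital per year (hypothetical)
YEARS = 10
HOSPITALS = 100

def poisson(lam):
    """Draw one Poisson variate (Knuth's method; adequate for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# A single hospital: annual counts bounce around the constant true rate.
single = [poisson(TRUE_RATE) for _ in range(YEARS)]

# Pooled across 100 hospitals: the per-hospital average each year is
# much steadier, so a real trend would be easier to distinguish from noise.
pooled = [sum(poisson(TRUE_RATE) for _ in range(HOSPITALS)) / HOSPITALS
          for _ in range(YEARS)]

print("single hospital counts:", single)
print("pooled per-hospital average:", [round(x, 2) for x in pooled])
```

The single-hospital series varies for purely random reasons, which is why year-to-year comparisons within one organization can mislead, while aggregation across organizations narrows the spread.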
It is important to note, however, that the goal of reporting programs is not to count reports. The volume of reports does not by itself indicate a program's success. What corrects errors is analyzing and using the information reports contain, and bringing the right tools, expertise, and resources to bear on it. Medication errors are heavily monitored by several public and private reporting systems, some of which afford anonymous reporting. A practitioner can voluntarily and confidentially report a medication error to FDA or to private systems (e.g., the MER program, MedMARx). Some states with mandatory reporting may also receive reports of medication-related adverse events. Yet some medication problems continue to occur, such as unexpected deaths from the availability of concentrated potassium chloride on patient care units.25
Reporting systems without adequate resources for analysis and follow-up action are not useful. Reporting without analysis or follow-up may even be counterproductive, in that it weakens support for constructive responses and is viewed as a waste of resources. Although exact figures are not available, it is generally believed that the analysis of reports is harder to do, takes longer, and costs more than data collection. Conducting good analyses also requires that the information received through reporting systems be adequate. People involved in the operation of reporting systems believe it is better to have good information on fewer cases than poor information on many cases. The perceived value of reports (in any type of reporting system) lies in the narrative that describes the event and the circumstances under which it occurred. Inadequate information benefits neither the reporter nor the health system.
Discussion of Committee Recommendations
Reporting systems may have a primary focus on accountability or on safety improvement. Design features vary depending on the primary purpose. Accountability systems are mandatory and usually receive reports on errors that resulted in serious harm or death; safety improvement systems are generally voluntary and often receive reports on events resulting in less serious harm or no harm at all. Accountability systems tend to receive reports from organizations; safety improvement systems may receive reports from organizations or frontline practitioners. Accountability systems may release information to the public; safety improvement systems are more likely to be confidential.
Figure 5.1 presents a proposed hierarchy of reporting, sorting potential errors into two categories: (1) errors that result in serious injury or death (i.e., serious preventable adverse events), and (2) lesser injuries or noninjurious events (near-misses).26 Few errors cause serious harm or death; that is the tip of the triangle. Most errors result in less or no harm, but may represent early warning signs of a system failure with the potential to cause serious harm or death.
The committee believes that the focus of mandatory reporting systems should be on the top tier of the triangle in Figure 5.1. Errors in the lower tier are issues that might be the focus of voluntary external reporting systems, as well as research projects supported by the Center for Patient Safety and internal patient safety programs of health care organizations. The core reporting formats and measures promulgated by the National Forum for Health Care Quality Measurement and Reporting should focus first on the top tier. Additional standardized formats and measures pertaining to other types of errors might be promulgated in the future to serve as tools made available to voluntary reporting systems or health care organizations for quality improvement purposes.
The committee believes there is an important role for both mandatory and voluntary reporting systems. Mandatory reporting of serious adverse events is essential for public accountability, and current practices are too lax, both in enforcing the requirements for reporting and in the regulatory responses to these reports. The public has the right to expect health care organizations to respond to evidence of safety hazards by taking whatever steps are necessary to make it difficult or impossible for a similar event to recur. The public also has the right to be informed about unsafe conditions. Requests by providers for confidentiality and protection from liability seem inappropriate in this context. At the same time, the committee recognizes that appropriately designed voluntary reporting systems have the potential to yield information that can significantly improve patient safety and can be widely disseminated. The reports and analyses in these reporting systems should be protected from disclosure for legal liability purposes.
Mandatory Reporting of Serious Adverse Events
The committee believes there should be a mandatory reporting program for serious adverse events, implemented nationwide, linked to systems of accountability, and made available to the public. Comparable to aviation "accidents" that are investigated by the National Transportation Safety Board, health care organizations should be required to submit reports on the most serious adverse events using a standard format. The types of adverse events to be reported may include, for example, maternal deaths; deaths or serious injuries associated with the use of a new device, operation or medication; deaths following elective surgery or anesthetic deaths in Class I patients. In light of the sizable number of states that have already established mandatory reporting systems, the committee thinks it would be wise to build on this experience in creating a standardized reporting system that is implemented nationwide.
Within these objectives, however, there should be flexibility in implementation. Flexibility and innovation are important at this stage of development because the existing state programs have taken different approaches and a "best practice" or preferred approach is not yet known. The Center for Patient Safety can support states in identifying and communicating best practices. States could choose to collect and analyze such data themselves. Alternatively, they could rely on an accrediting body, such as the Joint Commission on Accreditation of Healthcare Organizations or the National Committee for Quality Assurance, to perform the function for them, as many states do now for licensing surveys. States could also contract with peer review organizations (PROs) to perform the function. As noted in Chapter 4, the Center for Patient Safety should evaluate the approaches taken by states in implementing reporting programs. States have employed a variety of strategies in their programs, yet few (if any) have been subject to rigorous evaluation. Program features that might be evaluated include: factors that encourage or inhibit reporting, methods of analyzing reports, roles and responsibilities of health care organizations and the state in investigating adverse events, follow-up actions taken by states, information disclosed to the public, and uses of the information by consumers and purchasers.
Although states should have flexibility in how they choose to implement the reporting program, all state programs should require reporting for a standardized core set of adverse events that result in death or serious injury, and the information reported should also be standardized.
The committee believes that these standardized reporting formats should be developed by an organization with the following characteristics. First, it should be a public-private partnership, to reflect the need for involvement by both sectors and the potential use of the reporting format by both the public and the private sectors. Second, it should be broadly representative, to reflect the input from many different stakeholders that have an interest in patient safety. Third, it should be able to gather the expertise needed for the task. This requires adequate financial resources, as well as sufficient standing to involve the leading experts. Enabling legislation can support all three objectives.
The National Forum for Health Care Quality Measurement and Reporting meets these criteria. The purpose of this public-private partnership (formed in May 1999) is to develop a comprehensive quality measurement and public reporting strategy that addresses priorities for quality measurement for all stakeholders consistent with national aims for quality improvement in health care. It is to develop a plan for implementing quality measurement, data collection and reporting standards; identify core sets of measures; and promote standardized measurement specifications. One of its specific tasks should relate to patient safety.
The advantage of using the Forum is that its goal already is to develop a measurement framework for quality generally. A focus on safety would ensure that safety gets built into a broader quality agenda. A public-private partnership would also be able to convene the mix of stakeholders who, it is hoped, would subsequently adopt the standards and standardized reporting recommendations of the Forum. However, the Forum is a new organization that is just starting to come together; undoubtedly some time will be required to build the organization and set its agenda.
Federal enabling legislation and support will be required to direct the National Forum for Health Care Quality Measurement and Reporting to promulgate standardized reporting requirements for serious adverse events and encourage all states to implement the minimum reporting requirements. Such federal legislation pertaining to state roles may be modeled after the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA provides three options for implementing a program: (1) states may pass laws congruent with or stronger than the federal floor and enforce them using state agencies; (2) they may create an acceptable alternative mechanism and enforce it with state agencies; or finally, (3) they may decline to pass new laws or modify existing ones and leave enforcement of HIPAA to the federal government.27 OSHA is similarly designed in that states may develop their own OSHA program with matching funds from the federal government; the federal OSHA program is employed in states that have not formed a state-level program.
Voluntary Reporting Systems
The committee believes that voluntary reporting systems play a valuable role in encouraging improvements in patient safety and are a complement to mandatory reporting systems. The committee considered whether a national voluntary reporting system should be established similar to the Aviation Safety Reporting System. Compared to mandatory reporting, voluntary reporting systems usually receive reports from frontline practitioners who can report hazardous conditions that may or may not have resulted in patient harm. The aim is to learn about these potential precursors to errors and try to prevent a tragedy from occurring.
The committee does not propose a national voluntary reporting system for several reasons. First, there are already a number of good efforts, particularly in the area of medications. Three complementary national reporting systems are focused on medication errors: FDA, the Institute for Safe Medication Practices, and U.S. Pharmacopeia. The JCAHO sentinel events program is another existing national reporting program for hospitals that will also receive reports on medication and other errors. These reporting systems should be encouraged and promoted within health care organizations, and better use should be made of the information already being reported to them.
Second, there are several options for how to design such a voluntary reporting system, and better information is needed on which approach would be best. At least three different approaches were identified. One is a universal, voluntary reporting system modeled after ASRS. The concern with this approach is the potential volume of reports that might come forward when such a system is applied to health care. Another concern is that any single group is unlikely to have the expertise needed to analyze and interpret the diverse set of issues raised in health care. The experience of ASRS has shown that the analysts reviewing incoming reports must be content experts who can understand and interpret them.28 In health care, different expertise is likely needed to analyze, for example, medication errors, equipment problems, problems in the intensive care unit (ICU), pediatric problems, and home care problems.
Another approach is to develop focused "mini-systems" that are targeted toward selected areas (e.g., those that exist for medications) rather than a single voluntary program. This approach would manage the potential volume of reports and match the expertise to the problems. It is possible that there should be different mini-systems for different issues such as medications, surgery, pediatrics, and so forth. If such mini-systems are formed, there should be a mechanism for sharing information across them since a report to one system may have relevance for another (e.g., surgical events that also involve medications).
A third possibility is to use a sampling approach. For example, in its postmarketing surveillance of medical devices, FDA is moving away from a universal reporting system for hospitals and nursing homes to one in which a representative sample of hospitals and nursing homes keeps complete data. Its pilot test found that both the quantity and the quality of reports improved when FDA worked with a sample of hospitals who were trained in error identification and reporting and could receive feedback quickly. By periodically renewing the sample, the burden on any organization is limited (although participation in the sample may have the side benefit of helping interested organizations build their internal systems and train practitioners in error detection).
Lastly, establishing a comprehensive voluntary reporting system modeled after ASRS would require an enormous investment of time and resources. The committee believes that recommending such an investment would be premature in light of the many questions still surrounding this issue.
The committee does believe that voluntary reporting systems have a very important role to play in enhancing understanding of the factors that contribute to errors. When properly structured, voluntary systems can help to keep participating health care organizations focused on patient safety issues through frequent communication about emerging concerns and potential safety improvement strategies. Voluntary systems can provide much-needed expertise and information to health care organizations and providers.
The continued development of voluntary reporting efforts should be encouraged. Through its various outreach activities, the Center for Patient Safety should describe and disseminate information on voluntary reporting programs throughout the health care industry and should periodically convene sponsors and users of voluntary reporting systems to discuss ways in which these systems can be made more effective. As a part of developing the national research agenda for safety, the Center for Patient Safety should consider projects that might lead to the development of knowledge and tools that would enhance the effectiveness of voluntary reporting programs. The Center should also periodically assess whether there are gaps in the current complement of voluntary reporting programs and should consider funding pilot projects.
In summary, this chapter and the previous chapter outlining the proposed Center for Patient Safety together describe a comprehensive approach for improving the availability of information about medical errors and for using that information to design systems that are safer for patients. Whereas Chapter 4 focused on creating and disseminating new knowledge for building safer delivery systems, this chapter has focused on using reporting systems to learn about and from errors that have already occurred. Both strategies should work together to make health care safer for patients.
1. Bates, David W.; Spell, Nathan; Cullen, David J., et al. The Costs of Adverse Drug Events in Hospitalized Patients. JAMA. 277(4):307–311, 1997.
2. For example, there are several efforts relating specifically to the reporting of medication errors, such as the Institute for Safe Medication Practices (ISMP) and U.S. Pharmacopeia. The FDA sponsors its MedWatch medication and device reporting program. The National Coordinating Council for Medication Error Reporting and Prevention (NCC-MERP) has developed a taxonomy for medication errors for the recording and tracking of errors. General reporting programs (not specific to medications) include JCAHO's sentinel events reporting program and some state programs.
3. "State Agency Experiences Regarding Mandatory Reporting of Sentinel Events," JCAHO draft survey results, April 1999.
4. Billings, Charles, "Incident Reporting Systems in Medicine and Experience With the Aviation Safety Reporting System," in Cook, Richard; Woods, David; and Miller, Charlotte, A Tale of Two Stories: Contrasting Views of Patient Safety, Chicago: National Patient Safety Foundation of the AMA, 1998.
5. Office of the Inspector General, "The External Review of Hospital Quality: A Call for Greater Accountability," http://www.dhhs.gov/progorg/oei/reports/oei-01-97-00050.htm.
6. Additional strategies include field investigations, epidemiological studies and other focused studies.
7. "Sentinel Event Policy and Procedure," Revised: July 18, 1998. Joint Commission on Accreditation of Healthcare Organizations, Oakbrook Terrace, Illinois.
8. Joint Commission on Accreditation of Healthcare Organizations, Sentinel Event Alert, Number Three, May 1, 1998.
9. Heigel, Fred, presentation at 12th Annual State Health Policy Conference, National Academy for State Health Policy, Cincinnati, Ohio, August 2, 1999.
10. "Federal Aviation Administration, Office of System Safety, Safety Data," http://nasdac.faa.gov/safety_data.
11. Billings, Charles, "Incident Reporting Systems in Medicine and Experience With the Aviation Safety Reporting System," Appendix B in A Tale of Two Stories, Richard Cook, David Woods and Charlotte Miller, Chicago: National Health Care Safety Council of the National Patient Safety Foundation at the AMA, 1998.
12. Federal Aviation Administration, "Aviation Safety Action Programs (ASAP)," Advisory Circular No. 120-66, 1/8/97.
13. "All About OSHA," U.S. Department of Labor, Occupational Safety and Health Administration, OSHA 2056, 1995 (Revised).
14. Brewer, Timothy and Colditz, Graham A. Postmarketing Surveillance and Adverse Drug Reactions, Current Perspectives and Future Needs. JAMA. 281(9):824–829, 1999. See also: FDA, "Managing the Risks from Medical Product Use, Creating a Risk Management Framework," Report to the FDA Commissioner from the Task Force on Risk Management, USDHHS, May, 1999.
15. Billings, Charles, presentation to Subcommittee on Creating an External Environment for Quality Health Care, January 29, 1999.
16. Brewer and Colditz, 1999. See also: FDA, "Managing the Risks from Medical Product Use," May 1999.
17. FDA, "Managing the Risks from Medical Product Use," May 1999.
18. As part of the FDA Modernization Act of 1997, the FDA is mandated to shift from a universal mandatory reporting system for users (hospitals and nursing homes) of medical devices to one where only a subset of facilities report. In their pilot test, they believed that faster and better feedback to reporters contributed to improved reporting. FDA, May 1999. See also: Susan Gardner, Center for Devices and Radiological Health, personal communication, November 24, 1998.
19. Feely, John; Moriarty, Siobhan; O'Connor, Patricia. Stimulating Reporting of Adverse Drug Reactions by Using a Fee. BMJ. 300:22–23, 1990.
20. FDA, "Managing the Risks from Medical Product Use," 1999.
21. Brewer and Colditz, 1999.
22. Farrington, Paddy; Pugh, Simon; Colville, Alaric, et al. A New Method for Active Surveillance of Adverse Events from Diphtheria/Tetanus/Pertussis and Measles/Mumps/Rubella Vaccines. Lancet. 345(8949):567–569, 1995.
23. Nagel, David C., "Human Error in Aviation Operations," in D.C. Nagel and E.L. Wiener (eds.), Human Factors in Aviation, Orlando, FL: Academic Press, Inc., 1988.
24. Brewer and Colditz, 1999.
25. Medication Error Prevention—Potassium Chloride. JCAHO Sentinel Event Alert, Issue One, Oakbrook Terrace, Illinois: 1998.
26. Adapted from work by JCAHO based on presentation by Margaret VanAmringe to the Subcommittee on Creating an External Environment for Quality in Health Care, June 15, 1999, Washington, D.C.
27. Nichols, Len M. and Blumberg, Linda J. A Different Kind of "New Federalism"? The Health Insurance Portability and Accountability Act of 1996. Health Affairs. 17(3):25–42, 1998.
28. Billings, Charles, presentation to Subcommittee on Creating an External Environment for Quality, January 29, 1999.