Understanding Accident Precursors
Health Science Center
University of Tennessee
Organizations seek to identify the factors that might cause or contribute to adverse events before these precursors result in accidents. But understanding accident precursors poses difficulties for organizations that seek to untangle those factors from the snarled mesh of history. An organization attempts to learn from its history of accidents, but these infrequent adverse events yield sparse data from which to draw conclusions (March et al., 1991). The unacceptable cost of such events precludes the usual organizational methods of learning from experience by trial and error (La Porte, 1982). In addition, processes of detecting danger signals can be clouded by ambiguities and uncertainties (Marcus and Nichols, 1999; Weick, 1995) and obscured by redundant layers of protection (Sagan, 1993). Finally, when things go wrong, organizations often use the same data as a basis for disciplining those involved and for identifying accident precursors. But linking data collection with disciplinary enforcement inadvertently creates disincentives for the disclosure of information (Tamuz, 2001). Despite these difficulties, or perhaps because of them, various industries have developed alternative models for detecting and identifying accident precursors.
The types of accidents or adverse events vary among industries, from airplane crashes in the aviation industry to lethal emissions in the chemical industry to patient injuries and deaths in the health care industry. These harmful events differ in their probability of occurrence; in the distribution of their negative consequences among employees, clients, and the public; in the complexity and interdependence embedded in their technologies; and in the regulatory context in which they operate (Perrow, 1984). For example, although the estimated number of deaths and injuries attributed to preventable adverse events in health care far
outnumbers the average loss of life in aircraft accidents, the deaths in health care occur one patient at a time and usually without media attention. Aircraft accidents kill many people in one disastrous, highly publicized moment. Indeed, aviation professionals may also lose their lives in a crash (Thomas and Helmreich, 2002). The nuclear power industry, unlike aviation and health care, has to contend with a hostile public skeptical about its ability to operate safely.
Organizations in various industries also differ in their capacity to intervene and avert catastrophe (Perrow, 1984; van der Schaaf, p. 119 in this volume). Despite these and other critical differences, decision makers in diverse industries are all engaged in a common search for accident precursors. Some industries, such as aviation and nuclear power, have a relatively long history of seeking to identify accident precursors; others, such as blood banks and hospitals, are relative newcomers to the field. Nevertheless, they all use similar information-gathering processes and weigh common design choices. Whereas some industries discovered precursors based on their common experiences, such as having to draw on small samples of accidents (March et al., 1991), other industries developed precursor-detection programs through learning by imitation (Levitt and March, 1988), as exemplified by the Patient Safety Reporting System.
SEEKING ACCIDENT PRECURSORS AMONG NEAR ACCIDENTS
Accidents and adverse events provide critical sources of information about accident precursors. Discovering precursors from accidents, however, can be difficult, because accidents can be infrequent, costly, and complex (Tamuz, 1987). In industries such as nuclear power and aviation where accidents are rare, organizations investigate accidents in great detail, but they have few accidents from which to learn. In sorting through the complex circumstances of a single accident, it may be difficult to ascertain whether specific conditions preceding the accident are precursors or just a coincidence of random events. Furthermore, adverse events have the potential for catastrophic consequences, not only for those directly involved, but also for the organizations involved—the hospital, airline, device manufacturer, or blood bank—which may be held liable for damages. The actual contributing factors to the accident may be obscured in the struggle to establish liability by casting blame (Tasca, 1989).
Near accidents, events in which no damage or injury occurs but which, under slightly different circumstances, could have resulted in harm, are important sources of information about accident precursors (NRC, 1980; Tamuz, 1987). Methods of gathering and sorting near-accident data to reveal precursors have been developed in high-hazard industries, in which accidents are rare but have disastrous consequences. The air transportation industry builds on a common experiential understanding of a near miss, such as when two aircraft nearly collide. The nuclear power industry and chemical manufacturing industry draw on engineering culture,
in which it has long been assumed that only chance differentiates near accidents from industrial accidents (Heinrich, 1931).
Terms for a near accident, such as a “close call” and a “near miss,” are being adapted by health care organizations. This reflects a change in emphasis from assessing actual harm to patients, as expressed by the traditional admonition to do no harm, to evaluating the potential for adverse outcomes (Stalhandske et al., 2002). Adopting lessons from the aviation industry, hospital transfusion-medicine departments (Battles et al., 1998), the Department of Veterans Affairs hospital system (Heget et al., 2002), and the Agency for Healthcare Research and Quality (e.g., Pace et al., 2003) have promoted the implementation of close-call reporting systems.
To provide an overview for discussing accident precursors, this paper is divided into two sections. First, the Aviation Safety Reporting System (ASRS) is described to illustrate some common processes involved in detecting and identifying accident precursors and to provide a common frame of reference. A basic understanding of how ASRS identifies accident precursors is important not only for understanding aviation safety programs, but also because it has become a widely discussed and adapted model in health care (IOM, 2000; Leape, 1994). Second, based on examples in the aviation, nuclear power, and health care industries, a few key design choices and trade-offs are described.
PROCESSES IN IDENTIFYING ACCIDENT PRECURSORS
Sifting through the shards of near accidents, organizations engage in several processes to identify precursors. Building on a model developed in a previous accident precursor workshop (Brannigan et al., 1998), I propose that these processes include: aggregating data, detecting signals, gathering information, interpreting and analyzing information, making and implementing decisions, compiling and storing data, and disseminating information.
Although the processes are listed in order, they often occur in recurring decision-making loops linked by feedback chains. For example, in the process of analyzing an event, safety analysts may decide to gather additional information about procedures that preceded the event. Returning to the information-gathering process, they search their database for reports of similar procedures. The location of such feedback loops can be essential for promoting (or impeding) learning. For example, hospital pharmacists diligently gathered data about the prescribing errors they had prevented by calling and asking for clarifications from the resident physician who ordered the medication. But the pharmacy did not provide feedback to the residents and those who train them; thus, the lack of a feedback loop linking the pharmacy back to the physicians may have hindered efforts to identify precursors to adverse drug events (Tamuz et al., 2004).
In practice, organizations skip some processes. To identify threats, decision
makers may rely on data collected for regulatory purposes rather than gathering data solely for the purpose of detecting precursors. For example, data collected in air traffic control centers to monitor controllers’ operational errors and pilots’ deviations from regulations have also been used to identify hazardous conditions (Tamuz, 2001). Similarly, when ASRS identifies reports that illustrate well known, albeit sometimes overlooked, precursors, it proceeds directly from the processes of interpretation and analysis of information to the dissemination of educational information.
THE AVIATION SAFETY REPORTING SYSTEM

ASRS is a voluntary, confidential, nonpunitive reporting system, managed under the auspices of the National Aeronautics and Space Administration, funded by the Federal Aviation Administration (FAA), and operated by a long-term contractor. The following brief description of ASRS is based on interviews with key participants, supporting documents provided by them (Reynard et al., 1986), and secondary sources (Connell, p. 139 in this volume; National Academy of Public Administration, 1994).
The following discussion describes a conceptual model constructed from the processes of identifying accident precursors applied to ASRS operations (summarized in Table 1).
Aggregating Data

Data regarding safety-related events are compiled at a national level. Individuals working in airlines, airports, and air traffic control facilities are encouraged to file reports. Although pilots submit most of the reports, ASRS encourages reporting by air traffic controllers and other groups in the aviation community.
Detecting Signals

The goal of ASRS is to detect signals of potentially dangerous events by having individuals in the air transportation industry report their perceptions of safety-related incidents. An event is defined broadly, and individuals are encouraged to report anything they perceive to be significant. In practice, as will be explained below, pilots have incentives to report events that could involve violations of FAA regulations. ASRS specifically excludes reports of accidents, events that resulted in injuries or property damage, and intentional regulatory violations, such as sabotage.
Gathering Information

ASRS contributes to the identification of accident precursors by collecting data in ways that overcome some of the traditional barriers to gathering reports of negative information in organizations. First, reporting is voluntary. Second, it is promoted as a professional obligation. Indeed, instruction in the use of ASRS
TABLE 1 Processes in Identifying Accident Precursors, as Applied to the Aviation Safety Reporting System

Aggregating data
  Data on a national level

Detecting signals
  Safety-related incidents, excluding accidents, criminal acts, and intentional regulatory violations

Gathering information
  Voluntary, professional reporting system
  Incentives, including limited immunity from prosecution for pilots
  Confidentiality, including call-back capacity and de-identification

Interpreting and analyzing data
  Classification of events for safety significance
  Identification of urgent hazardous situations
  Focus on potential outcomes (what could have occurred)
  Identification of examples of known precursors
  Prospective view: discovery of possible accident precursors for further investigation
  Retrospective view: investigation of accidents in context of near accidents

Compiling and storing data
  Public data distribution

Making and implementing decisions
  Recommendations to FAA
  Lack of decision-making authority

Disseminating information
  Distribution of hazard warnings
  Provision of data to regulators and public
  Publication of practical precursor information
reporting forms is routinely included in the training of general aviation pilots, and the Airline Pilots Association encourages its members to file ASRS reports. Third, pilots receive limited immunity from administrative action by the FAA if they have filed an ASRS report regarding a possible infraction. The FAA may still take action against the pilot, but the sanctions will not include certificate suspension, thus allowing the pilot to continue flying. Finally, ASRS uses de-identification of both individuals and airlines to ensure that reports are not used as a basis for regulatory enforcement, disciplinary action, or litigation. The main objective of information gathering is to promote learning from experience rather than to discipline individuals for regulatory infractions.
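The de-identification step described above can be sketched as a simple filter over report fields. This is an illustrative toy, not ASRS's actual procedure, and all field names are invented for the example:

```python
# Hypothetical incident-report fields; ASRS's real record layout differs.
IDENTIFYING_FIELDS = {
    "reporter_name", "certificate_number", "airline",
    "flight_number", "tail_number",
}

def deidentify(report: dict) -> dict:
    """Return a copy of the report with identifying fields removed,
    keeping only the safety-relevant content for analysis."""
    return {k: v for k, v in report.items() if k not in IDENTIFYING_FIELDS}

report = {
    "reporter_name": "J. Doe",
    "airline": "Example Air",
    "event_type": "altitude deviation",
    "narrative": "Descended through assigned altitude during handoff.",
}
print(deidentify(report))
```

The design point is that once identifiers are stripped, the stored report can support learning but cannot easily support enforcement or litigation against a specific pilot or airline.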
As a result of its innovative data-gathering methods, ASRS receives different kinds of reports. Some pilots submit sparse, telegraph-style factual summaries, describing their aircraft’s deviation from an assigned altitude, for example. These reports appear to be filed simply to protect the pilot from possible FAA enforcement action. Other reports of events that involve regulatory violations describe in detail how conditions that caused near accidents could have resulted in accidents. A third type of report describes potentially dangerous events, with no mention of a possible air traffic violation. Thus, although ASRS provides incentives for pilots to file reports, some pilots take the time to report events they perceive to be dangerous, even if they do not benefit directly from the incentives (Tamuz, 1987).
Interpreting and Analyzing Information
ASRS analysts first classify reports by their safety significance. If a report has safety potential, it is carefully examined and coded. Potentially significant events are further classified as (1) urgent situations that require immediate intervention or (2) events that warrant in-depth analysis and coding by ASRS safety analysts (Tamuz, 2000). Although ASRS safety analysts occasionally “call back” individuals to obtain additional information, ASRS does not independently investigate reports.
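The two-tier triage described above (screening out accidents, then separating urgent situations from events warranting in-depth coding) can be sketched as follows. The criteria and field names are hypothetical, not ASRS's actual rules:

```python
def triage(report: dict) -> str:
    """Toy triage of an incident report, mirroring the two-tier
    classification described in the text. All criteria are invented
    for illustration."""
    # Accidents (injury or damage) are outside the system's scope.
    if report.get("resulted_in_injury") or report.get("resulted_in_damage"):
        return "excluded"
    # An ongoing hazard requires immediate alerting of decision makers.
    if report.get("hazard_ongoing"):
        return "urgent"
    # Safety-significant events get full examination and coding.
    if report.get("safety_significant"):
        return "in-depth"
    return "routine"

print(triage({"hazard_ongoing": True}))   # an urgent hazardous situation
```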
ASRS safety analysts identify accident precursors by examining critical incidents in detail and by noticing patterns in the data. After an aircraft accident, ASRS safety analysts routinely search the database for near accidents that occurred under similar conditions. After identifying a critical near accident, they scan the database to generate hypotheses about potential accident precursors, inform the FAA, and call for further study and, possibly, corrective measures. ASRS analysts also conduct database searches in response to FAA inquiries about potential threats to safety. In addition, ASRS analysts look for instructive examples to illustrate recognized precursors for publications and training materials.
To discern possible accident precursors, ASRS relies mainly on human expertise. ASRS safety analysts are drawn from the ranks of retired pilots and air traffic controllers who have years of experience, as well as expertise. Building on their cumulative knowledge and experience, ASRS has developed an extensive coding scheme for classifying incident reports. Because ASRS is a voluntary reporting system, however, fluctuations in the number of reports are not reliable indicators of changes in underlying safety conditions.
Making and Implementing Decisions
ASRS representatives advise the FAA and make policy recommendations, but they cannot initiate changes. ASRS is designed to gather reports of potential dangers and identify possible accident precursors, but it does not have the authority to make or implement decisions.
Analyses of ASRS near-accident data have led to the identification of many accident precursors. For example, ASRS analysts noticed that skilled pilots had almost lost control of their aircraft in the wake turbulence from a Boeing 757 aircraft, even though they were following at the prescribed distance (wake turbulence can be described as horizontal, tornado-like vortices of air behind an aircraft). Tragically, these ASRS data did not reach the appropriate FAA decision makers, and no corrective action was taken until several accidents attributed to wake turbulence had occurred (Reynard, 1994). This illustrates ASRS’s capacity to identify accident precursors proactively, as well as the importance of feedback loops linking ASRS data analysts to FAA policy makers.
Compiling and Storing Data
The ASRS data compilation activities are performed under carefully defined confidentiality restrictions. Indeed, analyses and conclusions drawn by safety analysts are not released to the public. ASRS staff do conduct some limited searches of the database for the aviation community, researchers, and the public. De-identified ASRS data are also available on the Web, as part of the FAA National Aviation Safety Data Analysis Center, and are routinely used by journalists.
Disseminating Information

ASRS representatives regularly brief FAA policy makers and issue warnings about hazardous situations in air traffic control facilities and airports. ASRS also publishes information about accident precursors on the Web and in print. In particular, ASRS disseminates information about accident precursors and other threats to safety to individuals working in aviation (Hardy, 1990). Instructive examples are published in “Callback,” a newsletter distributed to members of the aviation community and freely available on the Web. For example, an issue of “Callback” featured the following statement in an excerpt from a pilot’s ASRS report: “As I was accelerating down the runway, a shadow appeared.” The shadow was of another aircraft landing immediately in front of him (Callback, 2003). The editors used this example to call attention to a well-known, but often overlooked, accident precursor.
KEY DESIGN CHOICES AND TRADE-OFFS
Industries, and the organizations within them, differ in their methods of identifying accident precursors. They may differ in their design choices for the level of aggregating data, in how they define and classify safety-related events, in their choice of surveillance or reporting systems, and in their methods of overcoming barriers to reporting. Each of these choices can result in trade-offs that influence the system’s capacity for identifying accident precursors.
Aggregating Data: Pooling Data on an Organizational or Interorganizational Level
The implications of aggregating data at the organizational or interorganizational level are apparent in a comparison of an airline-based model, the Airline Safety Action Partnership (ASAP), with the nationwide model, ASRS. ASAP was created when representatives of the Southwest Region of the FAA Flight Standards Division joined with the pilots’ association and management of American Airlines to promote the confidential disclosure and correction of potentially dangerous conditions, ranging from inadequate techniques demonstrated by an individual pilot to the identification of accident precursors (Aviation Daily, 1996). In an airline-based reporting system, safety analysts not only investigate organizational conditions to determine if they constitute accident precursors, but they also have the expertise to identify necessary changes and the decision-making authority to eliminate precursor conditions or mitigate their effects.
For example, ASAP members work with (1) union representatives on the ASAP committee to urge pilots to take remedial training; (2) management representatives to change airline procedures; and (3) FAA committee members to influence regulatory changes. Indeed, based on ASAP reports, the airline has identified and used accident precursors in pilot training sessions, updated unclear and potentially confusing airline procedures, and clarified regulatory expectations.
By comparison, ASRS aggregates data at a national level, which enables safety analysts to identify patterns in rare events that, if reported only to an airline, might be classified as isolated events. However, the de-identification of airlines that enables ASRS to gather reports from pilots from competing airlines impedes the gathering of data on specific airline operations. Thus, decision makers cannot detect and correct airline-specific precursors based on ASRS data.
Aggregating data at an organizational or interorganizational level is presented here as a design choice. In some situations, however, the choices (and the trade-offs) need not be made. The British Airways Safety Information System (BASIS) was originally designed as an airline-based system (Holtom, 1991). However, with the widespread adoption of the BASIS model by other airlines, the system was expanded to enable the pooling of data among airlines. Thus, BASIS members benefit from the advantages of pooling data on an interorganizational level and the capacity for corrective action of an airline-based system.
The advantage of pooling data at the national level is apparent in the Accident Sequence Precursors Program, a national system sponsored by the U.S.
Nuclear Regulatory Commission (USNRC) for gathering and analyzing data from the required licensee event reports (LERs) of serious near accidents at nuclear power plants (Minarick, 1990; Sattison, p. 89 in this volume). Because significant events, such as LERs, occur infrequently at any one plant, data from all relevant nuclear power plants must be aggregated. If, however, potentially dangerous events occur that do not meet the LER definitions, they are not reported to the USNRC. Hence the trade-off: events classified as nonreportable by LER criteria may provide information about accident precursors, but the data remain within the particular plant and are aggregated at the plant level rather than the national level (although the USNRC representative located at the plant may also know about the nonreportable events). In addition, the Institute of Nuclear Power Operations (INPO) encourages, but does not require, plant managers to report such events to a closed information dissemination system operated by INPO.
Health care organizations, including blood banks, hospitals, and pharmacies, have begun to pool data regarding actual and potential adverse events. The Medical Event Reporting System for Transfusion Medicine (MERS-TM) maintains a database that enables transfusion medicine departments at participating hospitals to pool data on errors and near misses (Battles et al., 1998). Each department has access to its own data and the pooled data, but no department has access to another department’s specific data. In a similar arrangement, the Veterans Health Administration (VHA) Patient Safety Reporting System collects data on close calls and patient safety issues from individuals working in VHA hospitals across the country. VHA also continues to support hospital-based, close-call reporting systems (Bagian, p. 37 in this volume; Heget et al., 2002).
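The access rule described above for MERS-TM, in which each department sees its own reports plus pooled aggregates but not another department's individual reports, can be sketched as a minimal registry. This is an illustrative design, not the actual MERS-TM or VHA implementation:

```python
from collections import Counter, defaultdict

class PooledRegistry:
    """Toy interorganizational event registry: members see their own
    raw reports and pooled aggregate counts, never another member's
    individual reports. Names and layout are hypothetical."""

    def __init__(self):
        self._reports = defaultdict(list)

    def submit(self, dept: str, event: dict) -> None:
        self._reports[dept].append(event)

    def own_reports(self, dept: str) -> list:
        # A member can retrieve only its own raw reports.
        return list(self._reports[dept])

    def pooled_counts(self) -> dict:
        # Everyone can see aggregate counts across all members.
        counts = Counter()
        for events in self._reports.values():
            counts.update(e["type"] for e in events)
        return dict(counts)
```

The design choice mirrors the trade-off in the text: pooling makes rare event types visible across organizations, while the access restriction preserves the confidentiality that makes members willing to contribute.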
The Institute for Safe Medication Practices (ISMP) has established an interorganizational clearinghouse for data from pharmacies and pharmacists on potential and actual adverse drug events, such as mix-ups resulting from drugs with sound-alike names or look-alike packaging (Cohen, 1999). ISMP publishes a newsletter and disseminates warning notices to the professional pharmacy community, mainly lessons learned from the experience of others.
All of these health care organizations have initiated programs for pooling data and have established databases to encourage the identification of precursors. As these innovative programs develop, individual health care organizations may be able to identify precursors from pooled data.
Detecting Signals: Defining and Classifying Safety-Related Events
The method used to classify safety-related events can influence an organization’s capacity to identify accident precursors. One design choice is between broad or precise definitions. By using a broad, general definition of safety-related events, ASRS can capture data on events that may lead to the identification of previously unknown accident precursors. By contrast, the FAA-operated computerized surveillance system used in air traffic control centers applied specific, precisely measured definitions of deviations from safety standards that tended to identify well known conditions that were unlikely to yield new insights into accident precursors (Tamuz, 2001).
Potentially dangerous events may be “defined away” if the conditions do not meet the technical definition of a safety-related event. An example of defining away a potential danger is a near miss over La Guardia Airport that air traffic controllers did not report because, technically, it did not fit the formal definition of an operational error (Tamuz, 2000). Although in this case the controller could not be held accountable for making an error, the near miss represented a significant threat to safety and was a possible source of precursor information.
The classification scheme also influences an organization’s capacity to gather and analyze data about potentially dangerous events. If a safety-related event is classified as an error or a regulatory violation, it can lead to measures designed to maintain individual or organizational accountability. In air traffic control centers, for example, when two aircraft failed to maintain the prescribed distance between them, the event could alternatively be defined as an “operational error” for controllers or a “pilot deviation,” depending on who was held accountable. These similar events with differing labels were analyzed and stored in separate databases, hindering the search for possible common precursors (Tamuz, 2001).
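The effect of classification on precursor searches can be illustrated with a toy example: if analysts map both accountability labels to a common event type before querying, records held in the separate databases can be searched together. The record layout and label map here are hypothetical:

```python
# Hypothetical mapping from accountability-oriented labels to a
# neutral event type, so similar events can be analyzed together.
LABEL_MAP = {
    "operational error": "loss_of_separation",   # controller held accountable
    "pilot deviation": "loss_of_separation",     # pilot held accountable
}

def merged_search(databases: list, normalized_type: str) -> list:
    """Search several separately maintained databases for records that,
    once labels are normalized, describe the same kind of event."""
    hits = []
    for db in databases:
        for rec in db:
            if LABEL_MAP.get(rec["label"], rec["label"]) == normalized_type:
                hits.append(rec)
    return hits

atc_db = [{"label": "operational error", "id": 1}]
pilot_db = [{"label": "pilot deviation", "id": 2}, {"label": "bird strike", "id": 3}]
print(merged_search([atc_db, pilot_db], "loss_of_separation"))
```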
The classification of safety-related events not only influences how these events are detected, but also enhances (or constrains) an organization’s capacity to investigate and draw conclusions from its experience. In an Australian hospital, for example, nurses interpreted and defined away potentially harmful events (Baker, 1997); and in one U.S. hospital pharmacy, the definition of a reportable error led to under-reporting and reduced the flow of medication error data to the hospital, while simultaneously enabling learning within the pharmacy department (Tamuz et al., 2004).
By contrast, in a blood bank, the detection of safety-related events that could not harm patients but were nonetheless classified as posing a threat to the organization (e.g., by prompting a regulatory inspection) triggered the allocation of
organizational resources for investigation and problem solving (Tamuz et al., 2001). Hence, these studies of health care organizations suggest that the definition of safety-related events and their classification into alternative categories influence event detection, as well as the activation of organizational routines for gathering and analyzing information.
Gathering Information: Surveillance vs. Reporting Systems
Surveillance and reporting systems are alternative methods of monitoring known accident precursors and discovering new ones. Data about threats to safety can be gathered either through surveillance (i.e., direct observation or auditing) or through voluntary reporting systems.
Automated safety surveillance systems have been implemented in the air transportation industry. As early as 1986, the FAA implemented a computerized surveillance system in air traffic control centers that automatically detected when an aircraft failed to maintain its assigned separation distance (Tamuz, 1987). Since then, United Airlines has championed the Flight Operational Quality Assurance Program (other airlines support similar programs) based on technologies for monitoring aircraft operations by collecting real-time flight data, such as engine temperature and flight trajectory (Flight Safety Digest, 1998).
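The core logic of such a surveillance system, flagging aircraft pairs that fall below a minimum separation distance, can be sketched in a few lines. This is a geometric toy, assuming planar positions already expressed in nautical miles; a real system would use radar tracks, altitude, and sector-specific separation standards:

```python
import math

MIN_SEPARATION_NM = 5.0  # illustrative threshold; actual standards vary

def separation_nm(a, b):
    """Horizontal distance between two aircraft, given (x, y)
    positions in nautical miles."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def detect_violations(positions: dict) -> list:
    """Flag every aircraft pair closer than the minimum separation."""
    ids = list(positions)
    return [
        (p, q)
        for i, p in enumerate(ids)
        for q in ids[i + 1:]
        if separation_nm(positions[p], positions[q]) < MIN_SEPARATION_NM
    ]
```

Because the check is automatic and exhaustive, such a system yields reliable counts of separation losses, which is precisely the data-reliability advantage (and the limited contextual richness) discussed below.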
One critical trade-off between using a surveillance system and using a reporting system is between data reliability and the richness of information. In automated surveillance systems, counts of safety-related events, such as tallies of operational errors in air traffic control, tend to be more reliable than data obtained through reporting systems. The number of safety-related events submitted to reporting systems, for example, fluctuates with changes in perceived incentives for reporting (Tamuz, 1987, 2001). Computerized surveillance systems provide reliable monitoring of operational errors and adverse events; however, they may be less useful in detecting the contributing factors that lead to a malfunction or harmful outcome.
The trade-off between data reliability and information richness is illustrated by voluntary reporting systems, such as ASRS and ASAP. The data gathered by these reporting systems do not provide reliable indicators of the frequency of safety-related events, but they do enable the identification of accident precursors. Analyses of ASRS reports, for example, have revealed conditions that contribute to accidents, from the well documented consequences of failing to follow standardized landing procedures to the seemingly trivial, but potentially lethal, distraction of drinking coffee in the cockpit, which resulted in the enactment of regulations for a sterile cockpit. In the highly publicized case of the B757 wake turbulence, ASRS reports revealed that skilled pilots almost lost control of their
aircraft even when they maintained the prescribed distance for aircraft trailing a B757 (Reynard, 1994). Hence, although voluntary reporting systems cannot reliably monitor the frequency of errors and adverse events, they can provide important data that may reveal previously overlooked or unknown precursors.
Although individual hospitals have long maintained reporting systems, the underreporting of errors and adverse events is widespread (e.g., IOM, 2000). Adverse drug events in hospitals, for example, are routinely underreported (e.g., Cullen et al., 1995; Edlavitch, 1988). Underreporting to hospital incident reporting systems has been attributed to many factors including shared perceptions of team members (Edmondson, 1996), fear of punishment, and lack of time (Vincent et al., 1999). One design alternative, as noted previously, is to implement close call reporting systems; another is to rely on surveillance rather than on reporting.
Surveillance methods used in hospitals range from traditional labor-intensive methods to new computerized surveillance systems. In some hospitals, nurses and medical researchers periodically audit patient charts to identify errors and adverse events after they have occurred. Other hospitals use sophisticated information technology to identify preventable medical injuries, such as adverse drug events. One of the advantages of these automated surveillance systems is that they can provide a more accurate count of adverse events than reporting systems (Bates et al., 2003).
Gathering Information: Overcoming Barriers to Reporting
A regulatory agency must make a critical trade-off between its responsibility to maintain accountability and its responsibility to identify accident precursors and avert accidents (Tamuz, 2001). This trade-off is reflected in the necessity of choosing between engaging in regulatory enforcement and foregoing punishment to encourage event reporting. Consider the design choices in the FAA Near Midair Collision Reporting System and ASRS. If a pilot reports a near miss, in which an air traffic regulation was violated, to the FAA Near Midair Collision Reporting System, the FAA can initiate enforcement action against the pilot based on his own report. If the pilot reports the same near midair collision to ASRS, he may be eligible for immunity. Thus, the design of the FAA Near Midair Collision Reporting System creates disincentives for reporting, whereas the ASRS immunity provisions remove some of these disincentives.
A similar design choice between cooperating with or separating from regulators is apparent in a comparison of ASAP with ASRS. American Airlines’
ASAP program pioneered an innovative way of overcoming barriers to reporting by building trust and cooperating with FAA regulators, rather than differentiating themselves from them, as in the ASRS model. The FAA does not grant immunity from enforcement action to ASAP program participants. However, if a pilot voluntarily reports a safety-related event that reveals an unintentional violation of an FAA regulation, the FAA responds with an administrative reprimand rather than punitive sanctions (Griffith, 1996).
Based on voluntary ASAP reports of inadvertent violations and other safety-related events, FAA regulators can learn about possible precursor conditions that otherwise might not have been reported, and the airline receives a detailed description of the conditions under which each event occurred. If similar events were reported to ASRS, the airline's name would be removed during de-identification; thus, the airline could not learn directly from the experience reported by its pilots. In ASAP, the FAA appears to have traded the use of punitive means to enforce safety regulations for data necessary to identify precursors and thus improve safety conditions.
Two health care reporting systems are modeled after ASRS: (1) the VHA Patient Safety Reporting System and (2) Applied Strategies for Improving Patient Safety (ASIPS). Although both systems are based on the ASRS model, they confront different legal barriers to reporting. Physicians employed by VHA hospitals are not subject to the same threat of litigation as physicians who practice in other settings. ASIPS, a Denver-based program, has modified the ASRS model to gather data on medical errors in ambulatory settings (e.g., doctors’ offices). Unlike their colleagues in the VHA, members of the ASIPS collaborative are engaged in protecting their data from disclosure in litigation. The system designers, anticipating a legal challenge or security breach, are developing methods to ensure that serial numbers and computer identifiers cannot be used to link ASIPS reports to particular medical errors (Pace et al., 2003). Hence, the choice of confidentiality protections varies with the potential exposure to litigation or other threats.
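The protective method the ASIPS designers describe — ensuring that stored identifiers cannot be traced back to a particular error — can be sketched as follows. The idea is that each incoming report receives a freshly generated random token and no mapping back to the reporter or source record is retained, so neither a subpoena nor a security breach can recover the linkage. The field names are illustrative assumptions, not the actual ASIPS schema.

```python
# Minimal sketch of unlinkable report identifiers, in the spirit of the
# ASIPS confidentiality protections described above. Field names are
# hypothetical; this is not the ASIPS design itself.
import secrets

def deidentify(report):
    """Return a copy of a report stripped of identifying fields and keyed
    by a random token. No lookup table linking token to source is kept."""
    identifying = {"reporter", "patient_name", "chart_number"}
    safe = {k: v for k, v in report.items() if k not in identifying}
    # The token is random, not derived from the input, so it cannot be
    # reversed or re-computed to link the report to a particular error.
    safe["report_id"] = secrets.token_hex(8)
    return safe

report = {
    "reporter": "Dr. X",
    "patient_name": "Y",
    "chart_number": "123",
    "event": "wrong-dose close call",
    "setting": "ambulatory clinic",
}
print(deidentify(report))
```

The key design choice is generating the identifier randomly rather than hashing an existing serial number: a hash of a known identifier can be re-computed by an adversary who holds the original records, whereas a discarded random token cannot.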
As the examples from the aviation, nuclear power, and health care industries show, many types of organizations sponsor and support systems designed to identify precursors. These include government regulatory agencies (e.g., FAA and USNRC), individual organizations (e.g., airlines), hospital systems (e.g., VHA), industry associations (e.g., INPO), professional organizations (e.g., the Institute for Safe Medication Practices), and professional associations (e.g., Airline Pilots Association). Insurance companies could also contribute to precursor
identification through activities designed to improve patient safety. For example, insurance companies could offer discounts in malpractice insurance to hospitals and physicians that participate in patient safety monitoring systems. They could also offer incentives to health care providers in hospitals and ambulatory settings to report close calls and identify precursors.
Three additional policy implications can be drawn from comparisons of industry efforts to identify accident precursors. First, based on the experiences of different industries, we can develop better criteria for choosing among design alternatives and for understanding the trade-offs involved in adopting alternative methods of detecting accident precursors. Second, we can gauge the strengths and weaknesses of existing systems and identify areas of expertise. For example, the aviation industry has developed several methods of detecting and gathering information about potential accident precursors. In addition, it has designed alternative models for aggregating data at different organizational levels, from the airline level to the level of air traffic control facilities to the interorganizational level encompassing everyone who uses the national airspace. Similarly, the nuclear power industry has demonstrated expertise in classifying and triaging events to identify accident precursors and weigh their probabilities.
Finally, we can conclude that every system design, whether organizational or interorganizational, requires trade-offs and has blind spots. No system can identify all of the conditions and behaviors that interact to produce disastrous events. To compensate for blind spots, we need multiple systems for precursor identification.
Acknowledgments
I wish to thank Marilyn Sue Bogner, Howard Kunreuther, and James Phimister for their insightful and instructive comments. Valuable support for this research was provided by the National Science Foundation, Decision Risk and Management Sciences Division (Grant SBR-9410749), and the Aetna Foundation’s Quality Care Research Fund.
References
Aviation Daily. 1996. FAA, American, Pilot Union to Begin ‘Safety Program.’ 316(15): 115.
Baker, H.M. 1997. Rules outside the rules for administration of medication: a study in New South Wales, Australia. Image: The Journal of Nursing Scholarship 29: 155–158.
Bates, D.W., R.S. Evans, H. Murff, P.D. Stetson, L. Pizziferri, and G. Hripcsak. 2003. Detecting adverse events using information technology. Journal of the American Medical Informatics Association 10: 115–128.
Battles, J.B., H.S. Kaplan, T.W. Van der Schaaf, and C.E. Shea. 1998. The attributes of medical event-reporting systems: experience with a prototype medical-event reporting system for transfusion medicine. Archives of Pathology and Laboratory Medicine 122: 231–238.
Brannigan, V., A. DiNovo, W. Freudenburg, H. Kaplan, L. Lakats, J. Minarick, M. Tamuz, and F. Witmer. 1998. The Collection and Use of Accident Precursor Data. Pp. 207–224 in Proceedings of Workshop on Accident Sequence Precursors and Probabilistic Risk Analysis, V.M. Bier, ed. College Park, Md.: University of Maryland Center for Reliability Engineering.
Callback. 2003. Cutoff on takeoff. Callback 286(July): 1. Available online at http://asrs.arc.nasa.gov.
Cohen, M., ed. 1999. Medication Errors. Washington, D.C.: American Pharmaceutical Association.
Cullen, D.J., D.W. Bates, S.D. Small, J.B. Cooper, A.R. Nemeskal, and L.L. Leape. 1995. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Joint Commission Journal on Quality Improvement 21: 541–548.
Edlavitch, S.A. 1988. Adverse drug event reporting: improving the low U.S. reporting rates. Archives of Internal Medicine 148: 1499–1503.
Edmondson, A.C. 1996. Learning from mistakes is easier said than done: group and organizational influences on the detection and correction of human error. Journal of Applied Behavioral Science 32: 5–28.
Flight Safety Digest. 1998. Aviation safety: U.S. efforts to implement flight operational quality assurance programs. Flight Safety Digest 17(7-9): 1–56.
Griffith, S. 1996. American Airlines ASAP. Presentation at the Global Analysis and Information Network (GAIN) Workshop, October 22–24, 1996, Cambridge, Massachusetts.
Hardy, R. 1990. Callback: NASA’s Aviation Safety Reporting System. Washington, D.C.: Smithsonian Institution Press.
Heget, J.R., J.P. Bagian, C.Z. Lee, and J.W. Gosbee. 2002. John M. Eisenberg Patient Safety Awards. System innovation: Veterans Health Administration National Center for Patient Safety. Joint Commission Journal on Quality Improvement 12: 660–665.
Heinrich, H.W. 1931. Industrial Accident Prevention. New York: McGraw-Hill.
Holtom, M. 1991. The basis for safety management. Focus on Commercial Aviation Safety 5: 25–28.
IOM (Institute of Medicine). 2000. To Err Is Human: Building a Safer Health System, L.T. Kohn, J.M. Corrigan, and M.S. Donaldson, eds. Washington, D.C.: National Academy Press.
La Porte, T.R. 1982. On the Design and Management of Nearly Error-Free Organizational Control Systems. Pp. 185–200 in Accident at Three Mile Island: The Human Dimensions, D.L. Sills, C.P. Wolf, and V.B. Shelanski, eds. Boulder, Colo.: Westview Press.
Leape, L.L. 1994. Error in medicine. JAMA 272: 1851–1857.
Levitt, B., and J.G. March. 1988. Organizational learning. Annual Review of Sociology 14: 319–340.
March, J.G., L.S. Sproull, and M. Tamuz. 1991. Learning from samples of one or fewer. Organization Science 2(1): 1–14.
Marcus, A.A., and M.L. Nichols. 1999. On the edge: heeding the warnings of unusual events. Organization Science 10(4): 482–499.
Minarick, J.W. 1990. The USNRC Accident Sequence Precursor Program: present methods and findings. Reliability Engineering and System Safety 27: 23–51.
National Academy of Public Administration. 1994. A Review of the Aviation Safety Reporting System. Washington, D.C.: National Academy of Public Administration.
NRC (National Research Council). 1980. Improving Aircraft Safety: FAA Certification of Commercial Passenger Aircraft. Washington, D.C.: National Academy of Sciences.
Pace, W.D., E.W. Staton, G.S. Higgins, D.S. Main, D.R. West, and D.M. Harris. 2003. Database design to ensure anonymous study of medical errors: a report from the ASIPS collaborative. Journal of the American Medical Informatics Association 10(6): 531–540.
Perrow, C. 1984. Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.
Reynard, W. 1994. Statement of Dr. William Reynard, director, Aviation Safety Reporting System, to the U.S. House of Representatives Subcommittee on Technology, Environment and Aviation, Committee on Science, Space and Technology, July 28, 1994. Pp. 73–231 in 95-H701-21, testimony no. 2, Application of FAA Wake Vortex Research to Safety. Washington, D.C.: Congressional Information Service.
Reynard, W.D., C.E. Billings, E.S. Cheaney, and R. Hardy. 1986. The Development of the NASA Aviation Safety Reporting System, NASA Reference Publication 1114. Washington, D.C.: U.S. Government Printing Office.
Sagan, S.D. 1993. The Limits of Safety. Princeton, N.J.: Princeton University Press.
Stalhandske, E., J.P. Bagian, and J. Gosbee. 2002. Department of Veterans Affairs Patient Safety Program. American Journal of Infection Control 30(5): 296–302.
Tamuz, M. 1987. The impact of computer surveillance on air safety reporting. Columbia Journal of World Business 22(1): 69–77.
Tamuz, M. 2000. Defining Away Dangers: A Study in the Influences of Managerial Cognition on Information Systems. Pp. 157–183 in Organizational Cognition: Computation and Interpretation, T.K. Lant and Z. Shapira, eds. Mahwah, N.J.: Lawrence Erlbaum Associates.
Tamuz, M. 2001. Learning disabilities for regulators: the perils of organizational learning in the air transportation industry. Administration and Society 33(3): 276–302.
Tamuz, M., H.S. Kaplan, and M.P. Linn. 2001. Illuminating the Blind Spots: Studying Organizational Learning about Adverse Events in Blood Banks. Presented at the Academy of Management Annual Meeting, August 2001, Washington, D.C.
Tamuz, M., E.J. Thomas, and K.E. Franchois. 2004. Defining and classifying medical error: lessons for patient safety reporting systems. Quality and Safety in Health Care 13: 3–20.
Tasca, L. 1989. The Social Construction of Human Error. Ph.D. Dissertation. State University of New York-Stony Brook.
Thomas, E.J., and R.L. Helmreich. 2002. Will Airline Safety Models Work in Medicine? Pp. 217–232 in Medical Error, K.M. Sutcliffe and M.M. Rosenthal, eds. San Francisco: Jossey-Bass.
Vincent, C., N. Stanhope, and M. Crowley-Murphy. 1999. Reasons for not reporting adverse incidents: an empirical study. Journal of Evaluation in Clinical Practice 5: 13–21.
Weick, K.E. 1995. Sensemaking in Organizations. Thousand Oaks, Calif.: Sage Publications.