Channing R. Robertson, Ph.D., is Ruth G. and William K. Bowes Professor, School of Engineering, and Professor, Department of Chemical Engineering, Stanford University, Stanford, California.
John E. Moalli, Sc.D., is Group Vice President & Principal, Exponent, Menlo Park, California.
David L. Black, J.D., is Partner, Perkins Coie, Denver, Colorado.
“Scientists investigate that which already is; Engineers create that which has never been.”
Although this is a reference manual on scientific evidence, the Supreme Court in Kumho Tire Co., Ltd. v. Carmichael1 extended the Daubert v. Merrell Dow Pharmaceuticals, Inc.2 decision on the admissibility of scientific evidence to encompass nonscientific expert testimony as well.3 Put another way, experts not proffered as “scientists” are also held to the Daubert standard.4 Who, then, are these nonscience experts, and where do they come from? Many emerge from the realm of engineering, hence the relevance of “engineering” or “technical” expert testimony to this manual.
The Court’s distinction between these two kinds of expert testimony might suggest that there is a bright line dividing science and engineering. Indeed, a great deal has been written about this matter, with arguments made for why science and engineering are either similar or different. It is a conversation that resonates among philosophers, historians, “scientists,” “engineers,” politicians, and lawyers. Apparently even Albert Einstein had a point of view on the issue, as attested by the above quotation. Perhaps this deceptively attractive dichotomy is best resolved by recognizing that, at the end of the day, engineering and science can be as different as they are alike.
There is no shortage of “sound bites” that attempt to distinguish science from engineering and vice versa. Consider, for instance, the notion that engineering is nothing more than “applied science.” This simplistic view, however often recited, has long been discredited.5 It is not the case that science is only about knowing and experimentation, while engineering is only about doing, designing, and building. These are false asymmetries that defy reality. In reality, who is doing science and who is doing engineering are questions to be answered on the merit of accomplishments, not on pedigree alone.
1. 526 U.S. 137 (1999).
2. 509 U.S. 579 (1993).
3. See Margaret A. Berger, The Admissibility of Expert Testimony, in this manual.
4. See David Goodstein, How Science Works, in this manual, for a discussion of science and scientists.
5. Walter G. Vincenti, What Engineers Know and How They Know It (1990).
One can think of engineering in terms of its various disciplines as they relate to the academic enterprise and the names of departments or degrees with which they are associated, for instance electrical engineering or chemical engineering. One also can consider the technological context in which engineering is practiced as in the case of nanotechnology, aerospace engineering, biotechnology, green buildings, or clean energy.
In the same sense that some struggle trying to identify the differences and likenesses between science and engineering, others pursue a different kind of identity crisis by staking out their turf through title assignment. It is pointless to list titles of engineering disciplines because such a list would be incomplete and not stand the test of time as disciplines come and go, merge, diverge, and evolve. Bioengineering, biochemical engineering, molecular engineering, nanoengineering, and biomedical engineering are relative newcomers and have emerged in response to discoveries in the sciences that underlie biological and physiological processes. Software engineering and financial engineering are two other examples of disciplines that have developed in recent years.
In the end, the names of disciplines are not what is critical; they are no more than labels. At best they are imprecise descriptors of the activities taking place within those disciplines and ought not to be relied on for accurate characterizations of the pursuits that may or may not be occurring within them.
Whereas engineering disciplines are often associated with their scientific roots (e.g., mechanical engineering and physics, electrical engineering and physics, chemical engineering and chemistry, bioengineering and biology, biomedical engineering and physiology), some lack this kind of direct association (e.g., aerospace engineering, materials engineering, civil engineering, polymer engineering, marine engineering). Indeed, there are software engineers, hardware engineers, financial engineers, and management engineers. There is no shortage of adjectives here.
Nonetheless, these and many other such discipline titles have meant or mean something to someone, and new ones are emerging all the time as the historical barriers that once separated and defined the “classic” engineering disciplines continue to disintegrate. No longer can we rely on discipline names to inform us of specific enterprises and activities. There is nothing wrong with this, so long as it is recognized that discipline names ought not be used as reliable descriptors subsuming all possible activities that might be occurring within a domain. One must reach into a domain and investigate what kind of engineering is being conducted, and resist the temptation to draw conclusions based on name alone. Doing otherwise could easily lead to an unreliable and inaccurate characterization.
To provide a tangible example, consider personal injury cases, in which central questions often revolve around the specifics of how a particular trauma occurred. Where proximate cause is at issue, the trier of fact can benefit from a thorough understanding of the mechanics that created an injury. The engineering and scientific communities are increasingly called on to provide expert testimony that can assist courts and juries in coming to this type of understanding. What qualifies an individual to offer expert opinions in this area is often a matter of dispute. As gatekeepers for the admission of scientific evidence, courts are required to evaluate the qualifications of experts offering opinions regarding the physical mechanics of a particular injury. As pointed out earlier, however, this gatekeeping function should not rise and fall on whether a person is referred to, or refers to himself or herself, as a scientist or an engineer.
One such cross-disciplinary domain is the study of injury mechanics, which spans the interface between mechanics and biology. The traditional role of the physician is the diagnosis (identification) of injuries and their treatment, not necessarily a detailed assessment of the physical forces and motions that created the injuries during a specific event. The field of biomechanics (alternatively called biomechanical engineering) involves the application of mechanical principles to biological systems and is well suited to answering questions pertaining to injury mechanics. Biomechanical engineers are trained in the principles of mechanics (the branch of physics concerned with how physical bodies respond to forces and motion) and also have varying degrees of training or experience in the biological sciences relevant to their particular interest or expertise. This training or experience can take a variety of forms, including medical or biological coursework, clinical experience, study of real-world injury data, mechanical testing of human or animal tissue in the laboratory, studies of human volunteers in non-injurious environments, or computational modeling of injury-producing events.
Biomechanics by its very nature is diverse and multidisciplinary; therefore courts may encounter individuals offered as biomechanical experts with seemingly disparate degrees or credentials. For example, qualified experts may have one or more advanced degrees in mechanical engineering, bioengineering, or related engineering fields, or in the basic sciences, or may even have a medical degree. The court’s role as gatekeeper requires an evaluation of an individual’s specific training and experience that goes beyond academic degrees. In addition to academic degrees, practitioners in biomechanics may be further qualified by virtue of laboratory research experience in the testing of biological tissues or human surrogates (including anthropomorphic test devices, or “crash-test dummies”), experience in the reconstruction of real-world injury events, or experience in computer modeling of human motion or tissue mechanics. A record of technical publications in the peer-reviewed biomechanical literature will often support these experiences. Such an expert would rely on medical records to obtain information regarding clinical diagnoses, and would rely on engineering and physics training to understand the mechanics of the specific event that created the injuries. A practitioner whose experience spans the interface between mechanics (i.e., engineering) and biology (i.e., science), considered in the context of the facts of a particular case, can be of significant assistance in answering questions pertaining to injury mechanism and causation.
This example illustrates the futility of trying to untangle engineering from science and vice versa, and the inappropriateness of using semantics, dictionary definitions, or labels (e.g., degree names) to parse, dissect, or portray the intellectual activities of an expert witness. In the end, it is an expert’s background and experience that are the dominant defining factors, not whether the expert is called a scientist or an engineer, and not the titles he or she holds.
Although a somewhat overworked part of our lexicon, it is indeed the case that “necessity is the mother of invention.” Engineering breeds a culture of technological responsiveness. An engineer need not know all the “science” explaining a solution before solving a problem.
Take steam engines, for example. Their history goes back several thousand years, and their utility forged the beginning of the industrial revolution late in the seventeenth century. It was not until the middle of the nineteenth century that the science of thermodynamics gained firm footing and offered explanations for the how and why of steam power.6 In this instance, technology came first, science second. This, of course, is not always the case, but it demonstrates that one does not necessarily precede the other, and notions otherwise ought to be discarded. Here the problem was one of wanting to produce mechanical motion from a heat source, and engineers designed and built systems that did this even though the science base was essentially nonexistent.
To reinforce the point that technology can precede science, consider the design of the shape of aircraft wings. This, of course, was driven by the desire of humans to fly, a problem already solved in nature since the time of the dinosaurs but one that had eluded humankind for tens of thousands of years. Practical solutions to this problem began to emerge with the Wright brothers’ first motive-powered flight and continued into the twentieth century before the “science” of fluid flow over wing structures had been fully elucidated. Once that happened, wings could be designed to reduce drag and increase lift using a set of “first principles” rather than relying solely on the results of empirical testing in wind tunnels and prototype aircraft.7
6. Pierre Perrot, A to Z of Thermodynamics (1998).
7. The pioneering aerodynamicist Walter Vincenti provides a detailed and fascinating account of this. See Vincenti, supra note 5, ch. 2; see also John D. Anderson, Ludwig Prandtl’s Boundary Layer, Physics Today, December 2005, at 42–48.
So, in short, engineers create, design, and construct because interesting and challenging problems arise in the course of human events and emergent societal needs. Whether a science base exists, or only partially exists, is just one of a myriad of constraints that shape the process. Other constraints might include, but are not limited to, the availability of materials; device shape, size, and/or weight; cost; demand; efficiency; safety; robustness; and utility. It has been said, perhaps with some overstatement, that if engineers waited until scientists completed their work, they might well still be starting fires with flint stones.
So when faced with a vexing and challenging problem, along with its particular or peculiar constraints, an engineer seeks a path to follow that has a reasonable chance of leading to a solution. In so doing an engineer must contend with uncertainty and be comfortable with it. In very few instances will everything be known that is required to proceed with a project. Assumptions need to be made, and here it is critical that the engineer understand the difference between what is incidental and what is essential. There are excellent assumptions, good assumptions, fair assumptions, poor assumptions, and very bad assumptions. Along this spectrum the engineer must carefully choose those assumptions that ensure the robustness, safety, and utility of a design without undue compromise. This is the sort of wisdom that comes from experience and is not often well honed in the novice engineer.
The imprecision that accompanies uncertainty can be cast as a disadvantage for the engineer in the role of expert witness. Yet this very uncertainty lies at the heart of technological innovation and should be viewed not so much as a weakness as a strength. To overcome uncertainty in design under the burden of constraints is the hallmark of great design. Although this is subtle and not always well understood by those who seek precision (i.e., why can’t you define your error rate?), it is the way the world works, and one must accept it for what it is. Assumptions and approximations are key elements of the engineering enterprise and must be regarded as such. And as with all things, hindsight might suggest that a particular assumption or approximation was not appropriate. Even so, given what was known, it may well have been the right thing to do at the time it was made.
In addition to evolving business opportunities and changing financial markets, technological innovation results from the continuing, and many times unexpected, advances in science and technology that occur as time passes. Buildings constructed in Los Angeles in the 1940s would never be built there in the same way now; we have a much better understanding of earthquakes and the forces they exert on structures than we did then. Airbags were not placed in automobiles until recently because we did not have cost-effective systems and materials to accurately measure deceleration and acceleration forces, trigger explosives, contain the explosion, and do all this on a timescale that was effective without harming an occupant more than the impending collision would. It is unavoidable that as we learn from new discoveries about the natural world and accumulate more experience with our designed systems, products, and infrastructure, engineers will be in an increasingly better position to move forward with improved and new designs. It is both an evolutionary and a revolutionary process, one that produces both failures and successes.
The genesis of nearly every object, thing, or environment conceived by engineers is the design process. Surprisingly, although products designed using it can be incredibly complex, the general tenets of the design process are relatively simple, and are illustrated in Figure 1.
The progression is iterative from two perspectives: (1) Changes in the design resulting from testing and validation lead to new formulations that are retested. (2) After the design is complete, performance data from the field can also lead to design changes.
As a first step, engineers begin with a concept—an idea that addresses a need, concern, or function desired by society. The concept is refined through research, appropriate goals and constraints are identified, and one or more prototypes are constructed. Although confined to a sentence here, this stage can take a significant amount of time to complete.
In the next phase of the design process, the prototypes are tested and evaluated against the design requirements, and refinements, perhaps even significant changes, are made. The process is iterative: faults identified during testing manifest themselves as changes in the concept, and the testing and evaluation process restarts from a higher point on the learning curve. As knowledge is gained with each iteration, the design progresses and is eventually validated. Even so, as alternative solutions are considered, certain undesirable characteristics sometimes cannot be completely eliminated through changes in design and must instead be guarded against to minimize their impact on safety or other constraints. A classic example of this step in the design process is the installation of a protective shield over the blade in a table saw: although the saw has the unwanted characteristic of cutting fingers or arms, the blade clearly cannot be eliminated (designed out) in a functioning product. As a last resort, hazards that cannot be designed out or guarded against can be addressed through warnings. Not every design is amenable to guarding or warning; instead, the iterative process of testing and prototype revision is relied upon to perfect designs. Indeed, in some instances, an acceptable design solution cannot be found and the work is abandoned.

Figure 1. Schematic of the engineering design process.
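The iterative loop sketched in Figure 1 can be illustrated in code. The toy model below is only an abstraction of the process described above; the bracket design, the strength model, and all numbers are invented for illustration.

```python
# Toy illustration of the iterative design loop: a bracket whose
# thickness is increased until it meets a strength requirement, or
# the design is abandoned. Names and numbers are hypothetical.

def strength(design):
    # Assume strength scales linearly with thickness (illustrative only).
    return 50.0 * design["thickness_mm"]

def design_cycle(concept, required_strength, max_iterations=20):
    """Prototype -> test -> refine until validated, or abandon."""
    design = dict(concept)
    for _ in range(max_iterations):
        if strength(design) >= required_strength:
            return design, "validated"      # requirement satisfied
        # Fault found during testing: feed it back into the concept.
        design["thickness_mm"] += 0.5
    return design, "abandoned"              # no acceptable solution found

final, status = design_cycle({"thickness_mm": 2.0}, required_strength=300.0)
```

In practice each iteration would involve prototype construction and physical or computational testing; here the "test" is a single inequality, and guarding and warnings (the last-resort steps in the text) are omitted.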
The testing process itself can be complex, ranging from simple evaluations examining a certain characteristic to multifaceted procedures that evaluate the prototype under the conditions it is anticipated to see in the real world. The latter type of evaluation is often called end-use testing and is very effective in identifying faults in the prototype. Because of time constraints, many designs cannot be evaluated over their anticipated life cycle (a product expected to last for 20 years cannot be tested for 20 years during development), so the testing cycle is often accelerated. For example, if it is known that a pressure vessel will see 50,000 cycles over a 10-year lifetime, those cycles can be performed in several months and the resultant effects on vessel performance established. Another method of accelerating the evaluation cycle involves testing at an elevated temperature and using scientific theory and principles to equate the temperature increase to a timescale reduction. The efficacy of this approach is highly dependent on correct execution, but done with appropriate care, it allows product development to go forward rather than having good or even great designs languish on the drawing board because there is no feasible way to validate them in the exact end-use environment.
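Both acceleration strategies can be made concrete with a short calculation. Time-temperature acceleration is commonly (though not universally) modeled with an Arrhenius relationship; the activation energy and temperatures below are illustrative values, not drawn from any particular product.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_test_c):
    """Arrhenius acceleration factor AF = exp[(Ea/k)(1/T_use - 1/T_test)].

    Temperatures are given in Celsius and converted to kelvin. One hour
    of testing at t_test_c ages the product like AF hours at t_use_c,
    assuming a single thermally activated degradation mechanism.
    """
    t_use_k = t_use_c + 273.15
    t_test_k = t_test_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

# Illustrative: Ea = 0.7 eV, service at 25 C, oven at 85 C gives an
# acceleration factor of roughly 100, so a month in the oven stands in
# for several years of service.
af = acceleration_factor(ea_ev=0.7, t_use_c=25.0, t_test_c=85.0)

# Cycle compression, as in the pressure-vessel example: 50,000 lifetime
# cycles run at (say) one cycle every 3 minutes take about 104 days.
test_days = 50_000 * 3 / (60 * 24)
```

The choice of activation energy dominates the result, which is one reason the text stresses that the efficacy of this approach depends on correct execution.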
Regulations, standards, and guidelines also play an important role in the testing of products during the design process. Federal requirements are imposed on the design and testing of aircraft, medical devices, and motor vehicles, for example, and largely govern how those products are evaluated by engineers. Standards organizations such as the American Society for Testing and Materials (ASTM), the American National Standards Institute (ANSI), and the European Committee for Standardization (CEN) promulgate test methods and associated performance requirements for a large number of objects and materials, and are relied on by engineers as they evaluate their designs. It is critical to understand, however, that ASTM, ANSI, CEN, and other such national and international standards organizations describe testing methods that engineers use to obtain reliable data about the products they are evaluating (or components thereof); most often, these methods do not in and of themselves provide a means to evaluate a finished product in its actual end-use environment. It is also important to understand the difference between a performance standard and a testing standard: the former actually specifies values (strength, ductility, environmental resistance) that a product must achieve, whereas the latter simply describes how a test to measure a parameter should be conducted. It is the engineer’s job to use the correct testing procedures from those that have been approved and on which he or she can rely. Alternatively, if no approved test exists, the engineer must create one that is reproducible, repeatable, reliable, and efficacious. Furthermore, it is the engineer’s job to ensure the relevance of such testing to the overall and final product performance in its end-use environment. No testing or standards organization can foresee, nor does any claim to foresee, all possible combinations of product components, design choices, and functional end-use requirements.
Therefore, testing of a design in accordance with a testing standard does not necessarily validate the design, nor does it necessarily mean that the design will function in its end-use environment.
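The distinction between a testing standard and a performance standard can be put in concrete, if hypothetical, terms. In the sketch below the measurement function stands in for a testing standard, which fixes only the procedure, while a separate acceptance value stands in for a performance standard; the procedure, names, and numbers are all invented and correspond to no actual standard.

```python
# Hypothetical illustration: a test standard specifies HOW to measure;
# a performance standard specifies WHAT value must be achieved.

def measure_tensile_strength_mpa(specimen):
    """Stand-in for a standardized test METHOD (procedure only).

    A real testing standard would fix specimen geometry, strain rate,
    conditioning, and so on; it says nothing about pass/fail.
    Stress in MPa = force in newtons / area in square millimeters.
    """
    return specimen["breaking_force_n"] / specimen["cross_section_mm2"]

# A performance standard, by contrast, sets the acceptance value.
REQUIRED_STRENGTH_MPA = 400.0   # hypothetical performance requirement

specimen = {"breaking_force_n": 42_000.0, "cross_section_mm2": 100.0}
measured = measure_tensile_strength_mpa(specimen)   # 420 MPa
passes = measured >= REQUIRED_STRENGTH_MPA
```

Passing this comparison says nothing about the finished product in its end-use environment, which is the point the text makes about validation.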
After testing and validation are complete, and the product is introduced to the market, the design process is still not finished. As field experience is gained, and products are used by consumers and sometimes returned to the manufacturer, engineers often fine-tune and perfect designs based on newly acquired data. In this part of the design process, engineers will analyze failures and performance
problems in products returned from the field, and adjust product parameters appropriately.8 The process of continual product improvement, illustrated by an arrow from the “Go” stage to the “Design/Formulate” and “Test/Validate” stages in Figure 1, is taught to engineers as a method to effectively optimize designs. Such refinements of product design are often the topic of inquiry in depositions of engineers and others involved in product design, and they are frequently misunderstood as an indication that the initial design was defective.9 The engineering design process anticipates review and ongoing refinement of product design as a means of developing better and safer products. In fact, retrospective product modification is mandated as company practice in some industries, and regulated or suggested by the government in others. For example, examination of FDA guidelines for medical device design shows a process that mirrors the one described above.
Another important component of the design process relates to changes in technology that render a design, design feature, or even tools used by an engineer obsolete. Engineers consider obsolescence to be a consequence of advancement, and readily adjust designs, or create new designs, as new technology becomes available. This concept is apparent in the automotive industry, where tremendous advances in restraint systems and impact protection have greatly reduced the risk of fatal injuries from driving (see discussion below). Although vehicles with lap belts as the sole means of occupant protection would today be considered unacceptable, they were by no means deficient when introduced in the 1950s. From the engineer’s perspective, errors and omissions in the design process can render a design defective; however, changes in technology can render a design obsolete, not retrospectively defective.
Of course even well-designed products can fail, especially if they are not manufactured or used in the manner intended by the design engineer. For example, a steel structure may be adequately designed, but if the welds holding it together are not properly made, the structure can fail. Similarly, a well-designed plastic component manufactured in such a way as to overheat and degrade its constituents may also be prone to premature failure. In terms of misuse of a product, most engineers are trained to consider foreseeable misuse as part of the design process, and one can generally expect to encounter a debate over what is reasonably foreseeable and what is not.
8. Although feedback on product performance and failure analysis on returned products is most often used to perfect designs, the iterative nature of the process can also cause the design to progress toward failure when cost becomes the driving factor.
9. Although the reasons for subsequent refinements in product design may be explored in depositions, Federal Rule of Evidence 407 bars the introduction of evidence of such improvements at trial as evidence of a defect in a product or a product’s design.
Almost everything that an engineer designs involves some aspect of safety, and the elegance and efficiency of a design are often forced to balance safety against competing parameters such as cost and physical constraints. The legal dilemmas that often arise from this balance (e.g., assertions that safety must be considered above everything else, or that a particular design should or could have been safer) are a direct result of the way an engineer must deal with safety in the reality of the engineering world. A discussion of how safety factors into design, and of “how safe is safe enough,” is therefore prudent for an understanding of engineering and engineering design. The reader should note that, in the framework of this discussion, risk is something engineers constantly face, and while we discuss what levels of risk are acceptable, the context is clearly engineering; no legal construct is intended.
There is practically no product that cannot be made safer by reducing the product’s benefits (making it more inconvenient), by increasing its cost, or both. In product design, safety is just one of many variables factored into the design, as is cost, and often safety and cost trade off directly at the product’s price point. There are rarely instances in which small cost changes render a substantial improvement in risk. Safety always has a cost; the question is whether the consumer will find that cost reasonable in the face of what else the design has to offer. Conversely, the claim that a product is as safe as possible is almost never true either.
The simple and completely correct answer to the question “How safe is safe enough?” is “It depends.” Exactly what safety is, and what conditions determine its adequacy, that is, what adequate safety depends on, are the topics briefly discussed in the following sections.
Few words are used more often in the context of a product liability tort than the words “safe” and “unsafe,” and their close cousin, “defective.” Because the word “safe” is commonly used in so many different contexts, it is seldom, if ever, used with precision. Indeed, its common use has given it a number of meanings, some of which are in conflict.
Intuitively we understand the word and have a grasp of what a speaker probably means when declaring a product or environment “safe.” We have to say “probably” because some would mean by a “safe” product one that presents no risk to the user under normal circumstances, and others would mean no risk to the user under any circumstances. Still others who ask the question “how safe is safe enough?” clearly evidence an understanding that safety is a continuum and not an absolute. Although “safe” is a simple word, it is used in so many ways that rigorous definition presents much of the complexity of other deceptively simple but widely used four-letter words, for example, “good.”
Fortunately, there is a whole field of scholarship, science, and technology related to the study of “safety.” The field was spawned during the industrial revolution, when it came to be recognized that preventable industrial accidents were simply economically, if not morally, unacceptable.10 For the remainder of this discussion, we examine the concept of safety as it relates to the possibility of physical harm to persons.
Safety is technically defined, and empirically measured, by the concept of “risk.” And often a speaker who declares a product or environment “safe” does indeed mean to say that the product or environment is risk-free. However, as we will discuss in more detail, there is no product or product environment that attains the ideal status of “risk-free.”11 Every product manufactured by man, with his imperfections, and every environment, no matter how carefully constructed, presents some risk in its use, even if this risk is extremely small. This fact of life is easily illustrated.
For example, the U.S. Consumer Product Safety Commission (CPSC) estimates that nationally in the year 2007 alone, there were approximately 42,000 injuries associated with the use, and more often the misuse, of first-aid equipment that were serious enough to require treatment at a hospital emergency room. Thousands of these injuries were associated with the use of first-aid kits. The CPSC maintains the National Electronic Injury Surveillance System (NEISS), which monitors a statistically selected sample of all the emergency rooms in the United States, so that data collected on each consumer injury associated with the categories of consumer products under the CPSC’s jurisdiction can be extrapolated to a national estimate.
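Mechanically, an extrapolation of this kind attaches a statistical weight to each sampled emergency-room case, roughly the number of national cases that one sampled case represents given the sampling design, and sums the weights to form the national estimate. The sketch below shows only the shape of that calculation; the cases and weights are invented.

```python
# Sketch of sample-to-national extrapolation in NEISS-style injury
# surveillance: each case reported by a sampled hospital carries a
# statistical weight, and the national estimate is the sum of weights.
# Products and weights below are invented for illustration.

sampled_cases = [
    {"product": "first-aid kit", "weight": 78.4},
    {"product": "first-aid kit", "weight": 112.9},
    {"product": "cotton swab",   "weight": 95.1},
]

national_estimate = sum(case["weight"] for case in sampled_cases)
```

Real NEISS weights come from a stratified probability sample of hospitals, and published estimates carry sampling error; none of that detail is modeled here.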
It is not immediately obvious how so many injuries could be associated with first-aid equipment. And this reaction is an excellent lesson about the reliability of intuition for determining risk.12 One soon learns that the cotton swabs in a first-aid kit can puncture eardrums; the ointments, pills, and antibiotic creams can be ingested by infants; the ice packs can cause thermal burns to the skin; and the cotton can become lodged in all sorts of unintended places.
With the understanding that there are no risk-free products, then we have no choice but to define safe in terms of the amount of risk. Of course with “risk” defining “safe,” the task of defining “safety,” or “safe enough,” has been replaced
10. The rigorously scientific portion of this field is a product of the past 50 years. Although it has no single father, the seminal contributions of Dr. Chauncey Starr, ultimately recognized through his receipt in 1990 of the National Medal of Technology from President George H.W. Bush, deserve mention. http://www.rpi.edu/about/hof/starr.html.
11. S.C. Black & F. Niehaus, How Safe Is “Too” Safe, 22 IAEA Bull. 1 (1980); Water Quality: Guidelines, Standards and Health 207 (Lorna Fewtrell & Jamie Bartram eds., 2001).
with the task of defining “risk” or “acceptable risk.” This then is the definition of safe; something is “safe” when it presents “acceptable risk”13 (the reader is again reminded that we are discussing an engineering, not legal, construct).
“Risk” is another of those deceptively simple four-letter words that society uses in a wide variety of ways. Perhaps a few decades ago, when the field was developing its rigorous intellectual underpinnings, a term other than “risk” could have been chosen. But now there is a Harvard Center for Risk Analysis,14 a Food Safety Risk Analysis Clearinghouse at the University of Maryland,15 and many other academically oriented risk analysis organizations too numerous to name. There is a convenient Web site, Risk World, that lists many of the other Web sites that reference risk analysis.16 Risk as the technical term for safety is now too institutionalized to be changed, and for the remainder of this discussion we are concerned with safety as the risk of physical harm, that is, health or safety risk.
The concept of risk is slightly more complicated and significantly more rigorous than the concept of safety. Again we have an intuitive understanding of “risk”: it involves some notion of probability, more specifically the probability of some “bad” thing. In the case of safety, the “bad” thing is injury or physical harm.
Risk is often empirically measured and expressed quantitatively, and a “risk” number always contains units of frequency (or probability) and severity. This is a substantial advantage over the concept of “safe.” It would make no sense to say, “this product was found to be 2.73 safe.” Risk, on the other hand, is the measure of safety. For example, the fatal risk of driving in the United States in 2007 was 1.36 fatalities for every hundred million vehicle-miles traveled (note: 100 million = 100,000,000 = 10⁸).17 This is a risk number because it contains a severity, “fatal,” and the frequency, per every 10⁸ miles. This fatality risk is not a complete measure of the safety or risk of U.S. vehicular travel in 2007, because the same 10⁸ vehicle-miles traveled that produced the 1.36 fatalities also produced 82 injuries, as “injury” is defined by the National Highway Traffic Safety Administration (NHTSA).18
13. International Organization for Standardization & International Electrotechnical Commission, ISO/IEC Guide 51: Safety Aspects—Guidelines for Their Inclusion in Standards (2d ed. 1999); William W. Lowrance, Of Acceptable Risk: Science and the Determination of Safety 8 (1976); Fred A. Manuele, On the Practice of Safety 58 (3d ed. 2003); National Safety Council, Accident Prevention Manual—Engineering & Technology 6 (Philip Hagan et al. eds., 12th ed. 2000).
17. National Highway Traffic Safety Administration, Motor Vehicle Traffic Crash Fatality Counts and Estimates of People Injured for 2007, DOT HS 811 034 (Sept. 2008, updated Feb. 2009) (hereinafter “NHTSA, Motor Vehicle Traffic Crash Fatality Counts”).
18. Id., slide 9.
These two different risk metrics, one for injuries and one for fatalities, naturally invite the question of a single metric that characterizes the risk, and therefore safety, of highway travel in the United States.
Sadly, no such single metric exists. For decades the risk analysis community has worked on developing some calculus through which injuries of differing severity could be rigorously combined and expressed as a defensible “average” severity. Some safety data are collected in a form that naturally lends itself to this exercise. When occupational injury severity is characterized by a “lost workday” metric (the more severe injuries obviously result in more lost days of work), the average number of lost workdays is a defensible average severity with which one can characterize a population of occupational injuries. But this exercise quickly breaks down in the face of permanently disabling occupational injuries and deaths. Obviously, one could impute an entire career’s worth of lost workdays in the case of fatal or permanent injury, but then these injuries would completely overwhelm all other types of occupational injury. And the issue of whether a permanently disabling injury is really of the same severity as a fatal injury remains unresolved.
Similarly, the CPSC attempted, soon after its creation in the early 1970s, to develop a geometric sliding scale to numerically categorize the differing consumer product–associated injury severities being treated in the hospital emergency rooms that the agency monitored. The CPSC scale had six to eight severity categories over the years, to which numerical weights were applied, ranging from 10 for severity category 1 (mild injuries and sprains) to 34,721 for severity category 8 (all deaths) in its original configuration. The weighting for deaths has changed and has been as low as 2516. An amputation was accorded a weight of 340 and fell into category 6, unless it resulted in hospitalization, at which point it became a category 7 injury with a weight of 2516. In the end, this scheme has proved generally unsatisfactory, but it still appears in the occasional CPSC document and is used to generate a “mean severity” for emergency room–treated injuries.19 Even if somehow a calculus for comparing and combining various injury severities could be developed, the challenge of how to compare the risk of differing injury frequencies at different severity levels would remain. There is practically no chance that the relationship would be linear, and the nonlinear characteristics would be highly subjective.
Instead of trying to develop a calculus to combine severities with differing frequency, it has become the custom and practice in the risk analysis community to express risk frequency or probability by stratified severity. That is, if a level of severity is specified, then the risk likelihood is stated. There is no agreement on the proper stratification, but rather a de facto consensus that fatal injuries are the most severe, and fatality risk is commonly measured. In addition, calculations of accident risk with no injury, injury risk, and hospitalized injury risk are often seen
19. U.S. Consumer Product Safety Commission, 1995 Annual Report to Congress A-5 (1995).
in the risk literature. Rather than being combined into a single metric, these risks are expressed as independent risk frequencies or probabilities. Specialized average severities, such as average number of lost workdays as a risk metric for the severity of average occupational injury, are occasionally used.
The upshot of our inability to develop a severity calculus is that risk metrics cease to be parameters with units of both frequency and severity, and become merely frequencies or likelihoods of an injury of a given severity. Frequencies are easier to compute and merely require what is called “numerator” data (i.e., the actual number of adverse events for which the risk is being calculated) and “denominator” data (i.e., some measure of the opportunity to have the adverse event). In the previously cited 1.36 fatal injuries per 10⁸ miles of vehicle travel, the numerator datum was the 41,059 deaths in 2007 traffic accidents and the denominator datum was the 3,029,822 million vehicle-miles traveled by all vehicles in 2007.20 The division of these two numbers gives 1.36 deaths per 10⁸ miles. Vehicle-miles traveled (VMT) is one obvious measure of the opportunity to have a vehicular accident. However, it is not the only measure. If the data are available, vehicle-hours can be substituted for vehicle-miles, and then the fatal risk can be expressed as a frequency per vehicle-hour. Measures such as miles and hours are often called “exposure” data, and must be some empirical measure of the opportunity to encounter the hazard (the adverse event itself) for which the risk is being calculated. The “correct” exposure measure is usually determined by the analysis being performed. Miles are appropriate for on-road vehicles, because travel is what automotive products are intended to produce. For off-road recreational vehicles, where recreation rather than travel is the purpose of the product, hours of use would probably be a more appropriate exposure measure.
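The numerator/denominator arithmetic just described can be sketched in a few lines of Python. The figures are the 2007 NHTSA numbers cited in the text; the variable names are ours:

```python
# Risk frequency = numerator data / denominator ("exposure") data.
fatalities = 41_059              # numerator datum: 2007 U.S. traffic deaths
vmt = 3_029_822 * 1_000_000      # denominator datum: 3,029,822 million vehicle-miles

# Express the risk per 100 million (10^8) vehicle-miles traveled.
risk_per_1e8_vmt = fatalities / vmt * 1e8
print(f"{risk_per_1e8_vmt:.2f} deaths per 10^8 VMT")  # → 1.36 deaths per 10^8 VMT
```

The same two-number structure applies to any exposure measure: substituting vehicle-hours for vehicle-miles in the denominator would yield a risk per hour instead of a risk per mile.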
Having determined that the fatality risk of driving in the United States is 1.36 deaths per 10⁸ VMT, does that mean one’s risk of dying in a traffic accident is 1.36 × 10⁻⁸ every time a mile is driven? No. At the risk of presenting too much detail, we can use this one risk parameter as a tool to briefly illustrate that many assumptions are inherent in any risk calculation, and that questions should arise in the court’s mind when encountering any number that purports to represent the “risk” and therefore “safety” of any product or activity.
First, the number 1.36 is the gross fatality risk for vehicular travel in the United States. It is the risk we as a society de facto accept for the benefits of vehicular travel. Some of those deaths are pedestrians, motorcyclists, passengers, and bicyclists, and their deaths are part of the risk society must accept to have motorized vehicular travel. But, the fatal risk to you as a driver, by your “exposure” driving a mile, clearly does not involve any pedestrian risk or bicycle risk
20. NHTSA, Motor Vehicle Traffic Crash Fatality Counts, supra note 17, slide 40.
or motorcycle risk or vehicle passenger risk. Thus, fatally injured pedestrians (4654),21 pedal cyclists (698),22 motorcyclists (5154),23 vehicle passengers (8657),24 and others (147 skateboarders, etc.)25 have to be subtracted from the 41,059 traffic deaths in the 2007 numerator datum to compute a “fatal risk of you being a driver” number, because none of them was driving a car. That leaves us with 21,647 fatally injured vehicle drivers in 2007.26 Because all of the vehicles had to have a driver to go even a mile, it might be tempting to just use the 3,029,822 million vehicle-miles number for 2007 as the denominator without adjustment. But, to be accurate, the motorcycle operators were “vehicle” drivers, and so we cannot remove their 5154 deaths from the numerator without removing the approximately 13,610 million vehicle-miles those motorcycles were driven27 from the denominator datum. Because the motorcycle operator fatal injury risk per mile is 37.86 per 10⁸ VMT,28 more than 52 times that of an automobile driver, removing the motorcycle data entirely when trying to compute an automotive risk number is sound. If we do the appropriate adjustments, then we compute a fatality risk for a nonmotorcycle vehicle driver of 0.718 deaths per 10⁸ VMT.
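A rough reconstruction of this driver-only adjustment follows, assuming the slide figures quoted in the text. (The individual category counts quoted above sum to a driver total slightly different from the 21,647 the text reports, presumably because of rounding or category overlap in the underlying NHTSA slides, so the sketch uses the text’s figure directly.)

```python
# Remove non-driver deaths from the numerator and motorcycle miles from the denominator.
driver_deaths = 21_647                # fatally injured nonmotorcycle drivers, 2007 (text figure)
vmt_millions = 3_029_822 - 13_610     # all VMT minus motorcycle VMT, in millions of miles

risk_per_1e8_vmt = driver_deaths / (vmt_millions * 1e6) * 1e8
print(f"{risk_per_1e8_vmt:.3f}")      # → 0.718

# Sanity check: motorcycle operator risk is roughly 52-53x the nonmotorcycle driver risk.
motorcycle_risk = 5_154 / (13_610 * 1e6) * 1e8   # ≈ 37.87 per 10^8 VMT
print(f"{motorcycle_risk / risk_per_1e8_vmt:.0f}x")
```

Note how both numerator and denominator must be adjusted together: removing motorcycle deaths while leaving motorcycle miles in the denominator would understate the driver risk.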
Now, we can again ask the question, “Is 0.718 × 10⁻⁸ one’s risk of being killed every time a mile is driven?” The answer is now “Possibly, but unlikely.” This risk number is the composite risk for all drivers in society for 2007. And, because of lifestyle choices, this number might serendipitously be accurate for some but not for everyone. Every driver has significant control over the majority of his or her risk of being killed on the road. For example, 33.6% of the fatally injured drivers, 7283, almost exactly one-third, had blood alcohol levels at or above 0.08 g/dL.29 Exactly how much a blood alcohol level of 0.08 increases one’s risk of dying per mile driven is a topic of some debate, but the consensus would fall somewhere between 3 and 5 times. You are much more likely to be killed if you drive on the weekends during the early morning hours. Even restricting ourselves to passenger vehicle fatalities in the daytime, when 82% of vehicle occupants wear their seat belts, 45% of the drivers killed in the daytime were unrestrained by their seat belts.30 Numerous other decisions that we make concerning our driving or circumstances that affect us, such as the size of the car we drive, cell phone use,
22. National Highway Traffic Safety Administration, Traffic Safety Facts 2007 Data: Pedestrians, DOT HS 810 994, at 3 (hereinafter “NHTSA, Pedestrians”).
23. National Highway Traffic Safety Administration, Traffic Safety Facts 2007 Data: Motorcycles, DOT HS 810 990, at 1 (hereinafter “NHTSA, Motorcycles”).
24. NHTSA, Motor Vehicle Traffic Crash Fatality Counts, supra note 17, slides 52, 74, 85.
25. Id., slide 9.
26. Id., slide 40.
28. NHTSA, Motorcycles, supra note 23, at 1.
29. National Highway Traffic Safety Administration, Traffic Safety Facts: 2007 Traffic Safety Annual Assessment—Alcohol-Impaired Driving Fatalities, DOT HS 811 016, at 2 (Aug. 2008).
30. NHTSA, Pedestrians, supra note 22, at 3.
regard for yellow lights, aggressiveness, medication, vision correction, etc., may contribute in some way to the likelihood that we will be fatally injured driving the next mile, but are beyond the scope of this brief discussion.
With some understanding of what comprises the calculation of a risk metric, we can now turn to the more important questions related to its meaning. A fair question about the vehicular risk we just examined might be: “Is a fatal motor vehicle risk of 1.36 deaths per 10⁸ VMT good or bad?” Should society be ashamed or proud? This question, for vehicle safety and every other arena of risk analysis, can only be answered comparatively. The only absolute risk standard is “zero,” but this ideal can never be achieved. So, to answer the question of how “good” the 1.36 number is, we can look to several comparisons. A logical starting point might be previous years; are we getting better or worse? Fortunately, with a few exceptions (such as motorcycles), everything is getting safer, and has been for the past 100 years. Although 1.36 people dead for every 10⁸ VMT is surely not desirable, in 1966 that same figure was more than five.31
Table 1. Fatalities per 100 Million Vehicle Miles Traveled
|Year||Fatalities per 10⁸ VMT|
These data illustrate the fact that a risk number such as 1.36 deaths per 10⁸ VMT is practically meaningless in isolation. But when put in a historical context, or in the context of other products or activities, a perspective is gained from which to evaluate the magnitude of the risk. As can be seen in Table 1, we as a society are making steady progress on reducing the fatal risk of driving, and our current risk number
31. Matthew L. Wald, Deaths on Motorcycles Rise Again, N.Y. Times, Aug. 15, 2008, at A11.
32. NHTSA, Motor Vehicle Traffic Crash Fatality Counts, supra note 17, slides 52, 74, 85.
does not look too bad. Similarly, if our risk number were presented in the context of the fatal highway risk of other industrialized nations, it would compare very favorably as well.
With an understanding of how risk is calculated, and that risk must necessarily be viewed in a comparative context, we now turn our discussion back to our original question “how safe is safe enough?” which in light of what we have learned must be rephrased “how much risk is acceptable?”
How much risk is acceptable is not a simple question; entire books are written with “acceptable risk” in the title. However, a simple answer to this question is typically another question: “acceptable to whom?” As individuals we exhibit radically different de facto risk acceptance, and the same individual will exhibit significantly different risk acceptance throughout his or her lifetime. Certainly, a stuntman and the average person would have widely divergent views on what is an acceptable risk. And neither could, nor probably should, make this decision for society as a whole. There is no absolute standard of how much risk is too much or too little, but innumerable federal, state, and voluntary standards prescribe maximum risk levels, and we touch on them briefly.
Risk acceptance has been studied extensively, and there are more than a dozen factors that influence how much risk is acceptable either to an individual, or to society as a whole, in a given situation. And they are not always the same factors. Examining and discussing all these factors is beyond the scope of this guide, but a few of the most important are illustrated.
Probably the single most important factor for determining how much risk is “acceptable” is how much “benefit” we gain from accepting the risk. We are willing to accept substantial risk for substantial benefit. Motorized vehicular transportation confers tremendous benefits in our society and almost the entire population participates, and by our participation we indisputably evidence our de facto “acceptance” of the known risks for the known benefits, even if we do not find the risks of driving intellectually “acceptable.” That does not mean we have to like the level of risk, or that we “accept” the current level of risk in the sense that we do not need to do anything about it. Indeed we spend billions and billions of dollars trying to reduce the level of risk associated with motorized vehicular travel. That being said, the overwhelming majority of the current population finds the current level of motorized vehicular travel risks low enough, given the benefits, to participate. This would not be too surprising if the level of risk associated with motorized transportation were low, because the benefits of motorized transport are clearly high. However, the fatal risk associated with motorized vehicle travel is not low.
33. Although acceptable risk is also a legal concept, we are merely using engineering vernacular in this chapter, and no legal construct is intended.
Returning to our fatality risk for a nonmotorcycle vehicle driver of 0.718 deaths per 10⁸ VMT, this translates into a fatal risk of 0.718 × 10⁻⁸ for each mile. That 10⁻⁸ term makes this number quite small, and the fatal risk per mile low. However, very few people drive just a mile in a week or year. In fact, according to the Federal Highway Administration, the average U.S. driver logs about 13,500 miles behind the wheel every year.34 That means the average U.S. driver faces a 0.718 × 10⁻⁸ × 1.35 × 10⁴ = 0.97 × 10⁻⁴ risk of a fatal accident every year, or about 1/10,000. But very few people drive for only a year; it is not uncommon to drive for 60 years. Certainly we drive fewer miles per year when young and when old, and more when middle-aged, but for the purpose of calculation let’s assume the average value for 60 years. Then the risk of driving for one’s adult lifetime, on average, is 0.97 × 10⁻⁴ × 60 = 5.82 × 10⁻³. Stated another way, if we drive for a lifetime, even at the low fatal risk of 2007, the average driver runs a risk of 0.00582 of being killed in a vehicular accident, a little more than a chance of 1/200. So, roughly one out of every 200 drivers will die in his or her lifetime from the activity of driving a vehicle.
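The annual and lifetime compounding above can be checked with a short calculation. This is a sketch under the text’s own simplifying assumptions (13,500 miles per year, a flat 60-year driving lifetime, and annual risks treated as simply additive, which is a reasonable approximation for risks this small):

```python
per_mile = 0.718e-8          # driver fatality risk per mile (from the text)
miles_per_year = 13_500      # FHWA average annual mileage cited in the text
driving_years = 60           # assumed driving lifetime

annual_risk = per_mile * miles_per_year      # ≈ 0.97e-4, about 1 in 10,000 per year
lifetime_risk = annual_risk * driving_years  # ≈ 5.8e-3

print(f"annual: 1 in {1 / annual_risk:,.0f}")     # → annual: 1 in 10,317
print(f"lifetime: 1 in {1 / lifetime_risk:.0f}")  # → lifetime: 1 in 172
```

The exact reciprocal is about 1 in 172, which the text rounds to “a little more than 1/200.”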
Needless to say, this number does not look so small any more. This brings us to the most commonly advanced argument against de facto “risk acceptance” being the measure of “acceptable risk” or “safe enough.” Critics argue that no one can be said to “accept” a risk if they do not know what the risk is. Logically this is true, but it ignores the fact that even if we cannot cite a specific risk parameter, that does not mean we do not have an intuitive grasp of the risk. For example, in the case of motorized vehicle travel, relatively few people can go through the calculation above and derive the number 1/200. But we all have personally known in our lifetime more than one person (not just luminaries such as James Dean, Jayne Mansfield, Princess Grace Kelly, General George Patton, and Princess Diana, and even Barack Obama Sr., father of our current President) who has died in a vehicular accident. If the 1/200 number is true, then since we all know a few hundred people, each of us would expect to know at least a couple who have died in vehicular accidents. Therefore, even though we may not be able to calculate the number, society has an excellent grasp of the risk associated with vehicular travel.
This 1/200 risk of fatal vehicular injury also illustrates the important difference between a “unit of participation risk” and a “lifetime risk.” Because, fortunately, the average lifetime is so long, when a risk to which we are constantly exposed is summed over a lifetime, the resulting fraction can become uncomfortably large. For example, the lifetime risk of developing cancer from mere exposure to background levels of environmental chemicals has been estimated to be between 1/1000 and 1/100.35
35. C.C. Travis & S.T. Hester, Global Chemical Pollution, 25 Envtl. Sci. & Tech. 814–19 (1991).
Indeed, people who study common perceptions of risk have found that people do a fair job of estimating the national death toll from a great many common risks, such as vehicular travel, but typically overestimate risks, such as airplane crashes, that have significant publicity associated with them.36 We all know there is a small, but highly controllable, risk of drowning when we go swimming. Yet most of the U.S. population participates in this activity on some basis.
After the benefits gained from assuming the risk, probably the second most important factor determining the acceptability of a given risk level is “control.” Is the risk under our control or in the hands of fate? We are willing to assume up to 1000 times more risk if we perceive that we are assuming the risk voluntarily and that it is under our control. This is certainly a substantial factor in the acceptance of the risk of motorized vehicular travel. It is also observed very commonly in sports and recreation activities. We perceive that the overwhelming majority of this risk is under our direct control, so we are almost universally willing to accept it for the perceived benefits on a societal basis. On the other hand, if we perceive that the risk is imposed on us involuntarily and is out of our control, such as a nuclear power plant being built in our city, then the amount of risk we are willing to “accept” being imposed on us is dramatically less.
Another important factor in determining if a particular risk is “acceptable” is the cost of reducing or eliminating the risk. This issue is commonly encountered in product-related injury tort litigation, and it is often not a simple one. As mentioned above, there is practically no product that cannot be made safer by reducing the product benefits or increasing the product cost, or both.
Unfortunately, plaintiffs and defendants often muddy the intellectual landscape related to safety in products litigation. Plaintiffs will often assert that the product should be completely risk free, an impossible ideal to achieve, even if the product is being misused. Defendants will often assert that safety is the “highest” priority in their product’s design. However, this cannot be true either. If safety were truly the highest priority in any product’s design, the cost would be uneconomical, because no matter how low the risk already was at any point in the design, it could always be driven lower still at greater cost. Everyone knows that a big car is safer than a small car. And this is demonstrably true. It is particularly true when the big car hits the small car: the death risk to occupants of the small car is commonly 8 to 10 times higher than that to occupants of the big car in such collisions. Big cars also present less risk to their occupants when hitting stationary obstacles such as trees. But big cars cost more than small cars. If safety were the highest priority in vehicle design, we would all have to pay for vehicles with the weight, complexity of design, and handwork found in nameplates such as Mercedes. In the real world we can choose among more than 300 car models. Some of the very smallest and lightest mass-produced models are very inexpensive relative to a Mercedes, but they also do not remotely protect their occupants to the degree a Mercedes does. All cars must provide their occupants a minimum level of protection through compliance with the Federal Motor
36. Baruch Fischhoff et al., Acceptable Risk (1981).
Vehicle Safety Standards (FMVSS). But even with the FMVSS, there is demonstrably more risk associated with driving a small car.
How much risk is “acceptable” is further complicated by the fact that risk cannot be spread uniformly in society. The fatal risk of motorized vehicle travel is borne by those relatively few who die, and by the rest of us only through taxes and insurance premiums. Unfortunately, some purchasers are willing to assume the additional risk of a small car in the showroom for the very substantial cost savings, but change their minds after a collision demonstrates the complete cost of the tradeoff. Our economic system permits purchasers to trade cost for safety in innumerable other products, from helmets, to tools, to furniture and houses. In reality, consumers and manufacturers must engage in consideration of cost versus safety virtually every day, because there are few products for which a safer and more expensive model is not available, and no products exist that cannot be made safer by being made less convenient and/or more expensive. Denying or obfuscating this process does not advance safety, science, engineering, or justice.
In light of all the preceding considerations, we last examine the question of whether there is any absolute level of risk low enough that it is almost always regarded as “acceptable” and therefore “safe.” Unfortunately, there is no single such level; rather, there are a multitude from a myriad of sources. In the United States, Chauncey Starr in 1969 quantified the risk of disease to the general population as one fatality per million hours of exposure, and after studying risk acceptance and participation of society in many activities concluded that “the statistical risk set by disease appears to be a psychological yardstick for establishing the level of acceptability of other risks.”37 Starr observed that the de facto level of risk people accept, not necessarily that which they would say is “acceptable,” was about a one in a million chance of fatality per hour (or other unit) of exposure. If an activity presents this level of fatal risk, and a person wants the benefit of that activity, he or she will almost always accept this level of risk for the perceived benefit. As a consequence of this initial observation, “one in a million risk” calculations are now commonplace in the risk literature.38
As the risk level rises above this threshold, a decreasing fraction of the population will find the risk worth the benefits. This is why very high risk sports, such as skydiving, have many fewer participants. Let us return one more time to our driver fatality risk of 0.718 × 10⁻⁸ for each mile. This can be conveniently converted into a risk per hour by recognizing that the average driving speed in the United States is about 30 miles per hour.39 That means in an hour, the fatal risk to the average driver is 30 × 0.718 × 10⁻⁸ = 0.215 × 10⁻⁶, or about 0.2 per million hours or 2 in 10 million hours. It is perhaps more appropriate to return
37. C. Starr, An Overview of the Problems of Public Safety, in Proceedings of Symposium on Public Safety 18 (1969).
38. R. Wilson & C. Crouch, Risk-Benefit Analysis 208–09 (2d ed. 2001).
39. See, for example, government calculations at http://www.epa.gov/OMS/models/ap42/apdx-g.pdf or http://nhts.ornl.gov/briefs/Is%20Congestion%20Slowing%20us%20Down.pdf.
to the 1.36 × 10⁻⁸ per mile traveled for the overall risk to society of motorized vehicular travel, not just the driver risk, to compute the risk level that society de facto accepts for the benefits of motorized vehicular transport. Then the risk per hour becomes 30 × 1.36 × 10⁻⁸ = 0.408 × 10⁻⁶, or a little less than half a fatality per million hours of exposure. This is well below the “one in a million” threshold, and thus 98%+ of society will participate in this activity. As a final sanity check on our work, let’s return to the 37.86 risk of fatal injury per 10⁸ VMT for motorcycles. This translates into a risk per hour of 30 × 37.86 × 10⁻⁸ = 1.136 × 10⁻⁵, or more than 11 fatal injuries per million exposure hours. This is above the one-in-a-million threshold, and, understandably, motorcycle riding is regarded as an unacceptable risk by a large fraction of the population.
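All three per-hour conversions follow one pattern, which can be sketched as follows (the 30 mph average speed is the figure cited in the text; the function name is ours):

```python
AVG_SPEED_MPH = 30  # average U.S. driving speed assumed in the text

def fatalities_per_million_hours(risk_per_1e8_vmt: float) -> float:
    """Convert a fatality risk per 10^8 vehicle-miles into fatalities per million hours."""
    per_mile = risk_per_1e8_vmt * 1e-8
    return per_mile * AVG_SPEED_MPH * 1e6

for label, risk in [("driver", 0.718), ("all road users", 1.36), ("motorcyclist", 37.86)]:
    print(f"{label}: {fatalities_per_million_hours(risk):.2f} per million hours")
# → driver: 0.22, all road users: 0.41, motorcyclist: 11.36
```

Only the motorcyclist figure exceeds Starr’s one-fatality-per-million-hours yardstick, which is consistent with the text’s observation that motorcycling alone is widely regarded as an unacceptable risk.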
This threshold of “one in a million” as the “acceptable” risk level has many variants. In the United Kingdom, for example, the Health and Safety Executive40 adopted the following levels of risk, in terms of the probability of an individual dying in any one year:
- 1 in 1000 as the “‘just about tolerable risk’” for any substantial category of workers for any large part of a working life;
- 1 in 10,000 as the “‘maximum tolerable risk’” for members of the public from any single nonnuclear plant;
- 1 in 100,000 as the “‘maximum tolerable risk’” for members of the public from any new nuclear power station;
- 1 in 1,000,000 as the level of “‘acceptable risk’” at which no further improvements in safety need to be made.
There are essentially innumerable regulations promulgated by different agencies within states and the federal government that are beyond the scope of this guide, but which mandate expenditures to maintain certain maximum risk levels either implicitly or explicitly. These regulations cover everything from food additives to acceptable levels of remediation at toxic Superfund sites. Regrettably, there is little or no coordination among regulating agencies, and no standardized procedures for addressing risk within the federal government, or within the states. As a result, the amount spent to “save a life,” which should be termed “forestall a fatality” (because everyone eventually dies) varies by six orders of magnitude. Table 2 lists a number of regulations, the year that they were mandated, their issuing agency, and the cost they effectively mandate be expended “per life saved.” Needless to say, these are estimates, and the data are somewhat dated, but the relative costs will be approximately the same. Executive orders from recent Presidents starting with Reagan have attempted to introduce “cost-effectiveness” in one form or another into the regulatory process, but with little observable effect at this writing.
40. Water Quality: Guidelines, Standards and Health 208–09 (L. Fewtrell & J. Bartram eds., 2001).
Table 2. Relative Cost of Selected Regulations as a Function of Lives Saved
|Regulation||Year||Agency||Cost per Life Saved (Millions of Dollars in 1990)|
|Unvented space heater ban||1980||CPSC||0.1|
|Aircraft cabin fire protection||1985||FAA||0.1|
|Aircraft seat cushion flammability||1984||FAA||0.5|
|Trenching and excavation standards||1989||OSHA||1.8|
|Rear lap/shoulder belts for cars||1989||NHTSA||3.8|
|Asbestos occupational exposure limit||1972||OSHA||9.9|
|Ethylene oxide occupational exposure limit||1984||EPA||24.4|
|Acrylonitrile occupational exposure limit||1978||OSHA||61.3|
Note: CPSC = Consumer Product Safety Commission; EPA = Environmental Protection Agency; FAA = Federal Aviation Administration; NHTSA = National Highway Traffic Safety Administration; OSHA = Occupational Safety and Health Administration.
Source: W. Kip Viscusi & Ted Gayer, Safety at Any Price? Regulation 54, 58 (Fall 2002).
Finally, although we acknowledge that this section on safety is quite extensive, we also believe it is extremely important for the court to recognize how engineers think about safety. Engineers are dedicated to making safe products. At the same time, they recognize that every increment in safety has an expense associated with it. Just as there is no product or environment that is risk-free, there is no bright-line threshold that universally divides safe and unsafe products; safety is not binary. For each properly designed product, there is a unique set of constraints (including cost), and a safe-enough level exists that balances constraints with acceptable risk.
To illustrate ways in which flawed design processes lead to adverse outcomes, a number of examples are selected covering a range of incidents that occurred during the past century. In each instance, the link in the design process that was either missing or corrupted is highlighted and discussed. The reader may wish to refer to Figure 1 when considering these examples.
Insertion of objects into a woman’s uterus has long been a means of contraception. In the twentieth century, IUDs were designed, manufactured, and mass marketed
around the world. Many of them were associated with adverse health consequences, in particular pelvic inflammatory disease, which led to long-term disabilities and even death in substantial numbers of women. An example was the Dalkon Shield, marketed and sold by A.H. Robins. The health problems of wearers of this device put its manufacturer into bankruptcy and led Congress to pass legislation enhancing medical device regulation generally, including for most IUDs. Thus, those authorities corrected the flawed design process employed by A.H. Robins, which had led the company to conclude that the product could be marketed initially, and even kept on the market, in the face of reports of serious health problems and death.
The Copper-7 IUD, marketed and sold by G.D. Searle, represented a somewhat different situation. That device received FDA approval as a drug. After it reached the market, Searle received reports of health problems. In litigation brought by women who used the product, some courts concluded that the risk associated with its use was “unacceptable.”41
With all IUDs, the inserted device has a “string” attached to it that passes from the uterus through the cervix and into the vagina. The “string” is used for the purposes of removal as well as to provide certainty to the woman that the IUD remains in place and has not been expelled. But, to provide these functions, it compromises the cervix, a biological firewall that ensures sterility of the uterus. Therefore, in choosing the string material and fabrication method, designers had to make choices that, properly made, would reduce, if not eliminate, the potential for bacteria to migrate from the vagina into the uterus. With both the Dalkon Shield and the Copper-7, the designers set aside this consideration and traded it for the ability to enhance manufacturability and appearance, using strings that resulted in the unacceptable transmission of infectious agents into the uterus. These design choices were made to reduce expense and gain a competitive marketing edge, not to enhance consumer safety, and therefore led to unacceptable risk. They turned out to be lethal choices, two more examples of failures to adhere to the well-established and time-honored design process.
For 17 years, over 35 million gallons of industrial waste were deposited in pits dug into the ground in what had presumably been certified as a granite-lined, impermeable geological formation that would not leak. These were the Stringfellow Acid Pits, located near the Riverside suburb of Glen Avon, California, some 50 miles east of Los Angeles. History proved otherwise: millions of gallons of toxic materials escaped containment, contaminated groundwater supplies, and exposed local inhabitants to chemical vapors.42
41. See Robinson v. G.D. Searle & Co., 286 F. Supp. 2d 1216 (N.D. Cal. 2003); Kociemba v. G.D. Searle & Co., 683 F. Supp. 1577 (D. Minn. 1988).
42. See State v. Underwriters at Lloyd’s London, 54 Cal. Rptr. 3d 343 (Cal. Ct. App. 2006), pet. for review granted, 156 P.3d 1014 (Cal. 2007) for general overview and United States v. Stringfellow, No.
In this instance, the design process was flawed from the very beginning (i.e., the incomplete and incorrect geological analysis) and led to an “engineered” site for the containment of toxic wastes, which had no chance of performing properly. This is an excellent example showing that once the design process is corrupted, everything that follows in the design cascade, although perhaps done correctly, will most likely not lead to a successful design outcome.
In low-humidity locales, it is possible to “air condition” a structure using evaporative cooling of water. This is done in devices known as “swamp coolers.” They sit either beside or, in most instances, on the roofs of the structures being cooled. The operation is simple. They consist of an enclosure, or box, in which a small pump saturates porous panels through which air is drawn, thereby evaporating the water and cooling the air that is directed into the interior spaces. The pumps are electrically powered and are known to short-circuit and fail, thus becoming a potential source of ignition and fire. A simple design solution is to make the box nonflammable. This, of course, is the case when the box is metal, but then one has to be concerned with corrosion and subsequent maintenance. To obviate the corrosion issue, the box can be made of plastic. Plastic does not corrode, but it is potentially flammable unless flame retardants are added as part of the materials formulation. Foreseeing this occurrence and making the conscious choice not to add flame retardants is an abdication of the design process, and with that can come tragic consequences.
This scenario was played out in Vanasen v. Tradewinds,43 where a 5-year-old girl was killed as the result of a foreseeable pump failure, subsequent electrical short circuit, and ignition of a non-fire-retardant plastic swamp cooler attached to the roof of her home. Again, failure to adhere to the straightforward tenets of the design process (i.e., designing out the known tendency of many plastics to burn) is tantamount to “rolling the dice” and hoping for the best. Experience teaches us time and again that taking design “shortcuts” seldom translates into an acceptable design outcome.
Radiant heating has been in use since Roman times, and a common variant of this heating method involves placement of tubes that circulate heated fluids beneath floors, thus warming the floors, which in turn heat the surrounding structure. Although metallic tubes were once common in this application, their cumbersome
CV 83-2501 JMI, 1993 WL 565393 (C.D. Cal. Nov. 30, 1993) for discussion of specific findings of fact by the special master; P. Kemezis, Stringfellow Cleanup Settlement: Companies Agree to Pay $150 Million, Chemical Week, Aug. 12, 1992, at 11; http://www.dtsc.ca.gov/PressRoom/upload/t-01-99.pdf.
43. Tulare County, CA Sup. Ct., No. 93-161828.
installation and susceptibility to corrosion led to the development of plastic tubes. One manufacturer recognized that rubber hose would be even easier to install than the somewhat rigid plastic conduits, and engaged a major rubber company to design a hose for the radiant heating market. The rubber company supplied a hose formulation that was designed for and used in automotive cooling applications, which made some sense given that similar fluids at similar temperatures are circulated in both cases. The rubber company failed to test the newly developed hose under end-use conditions, and thereby neglected to detect a failure mode caused by hose hardening and embrittlement. Engineering experts for the plaintiffs conducted a simple end-use test that verified that the hose would degrade under foreseeable conditions, thus completing the step in the design process that was not performed by the rubber company.44
On July 17, 1981, during a tea dance in the vast atrium at the Hyatt Regency Hotel in Kansas City, two elevated walkways collapsed onto the people celebrating in the lobby, killing 114 of them and injuring more than 200.
The determination of what happened focused on the design and construction of the walkways. The 40-story complex featured a unique main lobby design consisting of a 117-foot by 145-foot atrium that rose to a height of 50 feet. Three walkways spanned the atrium at the second, third, and fourth floors. The second-floor walkway was directly below the fourth, and the third was offset to the side of the other two walkways. The third- and fourth-floor walkways were suspended directly from the atrium roof trusses, while the second-floor walkway was suspended from the fourth-floor walkway. During construction, the design, fabrication, and installation of the walkway hanger system were changed from that originally intended by the design engineer. Instead of one hanger rod connecting the second- and fourth-floor walkways to the roof trusses, two rods were used—one to connect the second- to the fourth-floor walkway, and another to connect the fourth-floor walkway to the roof, thus doubling the stresses in the ill-conceived connection.
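The effect of the rod change can be seen with elementary statics. The sketch below is illustrative only, with normalized loads rather than the actual walkway weights: with the original continuous rod, the fourth-floor box-beam connection carries only its own walkway’s load, because the second-floor load passes through the rod itself; with the as-built pair of offset rods, that same connection must also support the second-floor walkway hanging beneath it.

```python
def connection_load(design, walkway_load):
    """Load carried by the fourth-floor box-beam connection,
    expressed in units of one walkway's weight (illustrative
    statics only; not the actual Hyatt load values)."""
    if design == "original":
        # Continuous rod: the second-floor walkway's load is carried
        # by the rod, not by the fourth-floor connection.
        return walkway_load
    elif design == "as_built":
        # Two offset rods: the fourth-floor connection now supports
        # its own walkway PLUS the second-floor walkway below it.
        return 2 * walkway_load
    raise ValueError(design)

P = 1.0  # one walkway's load, normalized
print(connection_load("original", P))  # 1.0
print(connection_load("as_built", P))  # 2.0 -- the load doubled
```

This is why the text can say the change "doubled" the stresses: nothing about the walkways got heavier; the as-built detail simply routed twice the load through one already marginal connection.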
Just prior to the collapse, about 2,000 people had gathered in the atrium to participate in and watch a dance contest, including dozens who filled the walkways. At 7 p.m., the walkways on the second, third, and fourth floors were packed with visitors as they looked down to the lobby, also full of people. It was the second- and fourth-floor walkways—the ones that experienced the design changes—that collapsed. Clearly then, in the iterative cycle of the design process, modifications to the original design need to be validated, and failure to do so can
44. http://www.entraniisettlement.com/PDFs/PreliminaryApprovalAmended.pdf; J. Moalli et al., Failure Analysis of Nitrile Radiant Tubing, ANTEC 2006 Plastics: Annual Technical Conference, Society of Plastics Engineers, May 7–11, 2006, Charlotte, NC (2006).
have severe consequences. Further details of this event can be found in the second edition of this manual.45
Spanning a strait, the third longest suspension bridge of its time, the Tacoma Narrows Bridge opened on July 1, 1940. In November of that same year, it collapsed into Puget Sound. During the design process, engineers failed to adequately account for the effects of aerodynamic flutter on the structure, a phenomenon in which forces exerted by winds couple with the natural mode of vibration of the structure to establish rapid and growing oscillations. In essence, the bridge self-destructed.
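The flutter mechanism can be illustrated, in a highly simplified way, as an oscillator whose effective damping becomes negative: instead of dissipating energy on each swing, the wind feeds energy in, so oscillations grow rather than decay. The sketch below is not a model of the actual bridge; the frequency, damping values, and time step are arbitrary choices for illustration.

```python
import math

def oscillator_amplitude(damping_ratio, steps=20000, dt=0.001):
    """Integrate x'' + 2*zeta*omega*x' + omega**2 * x = 0 and return
    the peak |x| reached.  A negative damping ratio models energy
    being fed INTO the motion, as in aerodynamic flutter."""
    omega = 2 * math.pi  # 1 Hz natural frequency (illustrative)
    x, v = 1.0, 0.0      # small initial displacement, at rest
    peak = abs(x)
    for _ in range(steps):
        a = -2 * damping_ratio * omega * v - omega**2 * x
        v += a * dt      # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Ordinary (positive) damping: oscillations decay and the peak
# never exceeds the initial displacement.  Negative damping:
# each cycle is larger than the last, without bound.
print(oscillator_amplitude(+0.05))
print(oscillator_amplitude(-0.05))
```

The real aeroelastic problem couples torsional and bending modes of the deck with unsteady wind forces, but the essential point survives the simplification: once the net damping goes negative, the structure amplifies its own motion, which is what "the bridge self-destructed" means.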
It is fair to say, however, that aerodynamic flutter was not well understood at the time this bridge was constructed. Indeed, the term was not coined until the late 1940s, years after the bridge collapsed. The root cause of this unfortunate circumstance was a desire to build a bridge with enhanced visual elegance (i.e., long and narrow) and to use an untested girder system that offered significant cost savings. This should have led to a thorough testing and validation program to ensure that venturing into uncharted waters in bridge design would not result in unintended or unanticipated consequences. Indeed, after the bridge was constructed and put into use on July 1, 1940, it gained a reputation for its unusual oscillations and was known as “Galloping Gertie.” It was only then that engineers built a scale model of the bridge and began testing its behavior in a wind tunnel. Those studies were completed and remedies proposed in November 1940, just days before the bridge fell into the Tacoma Narrows channel.
A substantial departure from the norm of appropriate testing and validation is an unacceptable application of the design process, and the collapse of this bridge is an all too sobering reminder of this. Stated another way, end-use testing should not be done by the “consumer,” and where this occurs, a clear violation of the design process tenets has taken place.46
Automotive lifts are often used in dealerships and service stations to raise vehicles and provide access to components on the bottom of the vehicle for service. To reduce the propensity for injury, ANSI and the Automotive Lift Institute (ALI) promulgate standards that specify, among other things, the minimum resistance of the horizontal swing-arm restraints. The lift in question had a label on the lift support structure that indicated it was in compliance with these specifications, so
45. Henry Petroski, Reference Guide on Engineering Practice and Methods, in Reference Manual on Scientific Evidence 577, 601–02 (2d ed. 2000).
46. This example is also further discussed in the second edition of this manual.
when a Jeep Wrangler fell from the lift and injured the owner of a service station, verification of conformity to the standards was assessed by the plaintiff.
Testing by the plaintiff’s expert revealed that the swing-arm lift restraints resisted only 30% of the minimum force specified in the standard, and that simple reconfiguration of the restraint components could create a conforming lift. Furthermore, the plaintiff’s expert calculated that for the vehicle-lift configuration in question, the amount of force required to provide positive restraint was less than that required by the standards, and therefore the accident would have been prevented had the standards been met. Finally, the plaintiff’s expert opined that the label on the lift claiming compliance with the standard would tend to convey to the end user of the product that the swing-arm restraint added a layer of insurance for the operator in the event of imperfect placement of the vehicle over the lifting pads.
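The expert’s conformity assessment reduces to two simple comparisons, sketched below with hypothetical force values (the standard’s actual figures are not reproduced here): does the restraint meet the standard’s minimum, and does that minimum exceed what this particular vehicle needed?

```python
def complies(measured_resistance, required_minimum):
    """True if a swing-arm restraint meets the standard's minimum
    resistance.  Units and values in this sketch are hypothetical."""
    return measured_resistance >= required_minimum

REQUIRED = 100.0            # hypothetical minimum from the standard
measured = 0.30 * REQUIRED  # the tested lift resisted only 30% of it
needed_for_vehicle = 80.0   # hypothetical force this vehicle required

print(complies(measured, REQUIRED))   # False: nonconforming as tested
print(complies(REQUIRED, REQUIRED))   # True: conforming after reconfiguration
# Because the standard's minimum exceeds what the vehicle needed,
# a conforming lift would have restrained it:
print(REQUIRED >= needed_for_vehicle)  # True
```

The logic, not the arithmetic, is what mattered at trial: the shortfall against the standard and the sufficiency of the standard together support the opinion that compliance would have prevented the accident.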
In response, the lift manufacturer claimed that the intended swing-arm restraining forces arose from the friction created when the lifting pad contacted the vehicle undercarriage, and further argued that the swing-arm restraint was nothing but “fluffery” forced upon lift manufacturers to remain competitive in the marketplace. The jury found for the plaintiff, implicitly recognizing the tenet of the design process that calls for testing and validation of design claims and features.
After 2 years of construction, the St. Francis Dam in southern California was completed in 1926, and the reservoir behind it began to fill. As the reservoir neared capacity behind the 195-foot-high concrete arch dam, the eastern abutment gave way shortly before midnight on March 12, 1928, unleashing a wall of water over 100 feet high that eventually dissipated into the Pacific Ocean some 50 miles downstream. The flood killed at least 600 people, and most likely more. The collapse of the St. Francis Dam is one of the worst American civil engineering failures of the twentieth century.47
The dam was designed and certified by a single individual, William Mulholland, chief engineer and general manager of the Los Angeles Department of Water & Power (at the time known as the Bureau of Water Works & Supply). Mulholland had no formal engineering education; he was self-taught. Although the ultimate physical cause of the failure was the proximity of a paleomegalandslide to the eastern dam abutment, a geological anomaly whose detectability with 1920s methods geologists still debate, the inquest that followed the disaster placed responsibility for the tragedy on improper engineering, design, and governmental inspection.
47. St. Francis Dam Disaster Revisited (Doyce B. Nunis. Jr., ed., 2002).
Indeed, we now know that the design of this structure failed to meet accepted design principles already in place in the 1920s. The dam height was increased by 10 feet at the start of construction, and by another 10 feet midway through construction, bringing the final capacity to 38,000 acre-feet. No modifications were made to the base to accommodate this additional capacity, and there were a number of weaknesses in the design of the base. It is estimated that the factor of safety, which was meant to be above 4 in the initial design, may have been as low as 0.77 for the dam as actually constructed.
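The erosion of the safety margin can be sketched with a deliberately crude overturning calculation. Hydrostatic thrust on a dam face grows with the square of water depth and acts about one-third of the way up, so the overturning moment scales roughly with the cube of height; if the base is sized for a factor of safety of 4 at the original height and never modified, each 10-foot raise eats into that margin. The numbers below are hypothetical, and the actual failure also involved abutment geology, uplift, and the other defects discussed here.

```python
def overturning_moment(height_ft):
    """Simplified scaling: thrust ~ h**2 acting at h/3 above the
    base, so the overturning moment scales as h**3 (illustrative)."""
    return height_ft ** 3

def factor_of_safety(resisting_moment, overturning):
    """Capacity over demand; below 1.0, failure is expected."""
    return resisting_moment / overturning

# Base sized for a factor of safety of 4 at a nominal original
# height of 175 ft (hypothetical), then never modified:
base_resistance = 4.0 * overturning_moment(175)

for h in (175, 185, 195):  # original height plus the two 10-ft raises
    print(h, round(factor_of_safety(base_resistance, overturning_moment(h)), 2))
```

Even this toy model shows the margin shrinking by more than a quarter from the two raises alone; combined with the base weaknesses the text describes, the estimated as-built factor fell below 1, i.e., below the threshold at which failure is expected.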
Geoforensics expert J. David Rogers enumerated many other design deficiencies associated with the St. Francis Dam, among them the failure to incorporate hydraulic uplift theory into the dam’s design; lack of uplift relief wells on the sloping abutment sections of the dam; failure to batter the upstream face of the dam to reduce tensile forces via cantilever action; failure to analyze arch stresses of the main dam; failure to remove high-water-content cement paste (laitance layer) between concrete lifts; failure to account for the mass concrete heat of hydration; failure to recognize the tendency of the Vasquez formation to slake upon submersion; failure to provide the dam with grouted contraction joints; failure to recognize that the dam concrete would eventually become saturated; and failure to wash concrete aggregate before incorporation in the dam’s concrete.48
In this instance there simply was no credible design process from concept, through design, execution, and postconstruction surveillance. As a result, a massive failure ensued.49
On January 28, 1986, the space shuttle Challenger and its accompanying liquid hydrogen and oxygen external tank (ET) disintegrated over the Atlantic Ocean after only about 70 seconds of flight. The two attached solid rocket boosters (SRBs) separated from the shuttle and ET and were remotely destructed by the range safety officer. All seven crewmembers were killed.
We now know the physical reason for this catastrophe. Two rubber O-rings placed at the aft joint where two sections of the right SRB came together had failed to “extrude” themselves as the SRB metal shell deformed during the early moments of ignition. Because of this, hot gases escaped through the breach created by the ineffective seal at the O-ring joint and led to the separation of the aft strut that attached the right SRB to the ET. This was followed by failure of the
48. J. D. Rogers, The St. Francis Dam Disaster Revisited, 77 Southern California Q. (1-2) (2003); J. D. Rogers, The St. Francis Dam Disaster Revisited, 40 Ventura County Q. (3-4) (2003).
49. Donald C. Jackson & Norris Hundley, Privilege and Responsibility: William Mulholland and the St. Francis Dam Disaster, California History (Fall 2004).
aft dome of the liquid hydrogen tank. The massively uneven thrust created by the escaping hydrogen gas altered the trajectory of the shuttle, and aerodynamic forces destroyed it. The failure of the O-rings to alter their conformations with SRB shell deformation was attributed to the low ambient temperature at the time of launch. The O-rings had “hardened” and as a result lost their required flexibility.
Two investigations into the circumstances surrounding this disaster took place. Reports and findings were issued by the Presidential Rogers Commission50 and the U.S. House Committee on Science and Technology.51 While both reports agreed on the technical causes of the catastrophe (failure of the O-rings to perform as intended), their conclusions as to the root cause were stated somewhat differently but in the end pointed to the same basic issue. The Rogers Commission concluded that the National Aeronautics and Space Administration (NASA) and the SRB manufacturer, Morton Thiokol, failed to respond adequately to a known design flaw in the O-ring system and communicated poorly in reaching the decision to launch the shuttle under extremely low ambient temperature conditions. The House Committee concluded that there was a history of poor decisionmaking over a period of several years by NASA and Morton Thiokol in that they failed to act decisively to solve the increasingly serious anomalies in the SRB joints.
Another way of stating what both reports essentially say is that the design process resulting in the double O-ring (now a triple O-ring system) was flawed. Moreover, NASA managers knew of this problem as early as 1977. Warnings by engineers not to launch that cold morning were disregarded. Each SRB consisted of six pieces, three welded together in the factory and the remaining three fastened together at the launch facility in Florida using the double O-ring seal system. Thiokol engineers lacked sufficient data to guarantee seal performance of the O-rings below 53 degrees Fahrenheit (°F). Temperature at launch hovered at 31°F. When originally designed, the O-rings were intended to remain in circumferential grooves. After several shuttle launches, it became evident that the SRB shell was deforming and that hot gases could escape but that the O-rings were “extruding” to seal these temporary breaches. As a result, the design specifications were changed to accommodate this process. The design itself, however, remained unchanged.
If one considers that the original design concept was to ensure a seal between the SRB field-joined sections using two O-rings, the question on the table is whether the actual design and subsequent execution were consistent with the design process. Clearly this was not the case. First, the system performed differently than expected (i.e., extrusion occurred). Validation and testing to ensure that
50. Rogers Commission, Report of the Presidential Commission on the Space Shuttle Challenger Accident (1986).
51. Committee on Science and Technology, Investigation of the Challenger Accident, H.R. Rep. No. 99-1016, (Oct. 29, 1986).
this aberrant behavior of the original seal system actually was acceptable never was done other than to monitor shuttle launches and hope for the best. Second, the O-rings were known to have insufficient resiliency at temperatures substantially higher than those encountered on the day of the Challenger launch; therefore, launching at such a low ambient temperature equated to misuse of the system. The unfortunate truth of all this is that an unsound design process most certainly will produce a flawed product.
On July 25, 2000, Air France flight 4590, a Concorde supersonic passenger jet, departed Charles de Gaulle Airport and crashed into a nearby hotel, killing 100 passengers, 9 crew, and 4 others on the ground. The physical cause was readily determined. The Concorde was designed to take off without flaps or leading-edge slats as a weight-saving measure. Because of this, it required a very high takeoff roll speed to become airborne, which placed unusually high stresses on the tires. A piece of titanium metal approximately 1 × 16 inches was lying on the departure runway. It had fallen from a thrust reverser assembly on a Continental Airlines DC-10 that had departed minutes earlier. During its takeoff roll, the Concorde struck the metal debris, which punctured and subsequently shredded one of its tires. The tire remnants broke an electrical cable and created a shock wave that fractured a fuel tank. The fuel ignited and an engine caught fire. The plane had reached a ground speed at which the pilot judged it more prudent to continue the takeoff than to abort. The crew shut down the burning engine. Unable to retract the landing gear, and now experiencing problems with the remaining engines, the crew was unable to climb; the aircraft rolled substantially to the left and contacted the ground.52
In this instance, a design decision was made to save weight by not having retractable flaps and slats. This led to higher than normal takeoff and landing speeds, which in turn placed additional demands on the tires: they would rotate at higher speeds and carry much greater kinetic energy. When one or more failed, the rubber shrapnel would be released with correspondingly greater force. This created a greater risk of puncture of the aircraft structure and therefore required special consideration to ensure that the aircraft skin could maintain integrity in the foreseeable event of a tire rupture. Making the skin more resistant to puncture implied additional weight, which would work against the primary reason for not having the slats and flaps. And there we have the design conundrum.
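The quadratic dependence of tire energy on speed is simply KE = ½mv². The mass and speeds below are hypothetical, chosen only to show the scaling rather than the Concorde’s actual figures:

```python
def kinetic_energy(mass_kg, speed_m_s):
    """Translational kinetic energy: KE = 0.5 * m * v**2."""
    return 0.5 * mass_kg * speed_m_s ** 2

tire_mass = 100.0       # hypothetical tire/wheel mass, kg
v_conventional = 75.0   # hypothetical conventional-jet takeoff speed, m/s
v_concorde = 100.0      # hypothetical Concorde takeoff speed, m/s

ratio = kinetic_energy(tire_mass, v_concorde) / kinetic_energy(tire_mass, v_conventional)
print(round(ratio, 2))  # a ~33% higher speed yields ~78% more energy
```

Because energy grows with the square of speed, a modest increase in takeoff roll speed produces a disproportionate increase in the energy a shredding tire can deliver to the airframe, which is why tire failure demanded the special consideration discussed above.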
Having made what was initially regarded as a reasonable compromise in the aircraft design, the manufacturer subsequently gained experience with the Concorde, learning that tire failures could be potentially catastrophic (the type
of experience illustrated by the “Performance” arrow from the “Go” stage to the “Design/Formulate” and “Test/Validate” stages in Figure 1). Between July 1979 and February 1981, there were four documented tire ruptures on takeoff. In two of these instances, substantial damage was done to the aircraft structure, but the planes were able to land without incident. Despite having these critical data related to the initial design assumptions and associated compromises in hand, no remedial changes were made to either the tire or aircraft design. After the 2000 crash, design changes were made to the electrical cables, the fuel tanks were lined with Kevlar, and specially designed burst-resistant tires were put into use. The Concorde fleet was retired from service in 2003, with declining passenger revenues cited as the major cause.
In the case of the Concorde, the record appears to indicate that designers chose not to alter the design, even in the face of significant data, until a fatal accident occurred. Although these actions may be consistent with the above discussion on risk, and how it is perceived, the crash is illustrative of how the fundamentally simple design process works, and that departures from it can have serious consequences.
Having earned a bachelor’s degree in an engineering curriculum is generally sufficient to enter the professional workplace and begin to immediately solve a wide variety of problems. It is less so the case for students who graduate with degrees in the basic sciences such as physics, chemistry, or biology or in mathematics. Typically, but not always, these basic science students will go on to earn graduate degrees.
It is also the case that some students who have earned an engineering degree will continue to the master’s or even doctorate level of study. In 2004, U.S. colleges and universities awarded approximately 75,000 bachelor’s degrees, 36,000 master’s degrees, and 6,000 doctoral degrees in all areas of engineering.53
One can think of the educational process as providing engineering students with a toolkit from which they select “tools” to enable them to either individually or in teams participate in scientific and technological innovation. Because these students are educated, as opposed to having been trained, one can never be quite sure how they will choose to use their tools, or add to the kit, or delete from the kit. Although carpenters share a common toolkit, we know the structures they build can be appreciably different in size, shape, and scope. So it is with engineers.
53. Report 1004D: Total Numbers of Bachelor’s, Master’s and Doctoral Degrees Awarded per Million Population Since AY1945-46—Including Data for Degrees Awarded to US Citizens Since AY1970-71, Engineering Trends Quarterly Newsletter, Oct. 2004, at 1.
The notion that scientists and engineers can be one and the same is epitomized by Renaissance humanism, almost five centuries past. Leonardo da Vinci, with a minimalist toolkit by today’s standards, lived a life equally as an engineer and a scientist, and indeed an artist. Four centuries later, Buckminster Fuller seamlessly combined elements of geometry (aka “science”), structures (aka “engineering”), and architecture (aka “art”) to conceive and develop an entirely new approach to architectural design. Architects Norman Foster and Frank Gehry seized on recent advances in computer science and engineering to provide innovative platforms for architectural design that paved the way for radical changes in structural and visual renderings. Striking examples include the Guggenheim Museum in Bilbao, the Walt Disney Concert Hall in Los Angeles, the Experience Music Project in Seattle, City Hall in London, the Beijing Airport, and the Reichstag in Berlin.
Searching for ways to create or define the “bright line” that classifies da Vinci, Fuller, Foster, or Gehry as engineers, scientists, architects, or artists is as empty an exercise today as it would have been five centuries ago. This, of course, does not preclude one from considering himself or herself as an “engineer” or a “scientist”; however, the subtler point is that one can also be both or either at different points in time or at the same time. This can be overlooked or ignored in the quest for limiting or excluding expert testimony.
Without knowing how an engineer or scientist will use his or her toolkit and to what extent it will be replenished or modified as time goes on, it is not possible to begin to even second-guess what any particular individual may do to shape his or her career as time passes. There is a great deal of truth to the notion of “learning on the job.” Indeed, as one’s career unfolds, the number of opportunities expands and with that comes additional skills and an ever-increasing ability to make wise and informed choices and decisions. Being an engineer affords one the opportunity to continually remodel oneself as new and unexpected problems and challenges become evident.
And so it is with the passage of time that the “title” of one’s degree becomes an increasingly murky description of who one is and what one does. This is why, when evaluating whether an “engineer” is testifying within his or her realm of expertise, it is so critical that titles not overshadow the actual content of a degree (i.e., the name may not reflect the knowledge attributes accurately) and the experience base at hand. Although it is an all too common tactic to attempt to confine an expert witness’s testimony to the asserted domain of his or her named academic credentials, it is one that may lead to less-informed testimony than otherwise would be the case. This is a high price to pay when the desired outcome is finding the right path to both truth and justice.
Licenses are required for engineering professionals in all 50 states and the District of Columbia if their services are offered directly to the public and would affect public health and safety. Licensed engineers are called professional engineers (PEs). In general, to become a PE, a person must have a degree from an ABET-accredited54 engineering college or university, have a specified period of practical and pertinent work experience, and pass two examinations. The first examination—Fundamentals of Engineering (FE)55—can be taken after 3 years of university-level education, or in some cases can be waived on the basis of pertinent experience. The FE examination is a measure of minimum competency to enter the profession. Many colleges and universities encourage students to take the FE exam as an outcome assessment tool following completion of their coursework. Students who pass this examination are called engineering interns (EIs) or engineers in training (EITs) and, after some work experience, take the second examination, the Principles and Practice of Engineering examination. The hallmark that distinguishes a licensed/registered PE is the authority to sign and seal or “stamp” documents (reports, drawings, and calculations) for a study, estimate, design, or analysis, thus taking legal responsibility for it.
Many engineering professionals do not seek a PE license because their services are not offered directly to the public or they have no need to sign, seal, or “stamp” engineering documents. Whether an individual is licensed as a PE is neither sufficient nor necessary to establish his or her competency as an engineer. Furthermore, the two examinations test only for knowledge gained and assimilated at the undergraduate level. It is therefore common for professors of engineering in colleges and universities not to have PE licensure—indeed, they are the ones who teach and prepare those who do take these examinations. Despite this, a common litigation practice is to attempt to preclude “engineering” testimony offered by professionals who have had no need to obtain PE licensure as if this was intended to be some sort of requirement for practicing in the profession or for testifying in court. Such an approach is unwarranted and inconsistent with the way in which engineers behave and think about the work they do.
54. Founded in 1932 as the Engineer’s Council for Professional Development (ECPD), it was later renamed ABET (Accreditation Board for Engineering and Technology). In the United States, accreditation is a nongovernmental, peer-review process that ensures the quality of the postsecondary education that students receive. Educational institutions or programs volunteer to undergo this review periodically to determine if certain criteria are being met. ABET accreditation is assurance that a college or university program meets the quality standards established by the profession for which it prepares its students. The quality standards that programs must meet to be ABET-accredited are set by the ABET professions themselves. This is made possible by the collaborative efforts of many different professional and technical societies. These societies and their members work together through ABET to develop the standards, and they provide the professionals who evaluate the programs to make sure that they meet those standards.
55. In the past, this examination was known as the Engineer in Training (EIT) exam.
PE licensure is quite different from board certification for a physician or bar admission for a lawyer. Physicians and lawyers may not practice their professions without such certification; this is not always the case for engineers, and it is therefore neither appropriate nor correct to construe it to be so. The title “engineer” is legally protected in many states, meaning that it is unlawful to use it to offer engineering services to the public unless permission is specifically granted by that state, through a professional engineering license, an “industrial exemption,” or certain other nonengineering titles such as “operating engineer.” Employees of state or federal agencies may also call themselves engineers if that term appears in their official job title. In some states, businesses generally cannot offer engineering services to the public, or use a name implying that they do so, unless they employ at least one PE. For example, New York requires that the owners of a company offering engineering services be PEs. In summary, licensing procedures and requirements are state specific, but such licensure is not a requirement to testify in federal court.
As a postscript to this discussion, civil engineers often seek PE registration because of their association with public works projects. This can be traced directly back to the failure and subsequent legacy of the St. Francis dam collapse in southern California in the late 1920s. More about this disaster is discussed in Section III.C.8.
Engineers are treated like other witnesses when it comes to determining whether they can testify as factual or expert witnesses. Thus, if they have information regarding facts in dispute, an engineer can be a fact witness describing that information. In the context of the design of a product or the conception of an allegedly protectable method or device, that may take the form of describing what the engineer did to create the product or construct at issue, how he or she conceived of the subject of that product or construct, and how the product or allegedly
56. Daubert standards were established in the trilogy of cases, Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999), General Elec. Co. v. Joiner, 522 U.S. 136 (1997), and Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993), and refer to factors to be considered when assessing the admissibility of expert testimony. See generally Margaret A. Berger, The Admissibility of Expert Testimony, in this manual.
protectable property compares to other designs or intellectual property that are claimed to relate to the subject of the dispute.
As discussed above, engineers can also be expert witnesses. Like any other proffered expert, an engineer’s training, background, and experience play a role in qualifying him or her to provide expert opinions. Education, licensing, professional activities, patents, professional society involvement, committee work, standards development, professional consulting experience, and involvement in a business based on similar technology or engineering principles can all help to fortify an engineering expert’s qualifications. The work that the engineer did to acquire facts about the matter at issue (described below) and to test the engineer’s hypothesis as to how the incident in question occurred and what caused it provide still stronger bases for allowing the engineer to testify as an expert witness.
Ultimately, the court’s application of Daubert standards to the qualifications asserted by the engineer and the opinions that the engineer seeks to give determines whether the engineer may testify. In the role of gatekeeper of scientific or technical testimony, the trial judge determines whether the engineering testimony is both “relevant” and “reliable.” The relevance and reliability of engineering testimony is judged in the context of the design process and the way that engineers approach a problem as described above. And as the Court clarified in Kumho Tire, Daubert extends to all expert testimony, including testimony based on experience alone.57
Under Federal Rule of Civil Procedure 26(a)(2)(B)(i), the expert report must contain the basis and reasons for all opinions expressed, and certainly the expectation is that oral testimony will do the same. Apart from opinions based purely on knowledge, skill, experience, training, or education, nearly all expert opinion is based on observations, calculations, experimentation, or some combination thereof.
When called as an expert in a products liability case, engineers will often complete a physical inspection of a failed product or accident scene. ASTM and the National Fire Protection Association (NFPA) have published several standard practices that
57. See Margaret A. Berger, The Admissibility of Expert Testimony, in this manual.
offer guidance on inspections and related issues.58 Although it is not required that engineers adopt and follow these standards, if the court has questions as to whether the techniques or procedures used by an engineer are reasonable, reference to the standards can certainly be helpful.
As a first step in the inspection process, engineers will typically document the evidence or the accident scene using photography and videography. It may be worth noting that, just as 2009 was the first year in which the official presidential portrait was taken digitally, most engineers now record photos and video digitally. Other measurements and readings can also be made at the initial inspection, as engineers establish the state of the evidence and attempt to determine whether it has been altered subsequent to the incident.
One important issue that often arises during an inspection is the destruction of evidence, and engineers sometimes argue as to whether testing is truly destructive. ASTM E 860 provides some guidance that could be useful to the court in terms of providing a reference to engineers:
Destructive testing—testing, examination, re-examination, disassembly, or other actions likely to alter the original, as-found nature, state or condition of items of evidence, so as to preclude or adversely affect additional examination or testing.
In terms of inspections, destruction of evidence typically relates to disassembly or displacement of parts, and disputes can usually be resolved by establishing an agreed-on protocol between parties. If items that have physically broken or separated are at issue, it should be remembered that two fracture surfaces are created, each a mirror image of the other, and one can be preserved while the other is evaluated. Microscopic examination of failure surfaces, also known as fractography, is commonly used by engineers to determine the cause of failure. Fractography can be used to establish such things as how the product failed (overload versus a fatigue or time-dependent failure) and whether manufacturing defects (poor welds, voids, inclusions) exist.
b. Experiments and testing
After performing inspections of the evidence, engineers develop hypotheses as to the cause of what they are investigating and evaluate these hypotheses. One common method of testing a hypothesis is experimentation, and engineers are educated and trained to conduct experiments, often to the displeasure of their
58. Although not intended to be an exhaustive list, these standards include:
- ASTM E 860—Standard Practice for Examining and Preparing Items That Are or May Become Involved in Criminal or Civil Litigation,
- ASTM E 2332—Standard Practice for Investigation and Analysis of Physical Component Failures,
- ASTM E 1188—Standard Practice for Collection and Preservation of Information and Physical Items by A Technical Investigator, and
- NFPA 921—Guide for Fire and Explosion Investigations.
client-attorneys who would rather not perform any test for which the outcome is uncertain. Engineers can design tests to study kinematics (motions) and kinetics (forces) and to recreate accidents; to evaluate physical, mechanical, and chemical properties of materials; or to assess specific characteristics against claims in a patent. Because the circumstances surrounding accident and product failure investigation can be quite complex, and often novel as well, engineers sometimes must design experiments that have never before been performed. This notion, experiments conducted for the first time for purposes of litigation, has been the topic of much debate.
Although it is often suggested that such work is biased and therefore ought to be excluded, an experiment that is designed and executed for the purposes of litigation is not inherently suspect. If the experiment has a well-defined protocol that can be interpreted and duplicated by others, articulates its underlying assumptions, uses instrumentation and equipment that are properly calibrated, and is demonstrated to be reliable and reproducible, it should not be summarily discarded simply because it is new. It is often the case that the precise matter in dispute has not been the subject of engineering or scientific studies, because in the normal course of events the problem at hand was never addressed in a public forum and no peer-reviewed literature spoke directly to it. In typical engineering problems, because a multitude of factors can vary, it is often difficult to find suitable preexisting information, and the question may never have been framed in the way it is presented to the court.
The fact that problem identification occurs within the course of a legal dispute does not mean that the problem cannot then be explored directly, using the scientific method, the engineering design process, or both, to ascertain and understand the physical or chemical behavior at issue. In point of fact, an experiment that is designed for litigation will better fit the issues before the court, and either the plaintiff or the defendant is free to pursue this approach and to subsequently criticize the results. Not only will experiments designed to address the specific matter at issue be more directly relevant to the questions at hand, they will also provide data the court can use in thoughtful deliberation.
Indeed, in our personal experience such experiments have proven helpful in adjudicating complex issues for which no directly relevant prior work had been done; in some instances, after the litigation was completed, peer-reviewed articles were published about work originally performed to study an issue for litigation.59
59. Richard D. Hurt & Channing R. Robertson, Prying Open the Door of the Tobacco Industry’s Secrets About Nicotine: The Minnesota Tobacco Trial, 280 JAMA 1173 (1998); John Moalli et al., supra note 44; Monique E. Muggli et al., Waking a Sleeping Giant: The Tobacco Industry’s Response to the Polonium-210 Issue, 98 Am. J. Pub. Health 1643 (2008); M.S. Warner et al., Performance of Polypropylene as the Copper-7 Intrauterine Device Tailstring, 2 J. Applied Biomaterials 73 (1991); Richard Hurt et al., Open Doorway to Truth: Legacy of the Minnesota Tobacco Trial, 84 Mayo Clinic Proc. 444 (2009).
Of course, not all situations require novel techniques to be developed, and in those instances an abundance of standards for testing materials and products exists. Typically promulgated by organizations such as ASTM, ANSI, CEN, and others, these standards encompass everything from sample preparation, to sampling procedures, to test equipment operation and calibration, to analysis of data acquired during testing. Although a broad array of standards and guidelines exists, it is possible that some portion of even the more novel test may not be covered. It is also common for engineers to follow a standard to the maximum extent allowed by the circumstances and state of the evidence, and to note deviations from that standard in their protocols and reports.
A substantial portion of an engineer’s education is spent learning how to calculate things, so it should come as no surprise that when litigation is involved, engineers would be making calculations as well. As part of this education, engineers learn how to derive equations based on scientific and mathematical principles, and consequently become aware of the limitations of a particular equation or expression. Although it would be convenient if a single equation could be used to solve every engineering problem, this is clearly not the case, and so engineers must learn what principles to apply, and when to apply them.
The difference between a good calculation and a marginal one is related to how applicable the equations used in the calculation are to the situation at hand, and how valid the underlying assumptions are. As mentioned above, it is the rare case in which an engineering analysis contains no assumptions. For example, there are well-known equations that relate the pressure inside a cylindrical vessel to the stresses in the wall of that vessel. These equations assume, however, that the wall thickness of the pressure vessel is small compared with the inner diameter, and if this is not the case, significant error may result. If an engineer uses the more simplified approach, he or she should assess whether the analysis is conservative (i.e., how the assumptions affect the overall calculated result).
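To make the thin-wall caveat concrete, the following sketch compares the simple thin-wall hoop stress formula with the exact Lamé solution for a thick-walled cylinder. All numerical values are illustrative assumptions chosen for the example, not drawn from any particular case.

```python
# Thin-wall vs. thick-wall (Lame) hoop stress in a pressurized cylinder,
# illustrating how the thin-wall assumption (t small relative to r) breaks down.

def thin_wall_hoop_stress(p, r_inner, t):
    """Thin-wall approximation: sigma = p * r / t."""
    return p * r_inner / t

def lame_hoop_stress(p, r_inner, t):
    """Exact Lame solution at the inner surface of a thick-walled cylinder."""
    a, b = r_inner, r_inner + t
    return p * (b**2 + a**2) / (b**2 - a**2)

p = 10.0e6   # internal pressure, Pa (assumed)
r = 0.100    # inner radius, m (assumed)
for t in (0.005, 0.020, 0.050):   # wall thicknesses, m
    approx = thin_wall_hoop_stress(p, r, t)
    exact = lame_hoop_stress(p, r, t)
    err = abs(approx - exact) / exact * 100
    print(f"t/r = {t/r:.2f}: thin-wall {approx/1e6:.1f} MPa, "
          f"Lame {exact/1e6:.1f} MPa, error {err:.0f}%")
```

For a wall only 5% of the radius the two answers agree within a few percent, but as the wall thickens the simplified formula understates the true inner-surface stress by an increasingly significant margin, exactly the kind of error the text warns about.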
In the modern age, it is simple to download programs from the Internet that will make calculations based on input variables. These programs can save engineers considerable time, because they can reduce hours of “paper” calculation to minutes. Used blindly, though, without a proper understanding of core assumptions or approximations, these programs can be hazardous. Computer programs should always be validated, and the simplest way to accomplish that task is to have the program calculate a range of solutions for which the results are already known. The program is then validated within that range.
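The validation strategy described above can be sketched briefly: before trusting a calculation program on the disputed question, run it against cases whose answers are already known. The trapezoid integrator below is a hypothetical stand-in for any downloaded calculation tool; the validation suite of known answers is the point of the example.

```python
import math

# A stand-in "downloaded program": numerical integration by the trapezoid rule.
def trapezoid(f, a, b, n=10_000):
    """Numerically integrate f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Validation suite: (function, interval, exact answer known in advance).
known_cases = [
    (lambda x: x**2, (0.0, 1.0), 1.0 / 3.0),
    (math.sin, (0.0, math.pi), 2.0),
    (math.exp, (0.0, 1.0), math.e - 1.0),
]
for f, (a, b), exact in known_cases:
    result = trapezoid(f, a, b)
    assert abs(result - exact) < 1e-6, f"validation failed: {result} vs {exact}"
print("program validated on all known cases")
```

Only after the program reproduces the known answers within an acceptable tolerance is it considered validated, and only within the range those cases cover.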
When hand calculations become overly tedious, or are too simplified to handle a highly complex problem, engineers will often use computer models to examine
systems, processes, or phenomena. Quite distinct from the simple programs mentioned above used to solve an equation or two, these computer models employ enormous bodies of code that can solve thousands of equations. One of the most common techniques employed by these programs is the finite element method (FEM), which can be used to solve problems in stress analysis, heat transfer, and fluid flow behavior.
FEM harnesses the computational power of modern computers. It divides the system or component into small units, or elements, of uniform geometry. This mesh, as it is called, reflects the geometry of the actual system or component as closely as possible. Boundary conditions are established on the basis of known applied loads, and the fundamental equations of Newtonian mechanics are solved by iterative calculations for each individual element. The resulting loads and displacements (or stresses and strains) in each element are then summed at each increment of time to give an overall picture of the load/displacement (or stress/strain) history of the system or component. The literally millions of calculations required for each time step can only be handled by a computer. These data can then be used to determine the loads and displacements at the time of failure, information that otherwise could not be obtained from hand (or “back of the envelope”) calculations.
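As an illustration of the divide, assemble, and solve procedure just described, the sketch below applies the finite element method to the simplest possible case: a uniform bar fixed at one end and pulled axially at the other, whose exact tip displacement u = FL/EA is known in advance. The material and load values are assumptions chosen only for the example; production FEM codes solve vastly larger systems, but the structure is the same.

```python
# Minimal 1D FEM sketch: mesh a bar into elements, assemble the global
# stiffness equations, apply boundary conditions, and solve for displacements.

def solve_bar_fem(E, A, L, F, n_elems):
    """Axial bar FEM: node 0 fixed, load F at the tip; returns free-node displacements."""
    n = n_elems                      # free nodes are 1..n
    k = E * A / (L / n_elems)        # stiffness of each uniform element
    # Assemble the global stiffness matrix for the free nodes (tridiagonal).
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):
        i, j = e - 1, e              # free-node indices; i = -1 means the fixed node
        if i >= 0:
            K[i][i] += k
            K[i][j] -= k
            K[j][i] -= k
        K[j][j] += k
    f = [0.0] * n
    f[-1] = F                        # applied load at the tip node
    # Solve K u = f by naive Gaussian elimination (fine for small systems).
    for col in range(n):
        for row in range(col + 1, n):
            m = K[row][col] / K[col][col]
            for c in range(col, n):
                K[row][c] -= m * K[col][c]
            f[row] -= m * f[col]
    u = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = f[row] - sum(K[row][c] * u[c] for c in range(row + 1, n))
        u[row] = s / K[row][row]
    return u

E, A, L, F = 200e9, 1e-4, 2.0, 1000.0   # assumed steel bar and load
u = solve_bar_fem(E, A, L, F, n_elems=8)
exact_tip = F * L / (E * A)
print(f"FEM tip displacement {u[-1]:.3e} m vs exact {exact_tip:.3e} m")
```

For this linear problem the FEM answer matches the closed-form result at every node; the value of the method emerges when the geometry and loading are too complex for any closed-form solution to exist.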
In its early stages, FEM code could only be found in universities and corporate and governmental laboratories, and was executed by doctoral-level engineers who used separate programs to postprocess results into usable graphical output. Today, commercial FEM programs are widely available, and are capable of generating eye-catching graphics that appeal to juries. Other software programs are available that create similar graphics for car-crash or mechanical simulations. This tool is as much an accepted part of the engineering design community as the slide rule was in the 1960s. In addition, engineers involved in determining the cause of failure of mechanical systems have been using FEM since the 1980s to determine the loads and strains at critical points in complex geometries as part of root-cause analysis efforts. This is often a principal means to determine what actually caused something to break, and ultimately to determine whether a design or manufacturing defect or overload or abuse was ultimately at fault. FEM can, in certain circumstances, be a valuable tool to assess the cause of a design failure.
To be sure, FEM, like any scientific tool, must be properly applied and interpreted within its limitations. It can be abused and misused, and because the output from these models can be made to appear extremely realistic, especially when coupled with computer graphics, their use needs to be carefully considered. To summarily reject FEM as a simulation, though, would be to deprive a modern-day engineer of a tool that is regularly used. There is an old adage in the modeling world, called “garbage in—garbage out,” or GIGO, that gets to the heart of the issue. No matter how sophisticated the software, or how realistic the output seems, if the data fed to the program are inaccurate, the results will be poor, and thus can be misleading. The proper way to evaluate the efficacy of the model or simulation is to validate it, and this is usually done by processing known scenarios
or input conditions, and making certain the results are representative of the known output within the validated range. Regardless of the qualifications of the engineer, if any mathematical model has not been validated within the boundaries at issue, its use in the courtroom should be carefully considered. Additionally, once the model is used in litigation, engineers should be prepared to provide a fully executable copy of the model if requested during discovery.
Engineers are trained to rely on literature as part of their work, and the literature they employ is nearly as varied as engineers themselves. Structural and mechanical engineers use codes and regulations when they design everything from buildings to bridges, and pressure vessels to heating systems (an extended discussion on the use and misuse of codes is provided below). Engineers rely on published standard methods when they conduct run-of-the-mill tests, scientific literature to test the efficacy of complex calculations and experiments, and textbooks to validate techniques and methods from their educational training.
It is common for engineers to gather literature that addresses an issue about which they are testifying. Industrial engineers may gather literature related to warnings, materials engineers may collect literature related to the development and processing of a compound, and mechanical engineers may assemble literature related to stress analysis. Inevitably, some literature will not accord with the engineer’s perspective; a proper analysis of the available literature should include this material as well, with the engineer addressing discrepancies directly.
Engineers may also rely on scientific and technical literature to assess the state of knowledge at a given period in time. This is especially useful in matters involving intellectual property (discussions related to prior art, best mode, and the like) or product design (state-of-the-art analysis). The appropriateness of reliance on this type of literature should not only be weighed by its applicability to the case in discussion, but also by the engineer’s mastery and frequency of use of the particular subject. The topic of peer review is often raised concerning scientific and technical literature, and although the peer review process aids in the promotion of sound science and engineering, its presence does not ensure accuracy or validity, and its absence does not imply that a reference is scientifically unsound.
Engineers called as experts by either party in a products or personal injury case will likely review documents produced during discovery that relate to the design process of the product in question. From these documents, engineers can often assess whether appropriate actions were taken during the product design process, including product development, product testing and validation, warning and risk communication, and safety and risk assessment. Because the specific constraints imposed on a design are not always apparent from internal engineering
documents, and understanding constraints can be critical in terms of effective critical review, engineers called as experts may need to review deposition testimony relating to the design to supplement what they learn from the documents themselves.
Because engineers are problem solvers, their work frequently becomes the subject of disputes, which eventually involve lawyers and courtrooms. Many times these disputes involve the sort of “scientific, technical or other specialized knowledge” that may be best understood with the help of one or more engineers.60 Stated differently, these issues may be difficult for a jury of laypersons, or even judges, to understand and resolve without the assistance of an engineer who was not directly involved in the facts of the case. As a result, disputes involving engineering concepts and principles may be properly the subject of expert testimony from one or more witnesses qualified in the field of engineering.
Just as there are a multitude of disciplines within engineering, there are a multitude of issues upon which engineers may be called upon to testify. Some examples follow.
Generally speaking, a product may be defective if it contains a design defect, a manufacturing defect, or inadequate warnings or instructions. Therefore, disputes regarding the efficacy or safety of products typically involve questions regarding whether the product was properly designed, tested, manufactured, sold, or marketed. These issues are examined from the perspective of what was known at the time of first sale and also what was done after information became available about the product’s performance.
The conception and design of a product is often a focus of dispute in a product liability case. An understanding of the way that engineers think and the engineering design process described above is essential to determine the nature of and extent to which engineering testimony should be admitted. For example, in medical device litigation, it may be significant to know the purpose for which the medical device was designed and the process by which the design at issue was
60. Fed. R. Evid. 702.
achieved. To gain that understanding, testimony from the product designer as well as testimony by engineers with experience in design may be helpful.61
The adequacy of testing done on a product is closely related to the issue of design defect. This is true whether the testing in question occurred before the product was first sold (“premarket”) or after the product had been on the market for a time and information regarding its performance became available (“postmarket”).62 Engineering testimony may be helpful to the court and to the trier of fact in these circumstances as well.
An engineer’s examination of products that have failed in use may result in valuable evidence for a court and trier of fact to consider. For example, an engineer skilled in fractography can testify regarding how and why a product failed.63 Such testimony may prove helpful to the court and a trier of fact on such issues as whether the subject product was defective as originally designed, whether an alternative design could have been used, what the cost of such an alternative would be, and whether the manufacturer’s response to such incidents of product failure was reasonable.
61. Russell v. Howmedica Osteonics Corp., No. C06-4078-MWB, 2008 WL 913320 (N.D. Iowa 2008) (biomechanical expert allowed to testify that medical device’s inability to handle weight loads was a design defect and hence caused the plaintiff’s injuries, and that it was the defendant’s failure to warn surgeons of this fact that caused the failure of the device); Poust v. Huntleigh Healthcare, 998 F. Supp. 478 (D.N.J. 1998) (engineer with expertise in medical device use, safety, and design allowed to testify about defects concerning lack of instructions, the alarm, lack of fail-safe mechanism, and lack of pressure gauge in pneumatic compression device); see also Dunton v. Arctic Cat, Inc., 518 F. Supp. 2d 296 (D. Me. 2007) (admitting expert testimony of mechanical engineer and product designer regarding, among other things, purpose and design of certain components of allegedly defectively designed snowmobile); Floyd v. Pride Mobility Prods. Corp., No. 1:05-CV-00389, 2007 WL 4404049 (S.D. Ohio 2007) (three engineering experts, including mechanical engineer with expertise as product designer, allowed to testify about defects in design of scooter); Tunnell v. Ford Motor Co., 330 F. Supp. 2d 731 (W.D. Va. 2004) (engineer allowed to testify about feasibility of proposed safer auto design).
62. See, e.g., Smith v. Ingersoll-Rand Co., 214 F.3d 1235 (10th Cir. 2000) (human factors engineering expert allowed to testify that defendant failed to conduct an adequate human factors analysis of milling machine); Montgomery v. Mitsubishi Motors Corp., 448 F. Supp. 2d 619 (E.D. Pa. 2006) (engineer allowed to testify that improper or deficient testing rendered vehicle design defective and unsafe, based in part on his review of test results of another engineer); accord Phelan v. Synthes (U.S.A.), 35 F. App’x 102 (4th Cir. 2002) (biomedical engineer not allowed to testify about inadequacy of premarket testing of medical device when underlying opinion that device was unreasonably dangerous was not supported by reliable methodology).
63. See Parkinson v. Guidant Corp., 315 F. Supp. 2d 754 (W.D. Pa. 2004) (metallurgist who reviewed fractographs was allowed to testify in product liability action that manufacturing flaws caused the premature fracture of guidewire used in angioplasty); Hickman v. Exide, Inc., 679 So. 2d 527 (La. Ct. App. 1996) (expert in, among other things, fracture analysis was allowed to testify about cause of explosion of car battery in product liability action); Reif v. G & J Pepsi-Cola Bottlers, Inc., No. CA87-05-041, 1988 WL 14052 (Ohio Ct. App. Feb. 15, 1988) (fractography expert was allowed to testify about cause of break in broken glass bottle).
The manufacture of a product, and the quality process through which uniformity of ingredients, processes, and the final product is ensured, may properly be the subject of product safety litigation. Testimony of engineers with experience in designing and implementing manufacturing systems to ensure product quality may be critical in resolving product disputes and helpful to the court and trier of fact.64
Warnings issues in product liability cases lie at the intersection of factual evidence and legal standards and thus are particularly difficult for the court or other trier of fact to resolve. Many product disputes involve claims concerning the adequacy of warnings that accompanied the product when it was first sold. In these cases, the focus may be on what was known through the conception and design phases of the design process and on the necessity for and adequacy of the warnings that accompanied the product in view of that knowledge. Other disputes center on the warnings that were added, or could have been added, after the product had been used and the company received feedback from users of the product. The reasonableness of the company’s response to these reports may be an issue. Thus, the case may be decided on the basis of whether the company conducted, or failed to conduct, design and testing activities in view of that information, or whether the company modified the product or communicated to users of the product what it knew.
But not all warnings issues are properly the subject of expert testimony, particularly with respect to products that are regulated by federal law.65 Properly qualified engineers may be able to provide opinions that could help the court and the trier of fact to understand such issues with respect to such products, but
64. See, e.g., Galloway v. Big G Express, Inc., 590 F. Supp. 2d 989 (E.D. Tenn. 2008) (defendant’s expert with significant experience in engineering fields, including product design, allowed to testify about manufacturing process used by the defendant); Schmude v. Tricam Indus., Inc., 550 F. Supp. 2d 846 (E.D. Wis. 2008) (discussing generally the propriety of admitting testimony of expert who studied mechanical engineering and had degree in product design regarding manufacturing process for rivets used in ladder that collapsed); Yanovich v. Sulzer Orthopedics, Inc., No. 1:05 CV 2691, 2006 WL 3716812 (N.D. Ohio 2006) (discussing testimony of engineering experts regarding manufacture of medical device). See also Pineda v. Ford Motor Co., 520 F.3d 237 (3d Cir. 2008) (metallurgical engineer allowed to testify about explicit procedure for replacing allegedly defective product in order to reduce likelihood of product failure).
65. The FDA’s drug approval process may preempt state law product liability claims based on a failure to warn. See, e.g., Riegel v. Medtronic, Inc., 552 U.S. 312, 128 S. Ct. 999 (2008). See also Bates v. Dow Agrosciences LLC, 332 F.3d 323 (5th Cir. 2003) (discussing preemption of state law product liability claims by the Federal Insecticide, Fungicide, and Rodenticide Act). But see Wyeth v. Levine, 555 U.S. 555, 129 S. Ct. 1187 (2009) (holding that FDA approval of a drug did not preempt state law tort claim based on inadequate drug warnings).
nonetheless may not be allowed to testify based on the substantive law applicable to such products.66
For example, industrial engineers, or engineers educated in human factors, may have training that allows them to testify not only about when warnings are necessary from an engineering perspective (recall the discussion above about the design process), but also about the efficacy of warnings and the development of risk communications, including text, pictures, and auditory or visual signals.
Issues regarding the sale and marketing of products often concern promises made regarding the expected performance of the product, including both the positive results that a product is able to achieve and, especially, what possible harm a product may cause. The efficacy of a product may be proved by a straightforward comparison between premarket data on product performance and sales and marketing claims, and engineers may provide helpful testimony regarding the interpretation of such data. There may be a dispute about whether the claims made about the product’s safety exceeded the testing results that had been obtained for the product or led to a hazardous situation because the product was not properly tested. These issues may also be the subject of appropriately qualified engineering testimony.
Common personal injury cases may also present issues on which engineering testimony may be helpful. Such disputes often turn on testimony as to how a particular trauma occurred. Our discussion of biomechanical engineering highlights some of these issues.67 In a car accident case, properly qualified engineers may provide opinion testimony regarding how an accident occurred, including reconstructing the conduct of each of the parties and how that conduct affected the accident. In a slip-and-fall case, engineering testimony can concern such basic issues as why the injured person slipped and what could have been done to prevent it.
In addition to the above, engineers may also testify about various aspects of a party’s damages and give an opinion about whether those alleged damages were caused by the conduct in question. Testimony about causation in a products dispute often involves both factual and legal questions. Through experience, training, and activities in the case, engineers may have the ability to understand the interrelationship between events and thus can provide helpful testimony on whether the asserted damages had a relationship to the asserted misconduct so as to have
66. See Pineda v. Ford Motor Co., 520 F.3d 237 (3d Cir. 2008) (metallurgical engineer permitted to testify that safety manual should have contained warning about glass failure in SUV); Michaels v. Mr. Heater, Inc., 411 F. Supp. 2d 992 (W.D. Wis. 2006) (human factors engineering expert allowed to testify about the adequacy of product warning); Nesbitt v. Sears, Roebuck & Co., 415 F. Supp. 2d 530 (E.D. Pa. 2005) (expert with practical experience as engineer allowed to testify that the plaintiff would have responded to an additional warning); Santoro v. Donnelly, 340 F. Supp. 2d 464 (S.D.N.Y. 2004) (mechanical engineer allowed to testify about adequacy of warnings for fireplace heater).
67. Supra Section I.C.
been “caused” by it.68 But issues regarding the standard to apply for the sufficiency of causal proof may be both scientific and legal issues. Thus, the adequacy or admissibility of an engineer’s opinion on causation will be evaluated in light of the law, as well as the adequacy of the science that forms the basis for the opinion.69
Situations where property damages are asserted may pose special problems on which engineering testimony may be appropriate. For example, determining whether a product problem is an isolated occurrence or whether it is part of a widespread product problem may be difficult to resolve in the absence of engineering testing and analysis, which aims at determining a product defect and a product breakdown process.
Although the definition of what is defective may be the subject of a jury instruction at trial,70 proof of a product defect may involve identifying key facts that
68. See, e.g., Nemir v. Mitsubishi Motors Corp., 381 F.3d 540 (6th Cir. 2004) (automotive safety engineer allowed to testify that defective seatbelt latching mechanism caused plaintiff’s injuries); Babcock v. General Motors, 299 F.3d 60 (1st Cir. 2002) (structural and mechanical engineer allowed to give testimony about impact speed, cause of injuries, how the product allegedly ultimately failed, and testing procedures for the product); McCullock v. H.B. Fuller Co., 61 F.3d 1038 (2d Cir. 1995) (engineer allowed to testify regarding whether plaintiff was within “breathing zone” for hot-melt glue in workplace); Perez v. Townsend Eng’g Co., 545 F. Supp. 2d 461 (M.D. Penn. 2008) (engineer allowed to testify that product was defective, that defect caused plaintiff’s injury, and that alternative design would have prevented injury); Farmland Mut. Ins. Co. v. AGCO Corp., 531 F. Supp. 2d 1301 (D. Kan. 2008) (electrical engineer allowed to testify about cause of farm equipment fire); Phillips v. Raymond Corp., 364 F. Supp. 2d 730 (N.D. Ill. 2005) (biomechanical engineer testified as to the mechanics of plaintiff’s injury resulting from allegedly defective forklift); Tunnell v. Ford Motor Co., 330 F. Supp. 2d 731 (W.D. Va. 2004) (engineer allowed to testify there was an absence of evidence that the accident was caused by electrical arcing); Figueroa v. Boston Scientific Corp., 254 F. Supp. 2d 361 (S.D.N.Y. 2003) (expert with substantial experience, education, and knowledge in engineering field allowed to testify about cause of damage to plaintiff); Yarchak v. Trek Bicycle Corp., 208 F. Supp. 2d 470 (D.N.J. 2002) (expert in forensic and safety engineering, among other subjects, allowed to testify that bicycle seat caused the plaintiff’s erectile dysfunction); Traharne v. Wayne Scott Fetzer Co., 156 F. Supp. 2d 690 (N.E. Ill. 2001) (electrical engineer allowed to testify about cause of deceased’s electrocution); Bowersfield v. Suzuki Motor Corp., 151 F. Supp. 2d 625 (E.D. Pa. 
2001) (engineer allowed to testify about causation of automobile passenger’s injuries).
69. See generally Margaret A. Berger, The Admissibility of Expert Testimony, in this manual; see also Michael D. Green et al., Reference Guide on Epidemiology, Section V, in this manual.
70. Restatement (Third) of Torts § 2 (1998) provides that the general definition of a product defect is as follows:
A product is defective when, at the time of sale or distribution, it contains a manufacturing defect, is defective in design, or is defective because of inadequate instructions or warnings. A product:
(a) contains a manufacturing defect when the product departs from its intended design even though all possible care was exercised in the preparation and marketing of the product;
(b) is defective in design when the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design by the seller or other distributor,
relate to that definition. These issues may be the subjects of the testimony of an engineer. The issue of whether a product is unreasonably dangerous may involve proof of available alternative designs, the existence of modifications to the product that would make it safer (which directs us back to the discussion on risk, above; all products can be made safer at the expense of cost and/or convenience) and what consumers expect the product to do or not to do, to identify a few such issues.71 Engineers may be asked to engage in testing to determine a cause and/or mechanism of failure and as a basis for an opinion regarding product defect. Such testing may include accelerated testing and end-use testing to replicate the conditions that the products see in use.
To understand the product and its expected or anticipated uses, engineers may review documents regarding the product at issue and published literature about like products or product elements. Visits to sites where the product is or was in use may provide information to engineers about recurring characteristics of product performance and aspects of the environment of use, which bear on that performance. Visual examination, measurements made at the site and experiments conducted at the site and in the laboratory may provide valuable information regarding the characteristics of the product that affect product performance or nonperformance.
In sum, the engineer’s problem-solving approach using the design process as described above can provide valuable information about the nature and cause of product problems and the limitations of the design of the product at issue, including the characteristics of the environment of use and the choice of materials for the subject product. Armed with this and other information, properly qualified engineers can provide valuable opinions on issues going to the heart of the question of whether the product at issue is defective and caused the claimed damages.72
or a predecessor in the commercial chain of distribution, and the omission of the alternative design renders the product not reasonably safe;
(c) is defective because of inadequate instructions or warnings when the foreseeable risks of harm posed by the product could have been reduced or avoided by the provision of reasonable instructions or warnings by the seller or other distributor, or a predecessor in the commercial chain of distribution, and the omission of the instructions or warnings renders the product not reasonably safe.
See also Restatement (Second) of Torts § 402A (1965), which defines a defect as one that makes a product “unreasonably dangerous.”
71. Martinez v. Triad Controls, Inc., 593 F. Supp. 2d 741 (E.D. Pa. 2009) (engineer allowed to testify about design defects and warnings); Page v. Admiral Craft Equip. Corp., No. 9:02-CV-15, 2003 WL 25685212 (E.D. Tex. 2003) (mechanical engineer allowed to testify about defect in design of bucket and safer alternative).
72. “While an expert’s legal conclusions are not admissible, an opinion as to the ultimate issue of fact is admissible, so long as it is based upon a valid scientific methodology and supported by facts. See Fed. R. Evid. 704. The ‘ultimate issue of fact,’ as used in Rule 704, means that the expert furnishes an opinion about inferences that should be drawn from the facts and the trier’s decision on such issue necessarily determines the outcome of the case.” Strickland v. Royal Lubricant Co., Inc., 911 F. Supp. 1460, 1469 (M.D. Ala. 1995).
Engineering testimony may be helpful in disputes regarding patents and other forms of intellectual property. Knowledge has become a key source of wealth in our economy,73 and we increasingly depend on innovation and its protection.74 The federal government’s power to protect patents and copyrights is one of only a handful of enumerated powers in the U.S. Constitution.75 Engineers are at the very heart of technology and innovation and therefore often become natural contributors to the resolution of disputes involving these subjects.
The issues for factual or expert engineering testimony in this area are closely allied to those highlighted in the above description of the product design process. Key issues concern conception and development of the invention or protected trade secret, commercialization and sales/marketing of the protected concept, infringement or theft of the protected concept, and damages, including proof of willfulness or bad intent. There are myriad situations in which engineering testimony may be received. We will highlight a few of them.
The patentability of an idea is measured by its advance over prior art in the relevant field. Almost all new inventions are combinations or uses of known elements. What constitutes prior art and what is the relevant field for such art are thus questions that relate to the conception stage of the design process. Whether an engineer is qualified to testify about these issues is answered under the Daubert standard. Thus, engineering testimony may be helpful on such issues as whether the invention is new or novel and whether it is non-obvious to one who has ordinary skill in the art. Engineers can also help to define the description of the person with ordinary skill and interpret what such a person would learn from the art in question. Prior art is meant to include all prior work in the field; it sometimes connotes “public” prior art, not hidden or unknown art. A properly qualified engineer witness can provide relevant and reliable testimony regarding these and other prior art–related questions.
The rules for using engineering experts in patent infringement proceedings in federal courts are reasonably well defined. For example, under the U.S. Supreme Court’s decision in KSR International Co. v. Teleflex, Inc.,76 non-obviousness is ultimately a question of law for the court to decide. However, the underlying factual determinations, including the secondary factors involved in determining patent validity, remain jury questions concerning which expert engineering testimony may be admitted.77 Questions regarding the scope and teachings of prior art also may invite engineering testimony and interpretation.78
73. See, e.g., Thomas A. Stewart, Intellectual Capital (1997).
74. “There is established within the [National Institute of Standards and Technology] a program linked to the purpose and functions of the Institute, to be known as the ‘Technology Innovation Program’ for the purpose of assisting United States businesses and institutions of higher education or other organizations, such as national laboratories and nonprofit research institutions, to support, promote, and accelerate innovation in the United States through high-risk, high-reward research in areas of critical national need.” 15 U.S.C. § 278n. See also Prioritizing Resources and Organization for Intellectual Property (PRO-IP) Act of 2008, Pub. L. No. 110-403, 122 Stat. 4256 (2008); America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act, Pub. L. No. 110-69 (2007).
75. U.S. Const. art. I, § 8.
76. 550 U.S. 398 (2007). See also Dennison Mfg. Co. v. Panduit Corp., 475 U.S. 809 (1986).
A similar analysis applies to trade secret matters. The nature and scope of the claimed trade secret, including the steps taken to protect it, are all issues on which the engineer as a witness may be involved. This is true of protected processes and methods as well as devices or other products of the subject trade secret.79
When an assertion is made that intellectual property or protected trade secrets have been infringed, engineering testimony may be necessary as courts attempt to identify the features of the challenged device, method, or process that infringe the protected property.80 Additional issues may relate to knowledge regarding the protected property and the conception and design of the subject of the alleged infringement.81
Proof of damages may involve a number of issues that relate to or derive from an engineer’s analysis of the scope of claims or protected methods and processes, commercial viability of the subject intellectual property, and scope of protection afforded by the subject patent. Qualification of engineers as witnesses to provide testimony in these areas may present its own challenges under Daubert as to both reliability and relevance.
There are many other areas where engineering testimony may be helpful to the court and trier of fact. Because the range of such possible situations is virtually limitless, we will list only a few examples.
77. See Finisar Corp. v. DirecTV Group, Inc., 523 F.3d 1323, 1338 (Fed. Cir. 2008).
78. See, e.g., Rosco, Inc. v. Mirror Lite Co., 506 F. Supp. 2d 137 (E.D.N.Y. 2007) (mechanical engineer allowed to present testimony of his review of patent for teaching or suggestion as to meaning of the claims).
79. Am. Heavy Moving & Rigging Co. v. Robb Technologies, LLC, No. 2:04-CV-00933-JCM (GWF), 2006 WL 2085407 (D. Nev. 2006) (in case involving misuse of trade secrets, engineer appointed by the court to assist in making discovery rulings).
80. See, e.g., The Post Office v. Portec, Inc., 913 F.2d 802 (10th Cir. 1990).
81. See State Contracting & Eng’g Corp. v. Condotte Am., Inc., 346 F.3d 1057 (Fed. Cir. 2003) (expert in civil and structural engineering testified about whether different pieces of prior art are in the same field of endeavor as patents at issue); Philips Indus., Inc. v. State Stove & Mfg. Co., 522 F.2d 1137 (6th Cir. 1975) (use of engineering expert to establish the presence of design concept in prior art); Mayview Corp. v. Rodstein, 385 F. Supp. 1122 (C.D. Cal. 1974) (tool engineer testifying about concept of balance in hand-tool design in prior art).
- Claims of personal injury or property damage resulting from the spread of a toxic substance may involve a number of issues where engineering testimony may be both reliable and relevant.82
- Environmental disputes regarding the necessity for and nature of an environmental problem and the responsibility for and cost of its cleanup involve numerous issues concerning which properly qualified engineers may provide reliable and relevant evidence.83
- The testimony of an engineer can be helpful in determining causation in both products liability and nonproducts liability cases. Recent cases in the electrical engineering area demonstrate the range of situations in which such issues may arise and engineering testimony may be admitted.84 For example, an electrical engineer was allowed to testify about lightning in Walker v. Soo Line Railroad Co.85 The plaintiff in that case filed suit under the Federal Employers’ Liability Act. Claiming that he had been injured by lightning while working in a railroad tower, the plaintiff sought to introduce the testimony of the chairman of the electrical engineering department at the University of Florida to the effect that lightning could have struck a number of places in the yard and penetrated the tower without a direct hit. The district court excluded the evidence, and the Seventh Circuit reversed, finding that the jury would have been helped by hearing the engineer’s testimony about the ways in which lightning could have struck the tower, even if he could not testify which of the locations was struck or whether any of them was struck at all.
- In a slightly different context than might be expected, an electricity transmission line planning engineer testified as an expert at an administrative hearing in California Public Utilities Commission v. California Energy Resources Conservation & Development Commission.86 The dispute in that case was the extent of the CERCDC’s jurisdiction over transmission line siting, and more specifically the interpretation of a section in California’s Public Resources Code which defined “electric transmission line” as “any electric power line carrying electric power from a thermal power plant located within the state to a point of junction with any interconnected transmission system.”87 The engineer, employed by Pacific Gas & Electric, testified at the hearing before the CERCDC about the use of certain terms in the industry related to that definition and about electricity transmission principles. The court subsequently relied in part on that testimony in determining that “electric transmission line” had a plain meaning, and that the plain meaning cut off the CERCDC’s jurisdiction at the first point at which a power line emanating from a thermal power plant joined the interconnected transmission grid.
82. See, e.g., Jaasma v. Shell Oil Co., 412 F.3d 501 (3d Cir. 2005) (civil and environmental engineer permitted to testify about environmental status of real property, which was relevant to damages and efforts to mitigate); In re Train Derailment Near Amite La., No. Civ. A. MDL. 1531, 2006 WL 1561470 (E.D. La. 2006) (court relied on declaration of environmental engineer regarding exposure to airborne contaminants in concluding that claims of potential class were not based on actual physical harm).
83. Olin Corp. v. Lloyd’s London, 468 F.3d 120 (2d Cir. 2006) (admission of environmental civil engineer testimony on issue of property damage in pollution liability insurance coverage case not an abuse of discretion).
84. See, e.g., Newman v. State Farm Fire & Cas. Co., 290 F. App’x. 106 (10th Cir. 2008) (electrical engineer was allowed to testify about the origin of fire that destroyed insureds’ house); McCoy v. Whirlpool Corp., 287 F. App’x 669 (10th Cir. 2008) (electrical engineer was allowed to testify in product liability case that manufacturing defect in dishwasher caused fire).
85. 208 F.3d 581 (7th Cir. 2000).
86. 50 Cal. App. 3d 437 (Cal. Ct. App. 1984).
- Civil engineers also have been allowed to testify in a broad range of circumstances, including those involving an improper application of the design process in the building of a bridge. Numerous examples of the roles of engineers in building bridges that ultimately failed are described in an earlier version of this guide.88 In each of those situations, engineers who were involved in the design of the bridge, who had experience designing other bridges, or who had experience with design generally could be qualified to testify in an inquiry or lawsuit about the causes and financial implications of the failures.
Following are several issues with which engineers frequently are confronted in the course of attempting to give testimony as experts. Each of them is controlled by the specific fact pattern that gives rise to the case and the way in which the case is presented. In this section we describe the issues as they are perceived by the engineer in the courtroom. Because this is not a treatise on the procedural or substantive law at issue, we do not summarize the state of the law on each issue. We assume that the court and other readers of this guide are familiar with the applicable law on these issues.
87. Id. at 440.
88. Petroski, supra note 45, at 593–94, 597–600, 604–06, 608–09, 612–13.
In an earlier section of this guide (Section III.C.2), we referred to the Stringfellow case.89 An important aspect of this litigation revolved around estimates of the mass of toxic materials released over a long time period, made some 20 to 30 years after the fact. To reconstruct events long past, a chemical engineer used aerial photographs of the site taken during its period of operation to estimate the surface areas of the toxic waste ponds. His qualifications were challenged under Frye v. United States90 because he was not a photogrammetrist. Although he had the background and credentials to support his work with the aerial photographs, he lacked the pedigree, and the court found that a photogrammetrist would need to confirm his findings. In the end a photogrammetrist corroborated the engineer’s work. This is but one example of an all-too-common situation in which an engineering expert’s qualifications have been challenged based on “name” rather than on relevant and documented experience. Under Daubert, there may be even more pressure on the court to assess who can or cannot testify as an expert. But this example illustrates that a court should be cautious about drawing conclusions about an expert’s qualifications based solely on titles, licenses, registrations, and other such documentation.
Another issue engineers commonly confront in their testimony is the standard of care. Engineers do not think of the standard of care and the duty of care as concepts of tort law, particularly negligence. Instead, for many engineers, “standard of care” means “how we do it in my office” or some variation thereof.
Following the Oklahoma City bombing, a structural engineering expert prepared a report regarding blast damage and progressive collapse for the U.S. Attorney prosecuting McVeigh. In an attempt to block this testimony, McVeigh’s defense team obtained an affidavit from an engineer with a well-known structural engineering firm to the effect that the prosecution expert’s report did not meet minimal standards for a building condition report because it did not include detailed architectural and structural drawings, measurements, and specifications, all of which were irrelevant to the issues at bar. The defense expert engineer argued essentially that he and his firm were leaders in the field of building assessment reports and therefore what they did set the standard. He was wrong on two counts: (1) the practice of any single firm or office does not establish the standard of care, and (2) the standard of care for one technical purpose (condition assessment of commercial buildings with leaky curtain walls) cannot be applied to another technical purpose (determination of number of bombs employed to destroy a building) just because both involve buildings and engineers.
89. United States v. Stringfellow, No. CV 83-2501 JMI, 1993 WL 565393 (C.D. Cal. Nov. 30, 1993).
90. 293 F. 1013 (D.C. Cir. 1923).
The phrase “standard of care” has various meanings and connotations to engineers that are somewhat discipline specific. The standard of care in the medical sciences may differ from the standard of care in some other context. In engineering, it can be said that the standard of care is met whenever the design process was properly employed at the relevant point in time. Although the design process itself is “fixed,” when properly applied to a problem in the 1940s and again to the same problem in 2009, the design outcome can be quite different and indeed might be expected to be so. Even so, the standard of care may be met each time.
“State of the art” has a specific meaning in the law and may be the subject of a particular statute in many jurisdictions. In addition, state of the art can be a distinct defense in many states.91 To engineers, however, its meaning may be slightly different.
Simply put, this phrase refers to the current stage of development of a particular technology or technological application. It does not imply the best one can ever hope for; it is merely a statement that, at whatever point in time is referenced, the technology was in a certain condition or form. For instance, the Intel 4004 4-bit microprocessor was state of the art in 1971, whereas an Intel 64-bit microprocessor was state of the art in 2006. Of course, there is the question whether in either case those microprocessors were state of the art for just Intel, for all American semiconductor companies, or for all semiconductor companies in the world. The question of the context in which this phrase is used often lies at the heart of disputes. Because the appropriate context may be difficult to pin down, experts are often challenged to define the “state of the art” in relation to a particular technology or application. The answer from an engineering perspective is often an assumption, nothing more, nothing less. As such, it is best to accept this phrase as a general colloquialism that is difficult to define even though it is simple to state.
Although this term is used colloquially and oftentimes in “business” activities, to engineers it is not easily quantifiable and suffers from meaning different things to different people. Despite this, it generally refers to the notion that at any point in time there exists a method, technique, or process that is preferred over any other to deliver a particular outcome. That being said, there is great latitude in how one goes about determining that preference and associating it with the desired outcome. So, although it sounds good, this phrase is fraught with ambiguity. In the end, the more important issue is whether there was adherence to the design process.
91. See, e.g., Ariz. Rev. Stat. § 12-683(1) (2009); Colo. Rev. Stat. § 13-21-403(1)(a) (2009); Ind. Code § 34-20-5-1(1) (2009).
An issue that often arises in matters involving buildings and structures is the distinction between design codes and physics (political laws vs. physical laws) in the context of failure analysis. Design codes and standards are very conservative political documents. They are conservative because they are intended to address the worst-case scenario with a comfortable margin of safety. But buildings do not fail because of code violations—they fail according to the laws of physics. They do not fail when the code-prescribed loads exceed the code-prescribed strength of the materials—they fail when the actual imposed loads exceed the physical strength of the components. Buildings fail not when the laws of man are ignored but when the laws of physics are violated. Examples of this are most common in the context of earthquake-damaged structures. Buildings are not designed to resist 100% of expected earthquake forces. Rather, they are designed to resist only a fraction of the expected load (typically about one-eighth) without permanently deforming. The code implicitly recognizes that buildings are much stronger than assumed in design and also have considerable ability to absorb overloads without failure or collapse. Yet following an earthquake, engineers may inappropriately compare the ground accelerations recorded by the U.S. Geological Survey with design values in the code.
In the Northridge, California, earthquake, recorded acceleration values were two to three times greater than the design code values. Many engineers concluded that the buildings had been “overstressed” by 200–300% and were thus extensively damaged, even if that damage was not visually apparent. In a line of reasoning remarkably similar to that of the plaintiff’s expert in Kumho,92 the damage was “proved” analytically, even though it could not be physically seen (or disproved) in the building itself. (If the same logic were applied to cars, every car that sustained an impact greater than the design capacity of the bumper would be a total loss.) If this approach were accepted, the determination of damage could be done only by a few wizards with supposedly sophisticated, yet often unproven, analytical tools. The technical issues in the Northridge situation were thus removed from the realm of observation and common sense (where a jury has a chance of understanding the issues) to the realm of arcane analysis where the experts have the final say.
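The arithmetic behind this criticism can be sketched in a few lines. The following is purely illustrative: the function names and the numeric values (a 0.2 g code design acceleration, a 0.5 g recorded acceleration, a reserve factor of 8) are hypothetical assumptions, not data from Northridge; the reserve factor reflects only the text’s observation that designs target roughly one-eighth of the expected earthquake load.

```python
# Illustrative sketch (hypothetical numbers): why exceeding a code design
# value does not by itself establish structural failure. The code design
# load is only a fraction of the expected load, so structures carry large
# reserve capacity beyond the code value.

def naive_overstress_ratio(recorded_accel_g: float, code_design_accel_g: float) -> float:
    """The flawed comparison: recorded ground motion vs. the code design value."""
    return recorded_accel_g / code_design_accel_g

def demand_vs_actual_capacity(recorded_accel_g: float,
                              code_design_accel_g: float,
                              reserve_factor: float) -> float:
    """Demand relative to the structure's actual (not code-assumed) capacity."""
    actual_capacity_g = code_design_accel_g * reserve_factor
    return recorded_accel_g / actual_capacity_g

# Hypothetical values: code design acceleration 0.2 g, recorded 0.5 g,
# reserve factor 8 (designs target roughly one-eighth of expected load).
naive = naive_overstress_ratio(0.5, 0.2)            # 2.5 -> "overstressed"
realistic = demand_vs_actual_capacity(0.5, 0.2, 8)  # 0.3125 -> within capacity

print(f"naive ratio: {naive:.2f}, demand/actual capacity: {realistic:.2f}")
```

The naive ratio exceeds 1 and invites the “overstressed” conclusion, while the same demand measured against the structure’s actual capacity remains well below 1, which is the point of the paragraph above.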
This is not to say that standards and codes do not have their place in the courtroom. We described above how standards are often used by engineers to conduct tests, and cases that involve malpractice or standard-of-care issues may critically examine whether a particular code was followed in the course of a design. On the other hand, failure to use a code, or a comparison of code values to actual values, does not guarantee that a disaster will occur. Common sense is often the best judge in these situations: if a code value is exceeded, yet no damage is observed, it is likely that the conservative nature of the code met its objectives and protected its subject.
92. 526 U.S. 137 (1999). In Kumho, the expert inferred the defect from an alleged set of conditions, even though the alleged defect was not observed.
From an evidentiary perspective, evidence of similar or like circumstances must overcome a number of hurdles before it can be admitted.93 To an engineer, however, the concept of similarity, or “other similar incidents” (OSIs), has a somewhat different meaning and describes the types of circumstances, and the documentation of such circumstances, that an engineer can rely on as a basis for his or her opinions. Although this section focuses primarily on product design issues, the underlying theme is nonetheless broadly applicable across the domain of engineering forensics.
Sometimes these other events are recorded in documentary form and relate to product performance characteristics, product failures, product anomalies, product performance anomalies, operational problems associated with product use, product malfunctions, or other types of product failures. These events are sometimes alleged by a party to a dispute to be substantially similar in kind to the event or circumstance that precipitated the subject case. Alleged OSIs can be documented in multiple forms: (1) written narratives from various sources (consumers, employees of the manufacturer, bystanders to a reported event, insurers’ representatives, investigators, law enforcement personnel, owners of a location involved in the dispute at bar, etc.) who might prepare and submit a record of observation to a legal entity who retains those records of submission; (2) telephonic reports of the same character and source as written reports, but documented through telephone reports made to a recording representative or office staff responsible for collecting event reports of interest to a legal entity; (3) electronic submissions of the same character and source as written narratives; (4) reports in a standardized format that are intended to record and document events of interest (the forms may be in written or electronic media); and (5) images of events in film or electronic media that may or may not also have been recorded and submitted in alternative formats. As a result, each may have its own evidentiary hurdles to overcome before it is admitted into evidence.
Similarly, each OSI may have legal issues regarding authentication, which may be overcome by the repository where the underlying documentation is found. The repositories of documents and reports that may be alleged to be OSIs to an issue at bar can have many original purposes, and a collection of such documents may serve multiple purposes for the owner institution. Such document collections may be used by the owner of the repository for various administrative purposes, accounting, claims management and resolution, an archive of information and/or data, database management, institutional knowledge building, warranty management, in-service technology performance assessment and discovery, service records, customer interactions, and satisfaction of regulatory specifications or requirements, to name a few. Discovery requests may call for the owner of the materials to search and retrieve records, documents, and reports from such repositories even if the collections and repositories themselves may not have been constructed for the purposes of document search and retrieval. Sometimes engineers can be of use in searching and retrieving potentially relevant materials.
93. For evidence of other similar incidents (OSIs) to be admissible, the proponent must show that the OSIs are (1) relevant, see Fed. R. Evid. 401; (2) “substantially similar” to the defect alleged in the case at bar; and (3) the probative value of the evidence outweighs its prejudicial effect, see Fed. R. Evid. 403. Some courts merge the first two requirements; to be relevant, the OSIs must be substantially similar to the incident at issue.
OSIs are discovered and may be offered into evidence to (1) demonstrate prior knowledge on the part of the record owner regarding an alleged defect or danger manifest to the consuming public that is causally related to the issue at bar; (2) demonstrate by the number, volume, or rate of reports that a defect exists; and/or (3) demonstrate careless disregard for the safety of others.94 For an OSI to be admitted or relied upon by an engineering expert, the proponent must demonstrate that the event recorded and reported is “substantially similar” to the issue at bar.95 Testifying engineers can be useful in identifying and describing the specific characteristics that must be known and shown to make an assessment of similarity, including specifying objective parameters for determinations of the degree of similarity or dissimilarity and detailing the objective parameters and physical measurements necessary and sufficient to determine substantial similarity. The conditions that are necessary and sufficient to demonstrate substantial similarity include the following: (1) the product or circumstance in the alleged OSI must be of like design to the product or condition at issue in the instant case; (2) the product or circumstance in the alleged OSI must be of like function to the product or condition at issue in the instant case; (3) the application to which the product had been subjected must be like the application to which the product at issue in the instant case was subjected; and (4) the condition of the product, its state of repair, and/or its relevant state of wear must be like the state of repair and the relevant state of wear of the product involved in the instant case.96 Engineers can contribute to a technical understanding of each of these dimensions and, in some cases, they may be able to apply objective measures to questions of substantial similarity and thus quantify the level of similarity between an event proffered as an OSI and the instant case.
94. See, e.g., Sparks v. Mena, No. E2006-02473-COA-R3-CV, 2008 WL 341441, at *2 (Tenn. Ct. App. Feb. 6, 2008); Francis H. Hare, Jr. & Mitchell K. Shelly, The Admissibility of Other Similar Incident Evidence: A Three-Step Approach, 15 Am. J. Trial Advoc. 541, 544–45 (1992).
95. See, e.g., Bitler v. A.O. Smith Corp., 391 F.3d 1114, 1126 (10th Cir. 2004); Whaley v. CSX Transp. Inc., 609 S.E.2d 286, 300 (S.C. 2005); Cottrell, Inc. v. Williams, 596 S.E.2d 789, 793–94 (Ga. Ct. App. 2004).
96. See, e.g., Brazos River Auth. v. GE Ionics, Inc., 469 F.3d 416, 427 (5th Cir. 2006); Steele v. Evenflo Co., 147 S.W.3d 781, 793 (Mo. Ct. App. 2004).
The reverse is also true. Failure to establish likeness in any of these dimensions is failure to demonstrate substantial similarity to the circumstances of the subject case.97 If one or more of the necessary and sufficient conditions are unknown or unknowable, the test of substantial similarity also fails; the lack of demonstrable similarity is a lack of substantial similarity.
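The conjunctive logic of this test can be sketched in a few lines of code. This is purely an illustrative model, not a legal standard: each of the four dimensions (design, function, application, and condition/state of repair) is treated as shown alike (True), shown unlike (False), or unknown (None), and the test fails unless every dimension is affirmatively shown alike.

```python
# Illustrative sketch (hypothetical, not a legal test): substantial
# similarity as a conjunctive test over the four likeness dimensions.
# True = shown alike, False = shown unlike, None = unknown/unknowable.
def substantially_similar(design, function, application, condition):
    dimensions = [design, function, application, condition]
    # An unknown dimension (None) defeats the test just as a known
    # dissimilarity does: the lack of demonstrable similarity is a
    # lack of substantial similarity.
    return all(d is True for d in dimensions)

# A proffered OSI alike in design, function, and application but with
# an unknown state of repair fails the test:
print(substantially_similar(True, True, True, None))   # False
print(substantially_similar(True, True, True, True))   # True
```

The point the sketch makes concrete is that the burden runs one way: only affirmative showings of likeness count, and silence in the record on any dimension is fatal to the comparison.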
To demonstrate like design, a product or condition need not be identical in all aspects of form.98 It must simply be similar in form to the product or condition at issue in the instant case.99 Consider a machine control design with a feature alleged to have been the proximate cause of an injury-producing event that gave rise to a product liability lawsuit. Events proffered as OSIs that involve products having an identical control design meet the test of “likeness” in design. In addition, other control designs that differ in aspects not related to the feature that is alleged to have served as the proximate cause for the instant injury event may also be considered to be “like” if the relevant design elements on the two products cannot be differentiated. Engineers can assess the design elements of the control, determine which features may be relevant to questions of design likeness, and provide testimony to answer such questions.
Like function can be demonstrated if the operational purpose of the product or condition defined in the alleged OSI is similar to the function of the product or condition in the instant case. In the control design hypothesized above, a control that is applied to command the dichotomous functional states of start and stop (either “on” or “off”) for a crane winch might serve the same operational purpose to start and stop another type of equipment or winch. In such a case, the functions and purpose of the control design may be alike. If, however, that same control design is applied to a machine in which the operational purpose is not simply to command a dichotomous “on” or “off” signal, but rather to provide a modulated signal to which the machine response is a continuously variable function of control placement, the control design function is unlike the purpose of dichotomous positioning. Engineers can provide assessments and analyses of the functions embedded in a specific design and assist in the determination of likeness or lack of likeness between an instant condition and one proffered as an OSI.
Like application can be demonstrated if it can be shown that the operational conditions to which the product is subject are alike in the proffered OSI and in the instant case.100 The environmental exposure to which a product is subjected must be of like condition. A control design function can vary with temperature,
97. See, e.g., Peters v. Nissan Forklift Corp. N. Am., No. 06-2880, 2008 WL 262552, at *2 (E.D. La. Feb. 1, 2008); Whaley v. CSX Transp., Inc., 609 S.E.2d 286, 300 (S.C. 2005).
98. See, e.g., Bitler v. A.O. Smith, 391 F.3d 1114, 1126 (10th Cir. 2004).
100. See, e.g., Steele v. Evenflo Co., 147 S.W.3d 781, 793 (Mo. Ct. App. 2004).
air or water exposure, reactions to corrosive elements, reactions to acid or base contaminants, and potential interactions with surrounding materials and components that can be of differing electrochemical potential. Engineers with the appropriate technical background can evaluate operating conditions and applications and determine whether the conditions that obtain for a proffered OSI are similar to those that obtained in the instant case, thereby assisting the determination of substantial similarity.
Differing environmental exposures resulting from differing applications may render an event proffered as an OSI unlike and not substantially similar. Further, like application requires that the load and stress conditions under which a product or condition is placed be substantially similar to the circumstances that obtained in the instant case to which the OSIs are being proffered for comparison. In the control design identified above, the control device may be manually actuated through a lever. Levers of differing length will apply differing forces to the control device and produce differing operational stresses upon the control device itself. The durability and performance of the control design can be affected by these differing operating applications, and anomalies or failures under one application may not be at all similar to those that obtain under differing circumstances in which the operating loads and applied stresses are different. Engineers are well qualified to assess conditions of comparative loading and applied stresses.
A like state of repair can be demonstrated if there is reasonable evidence that products involved in the proffered OSI are (1) in a specific working order, (2) in a condition of adjustment (if possible to adjust), (3) in a state of wear, and (4) within an expected range of tolerance that would not differentiate the product or condition from that which obtained in the product or condition involved in the instant case. Additionally, the products or conditions reported in the proffered OSIs must be shown to be free of modification from an original design state, or must be shown to be in a state of modification that is reflective of the product or condition involved in the instant case.101 An absence of evidence to demonstrate a state of likeness in application, operating environment, state of repair and wear, or state of modification is not sufficient to show similarity. Engineers with appropriate background can review data and information about modifications and service conditions related to wear and wear rates, as well as assess information related to the state of repair or disrepair, and thereby contribute to understanding of the level of similarity or dissimilarity among specific events and operational conditions.
For evidentiary reasons, OSIs generally are not admissible to demonstrate the truth of the matter recorded therein.102 Event records are necessarily reports of noteworthy events made after the fact by parties who may or may not have an interest in establishing a specific fact pattern, may or may not be qualified to
101. See, e.g., Cottrell, Inc. v. Williams, 596 S.E.2d 789, 791, 794 (Ga. Ct. App. 2004).
102. Fed. R. Evid. 801 & 802.
make the observations and assertions included in such reports, and may or may not have any specialized training necessary to evaluate proper system function or state of repair. The persons who report events collected and offered as OSIs may not be fully informed of the set of circumstantial conditions that are necessary and sufficient to determine causation of the reported event. Thus, reported events often contain incomplete or insufficient data and information to determine substantial similarity. Even if informed, persons reporting events may lack the observational powers, tools, and insights necessary for accurate evaluation and reporting. The individuals who make reports regarding recorded events may be unable to factually assess and accurately report all of the conditions relevant to determination of event causation and resolution of questions regarding substantial similarity. Moreover, reports made by parties who have an economic or other interest in the outcome of a report or claim may not accurately disclose known or knowable facts that could bear on determinations of causation and substantial similarity. Therefore, such reports, if offered to prove the truth of the other incidents, are typically excluded as hearsay (unless the business records exception applies).103
Computer animations, simulation models, and digital displays have become more common in television and movies, especially in entertainment media concerning forensic investigation, law enforcement, and legal drama. The result is an increased expectation among courts and juries that visual graphics and displays will be used by engineering experts and other expert witnesses to explain and illustrate their testimony. Additionally, off-the-shelf presentation software, such as PowerPoint, is often used. Attorneys and their clients typically expect their experts to use computer animations, simulations, and/or exhibits to educate the jury and demonstrate the bases for their opinions. When used correctly, these tools can make the expert’s testimony understandable and can leave a lasting impression with the trier of fact of that party’s theory of the case. For that very reason, the role of the court as the gatekeeper for use of these demonstratives has become increasingly critical. As the technology underlying these tools rapidly advances, the court’s task likewise becomes more difficult. In assessing the validity of these tools, the court is often forced to decide whether the visual display accurately represents the evidence and/or is supported by the
103. See Willis v. Kia Motors Corp., No. 2:07CV062-PA, 2009 WL 2351766 (N.D. Miss. July 29, 2009) (finding customer complaints of similar accidents were not hearsay because they were offered to show notice, not the truth of the matter asserted, and even if they were hearsay, they fell under the business records exception of Fed. R. Evid. 803(6)).
expert’s opinions and qualifications.104 To assist the court in this difficult task, we present some guidance regarding the types of technology presently in use and the strengths and weaknesses of each.
A primary basis for misunderstanding and uncertainty is the difference between a computer animation and a computer simulation. An animation is a sequence of still images, graphically illustrated (two dimensions) or modeled (three dimensions), that are often textured and rendered to create the illusion of motion. A cartoon is a simple example. There are no constraints inherent in an animation, and the laws of physics, or any other science, do not necessarily apply (a black mouse can be dressed in red shorts with yellow shoes and be made to dance, sing, and fly). The lack of imposed restriction does not make the animation deficient a priori; if the still images that comprise the animation are accurate in their representation of individual snapshots of time, then the animation itself can be proven accurate. The converse, of course, is also true.
Animations contain key frames that define the starting and ending points of actions, with sequences of intermediate frames defining how movements are depicted. For example, a series of still photographs can depict the path of a vehicle vaulting off an embankment, with a single image at the takeoff, mid-flight, and landing positions each correct in its representation. However, when an animation of the event is created, the intermediate frames fill in the missing areas, and if so desired, contrary to known physical phenomena, the animation could show the vault trajectory of the vehicle to remain flat and then suddenly drop, similar to the inaccurate representation of motion experienced by a cartoon coyote momentarily contemplating his fate after chasing a bird off a cliff. Thus, in an animation, some of the inputs (stills) may represent reality, but the sum of the parts (intermediate frames) may not.
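The vaulting-vehicle example can be made concrete with a short numerical sketch. The figures below (a 2-second flight under gravity) are hypothetical, and the sketch simplifies to two key frames, takeoff and landing: both the physically correct trajectory and a “cartoon” interpolation pass through the same key frames, yet they disagree badly in between.

```python
# Hypothetical sketch: the same key frames can be connected by
# non-physical in-between frames. A vehicle vaults off an embankment
# and lands 2 s later; illustrative numbers, not case data.
g = 9.8  # m/s^2

def physics_height(t):
    # True projectile drop below the takeoff height after time t
    return -0.5 * g * t**2

def cartoon_height(t):
    # "Coyote" interpolation: stay level, then drop suddenly at the end
    return 0.0 if t < 1.8 else physics_height(2.0) * (t - 1.8) / 0.2

# Both paths agree exactly at the key frames (t = 0 s and t = 2 s)...
for t in (0.0, 2.0):
    assert abs(physics_height(t) - cartoon_height(t)) < 1e-9

# ...but diverge in the intermediate frames (t = 1 s):
print(physics_height(1.0))   # -4.9: already 4.9 m below takeoff
print(cartoon_height(1.0))   # 0.0: still level, contrary to physics
```

This is the sense in which, in an animation, the inputs (stills at the key frames) may represent reality while the sum of the parts (the intermediate frames) may not.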
Unlike an animation, a simulation is a computer program that relies on source data and algorithms to mathematically model a particular system (see, e.g., the discussion on finite element modeling, above), and allows the user to rapidly and inexpensively gain insight into the operation and sensitivity of that system to certain constraints. Perhaps the most common example of a simulation can be found daily as a computer-generated image showing the predicted growth of a storm system.
On the surface, a simulation would seem to provide more accuracy than an animation. However, this is not necessarily the case. The simulation model is only as accurate as its input data and/or constraining variables and the equations that form its calculation stream. Simulation models also require a sensitivity analysis—just because a model produces an answer does not mean that it is the best model or
104. See Lorraine v. Markel Am. Ins. Co., 241 F.R.D. 534 (D. Md. 2007) (distinguishing between demonstrative computer animations and scientific computer simulations and discussing the evidentiary requirements, including authentication, for each); People v. Cauley, 32 P.3d 602 (Colo. Ct. App. 2001) (same).
the most correct answer. For example, a computer model depicting the motions of a vehicle prior to and after an impact with a pole may be correct if it matches the known physical evidence (e.g., tire marks and vehicle damage). However, whether the model is accurate depends on the accuracy of the inputs for tire friction, vehicle stiffness, vehicle weight, location of the vehicle’s center of gravity, etc. Even if the inputs are accurate, once a solution is found, other solutions may exist that also match the evidence. Assessing the accuracy of each solution requires an iterative process of making changes to those variables believed to have the greatest effect on the output. Simply put, the difference between a vehicle accident simulation model that predicts 10 inches of crush deformation and two complete revolutions post impact versus 14 inches of crush and three complete revolutions may depend on just a few selected vehicle characteristics. Thus, compared to an animation, in a simulation model, the sum of the constraining variables and equations may represent reality, but some of the user-selected inputs may not.
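The sensitivity point can be illustrated with a deliberately simplified toy model. Everything here is an assumption for illustration: crush depth is modeled by a spring-like energy balance (impact energy ½mv² absorbed as ½kx², so x = v·√(m/k)), and the mass, speed, and stiffness values are invented, not drawn from any real vehicle. A one-at-a-time sweep over the stiffness input shows how much the predicted crush moves when a single user-selected input changes.

```python
import math

# Hypothetical toy crush model (spring-like energy balance):
# 0.5*m*v^2 = 0.5*k*x^2  =>  x = v * sqrt(m / k)
# All input values below are illustrative only.
def crush_depth(mass_kg, speed_ms, stiffness_n_per_m):
    return speed_ms * math.sqrt(mass_kg / stiffness_n_per_m)

baseline = crush_depth(1500, 10.0, 2.4e6)   # 0.25 m of crush

# One-at-a-time sensitivity sweep over the stiffness input alone:
for k in (1.9e6, 2.4e6, 2.9e6):
    x = crush_depth(1500, 10.0, k)
    print(f"stiffness {k:.1e} N/m -> crush {x:.3f} m "
          f"({(x - baseline) / baseline:+.0%} vs baseline)")
```

Even in this crude sketch, varying one input by roughly 20% shifts the predicted crush by about 10%, which is the sense in which a simulation's constraining equations may represent reality while some of the user-selected inputs may not.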
The difficulty for the court is the need to decide whether some or all of the computer animation or simulation accurately represents the facts and/or opinions of the expert.105 This is not an easy endeavor, but can usually be executed in a reasonable fashion for simulations by evaluating whether the simulation has been validated. If the underlying program predicts the behavior of vehicles in a crash, it can be validated by crashing vehicles under controlled conditions, and comparing the actual results to those predicted by the simulation. If the software in question predicts the response of a complex object to applied forces, it can be validated by modeling a simple object, the response of which can be calculated by hand, and comparing the simulation to those known results.106
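The second validation route described above, comparing a simulation of a simple case against a hand calculation, can be sketched directly. Here a trivial time-stepping integrator stands in for a crash or finite-element code (an assumption for illustration), applied to free fall from rest, whose closed-form answer, d = ½gt², can be computed by hand and compared against the simulated result within a stated tolerance.

```python
# Minimal validation sketch: run the "simulation" (a simple
# time-stepping integrator standing in for a more complex code) on a
# problem with a hand-calculable answer, then compare the two.
g = 9.8  # m/s^2

def simulated_drop(t_end, dt=1e-4):
    # Semi-implicit Euler integration of dv/dt = g, dx/dt = v
    n = round(t_end / dt)
    x, v = 0.0, 0.0
    for _ in range(n):
        v += g * dt
        x += v * dt
    return x

hand_calc = 0.5 * g * 2.0**2   # closed form: 19.6 m after 2 s
sim = simulated_drop(2.0)

# Agreement within a stated tolerance is one piece of validation
# evidence for the integrator on this class of problem.
assert abs(sim - hand_calc) / hand_calc < 1e-3
```

The same logic scales up: a code that matches hand calculations on simple objects, and controlled experiments on complex ones, has earned some measure of the trust the court is being asked to extend.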
Similarly, for animations, engineers need to establish authenticity, relevance, and accuracy in representing the evidence using visual means.107 They may rely on blueprint drawings, CAD (computer-aided design) drawings, U.S. Geological Survey data, photogrammetry, geometric databases (vehicles, aircraft, etc.), eyewitness statements, and field measurements to establish accuracy of an animation.
Most engineers are not educated in the law and to them the setting of a deposition or a courtroom is peculiar and often uncomfortable. The rules are different
105. See id.
106. See Lorraine v. Markel Am. Ins. Co., 241 F.R.D. 534 (D. Md. 2007); Livingston v. Isuzu Motors, Ltd., 910 F. Supp. 1473 (D. Mont. 1995) (finding computer simulation of rollover accident by expert to be reliable and admissible under Daubert where the computer program was made up of various physical laws and equations commonly understood in science, the program included case-specific data, and the expert’s computer simulation methodology had been peer reviewed).
107. See, e.g., Friend v. Time Mfg. Co., No. 03-343-TUC-CKL, 2006 WL 2135807 (D. Ariz. July 28, 2006); People v. Cauley, 32 P.3d 602 (Colo. Ct. App. 2001).
from those to which they are accustomed. The conversations are somewhat alien. Treading in this unfamiliar territory is a challenge. And so, although it is important for the engineer to “fit” into this environment, it is equally important for the triers of fact and the court to understand the engineer’s world. We hope this chapter has provided a glimpse into that world, and by considering it, the reader will have some insight as to why engineers respond to questions as they do. The foundation that underlies and supports essentially all that has been done and all that will be done by engineers is the design process. It is the roadmap for innovation, invention, and reduction to practice that characterizes those who do engineering and who call themselves “engineers.” It is the key metric against which products and processes can be and should be evaluated.
The authors would like to thank the following for their significant contributions: Dr. Roger McCarthy, Robert Lange, Dr. Catherine Corrigan, Dr. John Osteraas, Michael Kuzel, Dr. Shukri Souri, Dr. Stephen Werner, Dr. Robert Caligiuri, Jeffrey Croteau, Kerri Atencio, and Jess Dance.