Bernard D. Goldstein, M.D., is Professor of Environmental and Occupational Health and Former Dean, Graduate School of Public Health, University of Pittsburgh.
Mary Sue Henifin, J.D., M.P.H., is a Partner with Buchanan Ingersoll, P.C., Princeton, New Jersey.
A. On What Species of Animals Was the Compound Tested? What Is Known About the Biological Similarities and Differences Between the Test Animals and Humans? How Do These Similarities and Differences Affect the Extrapolation from Animal Data in Assessing the Risk to Humans?
A. Does the Proposed Expert Have an Advanced Degree in Toxicology, Pharmacology, or a Related Field? If the Expert Is a Physician, Is He or She Board Certified in a Field Such as Occupational Medicine?
B. Has the Proposed Expert Been Certified by the American Board of Toxicology, Inc., or Does He or She Belong to a Professional Organization, Such as the Academy of Toxicological Sciences or the Society of Toxicology?
The discipline of toxicology is primarily concerned with identifying and understanding the adverse effects of external chemical and physical agents on biological systems. The interface of the evidence from toxicological science with toxic torts can be complex, in part reflecting the inherent challenges of bringing science into a courtroom, but also because of issues particularly pertinent to toxicology. For the most part, toxicological study begins with a chemical or physical agent and asks what impact it will have, while toxic tort cases begin with an individual or a group that has suffered an adverse impact and makes claims about its cause. A particular challenge is that only rarely is the adverse impact highly specific to the toxic agent; for example, the relatively rare cancer known as mesothelioma, which arises in the lining of the lung and chest cavity, is almost always caused by asbestos. The more common form of lung cancer, bronchial carcinoma, also can be caused by asbestos, but asbestos is a relatively uncommon cause compared with smoking, radon, and other known causes of lung cancer.1 Lung cancer itself is unusual in that for the vast majority of cases, we can point to a known cause—smoking. However, for many diseases, such as pancreatic cancer, there are few if any known causes. Even when there are known causes of a disease, as with leukemia, most individual cases cannot be ascribed to any of them.
In general, there are only a limited number of ways that biological tissues can respond, and there are many causes for each response. Accordingly, the role of toxicology in toxic tort cases often is to provide information that helps evaluate the probability that an adverse event with potentially many causes was caused by a specific agent. Similarly, toxicology is commonly used as a basis for regulating chemicals according to their potential for harm. Disputes over how well toxicology predicts an adverse consequence, relative to the stringency of the governing regulatory law, are not uncommon bases for legal actions against regulatory agencies.
Identifying cause-and-effect relationships in toxicology can be relatively straightforward; for example, when placed on the skin, concentrated sulfuric acid will cause massive tissue destruction, and carbon monoxide poisoning is identifiable by the extent to which carbon monoxide is attached to the oxygen-carrying portion of blood hemoglobin, thereby decreasing oxygen availability to the body. But even these two seemingly straightforward examples serve to illustrate the complexity of toxicology and particularly its emphasis on understanding dose–response relationships. The tissue damage caused by sulfuric acid is not specific to this chemical, and at lower doses, no effect will be seen. Carbon monoxide is not only an external poison but is a product of normal internal metabolism such
1. Contrast this issue with the relatively straightforward situation in infectious disease, in which the disease name identifies the cause; for example, cholera is caused by Vibrio cholerae, tuberculosis by Mycobacterium tuberculosis, AIDS by the human immunodeficiency virus (HIV), and so on.
that about 1 out of 200 hemoglobin molecules will normally have carbon monoxide attached, and this can increase depending upon concomitant disease states. Furthermore, the complex temporal relation governing the uptake and release of carbon monoxide from hemoglobin also must be considered in assessing the extent to which an adverse impact may be ascribable to carbon monoxide exposure. Thus the diagnosis of carbon monoxide poisoning requires far more information than the simple presence of detectable carbon monoxide in the blood.
Complexity in toxicology is derived primarily from three factors. The first is that chemicals often change within the body as they go through various routes to eventual elimination.2 Thus absorption, distribution, metabolism, and excretion are central to understanding the toxicology of an agent. The second is that human sensitivity to chemical and physical agents can vary greatly among individuals, often as a result of differences in absorption, distribution, metabolism, or excretion, as well as target organ sensitivity—all of which can be genetically determined. The third major source of complexity is the need for extrapolation, either across species, because much toxicological data are obtained from studies in laboratory animals, or across doses, because human toxicological and epidemiological data often are limited to specific dose ranges that differ from the dose suffered by a plaintiff alleging a toxic tort impact. All three of these factors are responsible for much of the complexity in utilizing toxicology for tort or regulatory judicial decisions and are described in more detail below.
Classically, toxicology is known as the science of poisons. It is the study of the adverse effects of chemical and physical agents on living organisms.3 Although it is an age-old science, toxicology has only recently become a discipline distinct from pharmacology, biochemistry, cell biology, and related fields.
There are three central tenets of toxicology. First, “the dose makes the poison”; this implies that all chemical agents are intrinsically hazardous—whether they cause harm is only a question of dose.4 Even water, if consumed in large quantities, can be toxic. Second, each chemical or physical agent tends to produce a specific pattern of biological effects that can be used to establish disease
2. Direct-acting toxic agents are those whose toxicity is due to the parent chemical entering the body. A change in chemical structure through metabolism usually results in detoxification. Indirect-acting chemicals are those that must first be metabolized to a harmful intermediate for toxicity to occur. For an overview of metabolism in toxicology, see R.A. Kemper et al., Metabolism: A Determinant of Toxicity, in Principles and Methods of Toxicology 103–178 (A. Wallace Hayes ed., 5th ed. 2008).
3. Casarett and Doull’s Toxicology: The Basic Science of Poisons 13 (Curtis D. Klaassen ed., 7th ed. 2007).
4. A discussion of more modern formulations of this principle, which was articulated by Paracelsus in the sixteenth century, can be found in David L. Eaton, Scientific Judgment and Toxic Torts—A Primer in Toxicology for Judges and Lawyers, 12 J.L. & Pol’y 5, 15 (2003); Ellen K. Silbergeld, The Role of Toxicology in Causation: A Scientific Perspective, 1 Cts. Health Sci. & L. 374, 378 (1991). A short review of the field of toxicology can be found in Curtis D. Klaassen, Principles of Toxicology and Treatment of Poisoning, in Goodman and Gilman’s The Pharmacological Basis of Therapeutics 1739 (11th ed. 2008).
causation.5 Third, the toxic responses in laboratory animals are useful predictors of toxic responses in humans. Each of these tenets, and their exceptions, is discussed in greater detail in this reference guide.
The science of toxicology attempts to determine at what doses foreign agents produce their effects. The foreign agents classically of interest to toxicologists are all chemicals (including foods and drugs) and physical agents in the form of radiation, but not living organisms that cause infectious diseases.6
The discipline of toxicology provides scientific information relevant to the following questions:
- What hazards does a chemical or physical agent present to human populations or the environment?
- What degree of risk is associated with chemical exposure at any given dose?7
Toxicological studies, by themselves, rarely offer direct evidence that a disease in any one individual was caused by a chemical exposure.8 However, toxicology can provide scientific information regarding the increased risk of contracting a disease at any given dose and help rule out other risk factors for the disease. Toxicological evidence also contributes to the weight of evidence supporting causal inferences by explaining how a chemical causes a specific disease through describing metabolic, cellular, and other physiological effects of exposure.
The growing concern about chemical causation of disease is reflected in the public attention devoted to lawsuits alleging toxic torts, as well as in litigation concerning the many federal and state regulations related to the release of potentially toxic compounds into the environment.
Toxicological evidence frequently is offered in two types of litigation: tort and regulatory. In tort litigation, toxicologists offer evidence that either supports
5. Some substances, such as central nervous system toxicants, can produce complex and nonspecific symptoms, such as headaches, nausea, and fatigue.
6. Forensic toxicology, a subset of toxicology generally concerned with criminal matters, is not addressed in this reference guide, because it is a highly specialized field with its own literature and methodologies that do not relate directly to toxic tort or regulatory issues.
7. In standard risk assessment terminology, hazard is an intrinsic property of a chemical or physical agent, while risk is dependent both upon hazard and on the extent of exposure. Note that this first “law” of toxicology is particularly pertinent to questions of specific causation, while the second “law” of toxicology, the specificity of effect, is pertinent to questions of general causation.
8. There are exceptions, for example, when measurements of levels in the blood or other body constituents of the potentially offending agent are at a high enough level to be consistent with reasonably specific health impacts, such as in carbon monoxide poisoning.
or refutes plaintiffs’ claims that their diseases or injuries were caused by chemical exposures.9 In regulatory litigation, toxicological evidence is used to either support or challenge government regulations concerning a chemical or a class of chemicals. In regulatory litigation, toxicological evidence addresses the issue of how exposure affects populations10 rather than addressing specific causation, and agency determinations are usually subject to the court’s deference.11
Dose is a central concept in the field of toxicology, and an expert toxicologist will consider the extent of a plaintiff’s dose in forming an opinion.12 But dose has not been a central issue in many of the most important judicial decisions concerning the relation of toxicological evidence to toxic tort decisions. These have mostly turned on general causation: for example, is a silicone breast implant capable of causing rheumatoid arthritis, or is Bendectin capable of causing birth defects?13 However, in most specific causation disputes involving exposure to a chemical known to be capable of causing the observed effect, the primary issue will be whether there has been exposure to a dose sufficient to be a likely cause of that effect.
9. See, e.g., Gen. Elec. Co. v. Joiner, 522 U.S. 136 (1997); Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993). Courts have held that toxicologists can testify as to disease causation related to chemical exposures. See, e.g., Bonner v. ISP Techs, Inc., 259 F.3d 924, 928–31 (8th Cir. 2001); Paoli R.R. v. Monsanto Co., 915 F.2d 829 (3d Cir. 1990); Loudermill v. Dow Chem. Co., 863 F.2d 566, 569–70 (8th Cir. 1988).
10. Again, there are exceptions. For example, certain regulatory approaches, such as the control of hazardous air pollutants, are based on the potential impact to a putative maximally exposed individual rather than to the general population.
11. See, e.g., Int’l Union, United Mine Workers of Am. v. U.S. Dep’t of Labor, 358 F.3d 40, 43–44 (D.C. Cir. 2004) (determinations by Secretary of Labor are given deference by the court, but must be supported by some evidence, and cannot be capricious or arbitrary); N.M. Mining Ass’n v. N.M. Water Quality Control Comm., 150 P.3d 991, 995–96 (N.M. Ct. App. 2006) (action by a government agency is presumptively valid and will be given deference by the court. The court will only overturn a regulatory decision if it is capricious and arbitrary, or not supported by substantial evidence).
12. Dose is a function of both concentration and duration. Haber’s rule is a century-old simplified expression of dose effects in which the effect of a concentration and duration of exposure is a constant (e.g., exposure to an agent at 10 parts per million for 1 hour has the same impact as exposure to 1 part per million for 10 hours). Exposure levels, which are concentrations, are often confused with dose. This can be particularly problematic when attempting to understand the implications of exposure to a level that exceeds a regulatory standard that is set for a different time frame. For example, assume a drinking water contaminant is a known cause of cancer. To avoid a 1 in 100,000 lifetime risk caused by this contaminant in drinking water, and assuming that the average person will drink approximately 2000 mL of water daily for a lifetime, the regulatory authority sets the allowable contaminant standard in drinking water at 10 µg/L. Drinking one glass of water containing 20 µg/L of this contaminant, although exceeding the standard, does not come close to achieving a “reasonably medically probable” cause of an individual case of cancer.
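The distinction footnote 12 draws between an exposure concentration and a dose can be made concrete with a short numerical sketch. This is only an illustration using the footnote’s hypothetical numbers; the 250-mL glass size and 70-year lifetime are assumptions not stated in the text.

```python
# Haber's rule (simplified): effect ~ concentration x duration, so dose,
# not concentration alone, is what matters. Compare the lifetime dose
# implied by the regulatory standard with the dose from a single glass
# of water that exceeds that standard.
# Numbers follow the footnote's hypothetical; glass volume and lifetime
# are assumed for illustration.

STANDARD_UG_PER_L = 10.0      # allowable drinking water standard, ug/L
DAILY_INTAKE_L = 2.0          # assumed average daily water intake, liters
LIFETIME_DAYS = 70 * 365      # assumed 70-year lifetime

# Lifetime dose implied by drinking at the standard every day (micrograms)
lifetime_dose_ug = STANDARD_UG_PER_L * DAILY_INTAKE_L * LIFETIME_DAYS

# Dose from one assumed 250-mL glass at twice the standard (micrograms)
one_glass_dose_ug = 20.0 * 0.25

print(f"lifetime dose at the standard: {lifetime_dose_ug:,.0f} ug")
print(f"one glass at 20 ug/L:          {one_glass_dose_ug:.1f} ug")
print(f"ratio: {lifetime_dose_ug / one_glass_dose_ug:,.0f}x")
```

The orders-of-magnitude gap between the two doses is why briefly drinking water that exceeds a lifetime-based standard does not, by itself, establish a medically probable cause of disease.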
13. See, e.g., In re Silicone Gel Breast Implants Prods. Liab. Litig., 318 F. Supp. 2d 879, 891 (C.D. Cal. 2004); Joseph Sanders, From Science to Evidence: The Testimony on Causation in the Bendectin Cases, 46 Stan. L. Rev. 1, 19 (1993).
This reference guide focuses on the scientific issues that arise most frequently in toxic tort cases. Where it is appropriate, the guide explores the use of regulatory data and how the courts treat such data. It also provides an overview of the basic principles and methodologies of toxicology and offers a scientific context for proffered expert opinion based on toxicological data.14 The reference guide describes research methods in toxicology and the relationship between toxicology and epidemiology, and it provides model questions for evaluating the admissibility and strength of an expert’s opinion. Following each question is an explanation of the type of toxicological data or information that is offered in response to the question, as well as a discussion of its significance.
Toxicological studies usually involve exposing laboratory animals (in vivo research) or cells or tissues (in vitro research) to chemical or physical agents, monitoring the outcomes (such as cellular abnormalities, tissue damage, organ toxicity, or tumor formation), and comparing the outcomes with those for unexposed control groups. As explained below,15 the extent to which animal and cell experiments accurately predict human responses to chemical exposures is subject to debate.16 However, because it is often unethical to experiment on humans by exposing them to known doses of chemical agents, animal toxicological evidence often provides the best scientific information about the risk of disease from a chemical exposure.17
In contrast to their exposure to drugs, only rarely are humans exposed to environmental chemicals in a manner that permits a quantitative determination of adverse outcomes.18 This area of toxicological study may consist of individual or multiple case reports, or even experimental studies in which individuals or groups of individuals have been exposed to a chemical under circumstances that permit analysis of dose–response relationships, mechanisms of action, or other aspects of
14. The use of toxicological evidence in regulatory decisionmaking is discussed in Casarett and Doull’s Toxicology: The Basic Science of Poisons, supra note 3, at 13–14; Barbara D. Beck et al., The Use of Toxicology in the Regulatory Process, in Principles and Methods of Toxicology, supra note 2, at 45–102. For a more general discussion of issues that arise in considering expert testimony, see Margaret A. Berger, The Admissibility of Expert Testimony, Section IV, in this manual.
15. See infra Section I.D.
16. The controversy over the use of toxicological evidence in tort cases is described in Bernard D. Goldstein, Toxic Torts: The Devil Is in the Dose, 16 J.L. & Pol’y 551 (2008); Joseph V. Rodricks, Evaluating Disease Causation in Humans Exposed to Toxic Substances, 14 J.L. & Pol’y 39 (2006); Silbergeld, supra note 4, at 378.
17. See, e.g., Office of Tech. Assessment, U.S. Congress, Reproductive Health Hazards in the Workplace 8 (1985).
18. However, it is from drug studies in which multiple animal species are compared directly with humans that many of the principles of toxicology have been developed.
toxicology. For example, individuals occupationally or environmentally exposed to polychlorinated biphenyls (PCBs) prior to prohibitions on their use have been studied to determine the routes of absorption, distribution, metabolism, and excretion for this chemical. Human exposure occurs most frequently in occupational settings where workers are exposed to industrial chemicals such as lead or asbestos; however, even under these circumstances, it is usually difficult, if not impossible, to quantify the amount of exposure. Moreover, human populations are exposed to many other chemicals and risk factors, making it difficult to isolate the increased risk of a disease that is the result of exposure to any one chemical.19
Toxicologists use a wide range of experimental techniques, depending in part on their area of specialization. Toxicological research may focus on classes of chemical compounds, such as solvents and metals; body system effects, such as neurotoxicology, reproductive toxicology, and immunotoxicology; and effects on physiological processes, including inhalation toxicology, dermatotoxicology, and molecular toxicology (the study of how chemicals interact with cell molecules). Each of these areas of research includes both in vivo and in vitro research.20
Animal research in toxicology generally falls under two headings: safety assessment and classic laboratory science, with a continuum between them. As explained in Section I.E, safety assessment is a relatively formal approach in which a chemical’s potential for toxicity is tested in vivo or in vitro using standardized techniques often prescribed by regulatory agencies, such as the Environmental Protection Agency (EPA) and the Food and Drug Administration (FDA).21
The roots of toxicology in the science of pharmacology are reflected in an emphasis on understanding the absorption, distribution, metabolism, and excretion of chemicals. Basic toxicological laboratory research also focuses on the mechanisms of action of external chemical and physical agents. Such research is based on the standard elements of scientific studies, including appropriate experimental design using control groups and statistical evaluation. In general, toxicological research attempts to hold all variables constant except for that of the chemical exposure.22 Any change in the experimental group not found in the control group is assumed to be perturbation caused by the chemical.
19. See, e.g., Office of Tech. Assessment, U.S. Congress, supra note 17, at 8.
20. See infra Sections I.C.1, I.C.2.
21. W.J. White et al., The Use of Laboratory Animals in Toxicology Research, in Principles and Methods of Toxicology 1055–1102 (A. Wallace Hayes ed., 5th ed. 2008); M.A. Dorato et al., The Toxicologic Assessment of Pharmaceutical and Biotechnology Products, in Principles and Methods of Toxicology 325–68 (A. Wallace Hayes ed., 5th ed. 2008).
22. See generally Alan Poole & George B. Leslie, A Practical Approach to Toxicological Investigations (1989); Principles and Methods of Toxicology (A. Wallace Hayes ed., 2d ed. 1989); see also discussion on acute, short-term, and long-term toxicity studies and acquisition of data in Frank C. Lu, Basic Toxicology: Fundamentals, Target Organs, and Risk Assessment 77–92 (2d ed. 1991).
a. Dose–response relationships
An important component of toxicological research is the characterization of dose–response relationships; most toxicological studies therefore test a range of doses of the chemical. Animal experiments are conducted to determine the dose–response relationship of a compound by measuring how response varies with dose, including diligently searching for a dose that has no measurable physiological effect. This information is useful in understanding the mechanisms of toxicity and extrapolating data from animals to humans.23
b. Acute toxicity testing—lethal dose 50
To determine the dose–response relationship for a compound, a short-term lethal dose 50% (LD50) may be derived experimentally. The LD50 is the dose at which a compound kills 50% of laboratory animals within a period of days to weeks. This easily measured end point for acute toxicity has largely been replaced, in part because recent advances in toxicology have provided other pertinent end points, and in part because of pressure from animal rights activists to reduce or replace the use of animals in laboratory research.24
c. No observable effect level
A dose–response study also permits the determination of another important characteristic of the biological action of a chemical—the no observable effect level (NOEL).25 The NOEL sometimes is called a threshold, because it is the level above which observable effects in test animals are believed to occur and below which no toxicity is observed.26 Of course, because the NOEL is dependent on the ability to
23. See infra Sections I.D, II.A.
24. Committee on Toxicity Testing and Assessment of Environmental Agents, National Research Council, Toxicity Testing in the 21st Century: A Vision and a Strategy (2007).
25. For example, undiluted acid on the skin can cause a horrible burn. As the acid is diluted to lower and lower concentrations, less and less of an effect occurs until there is a concentration sufficiently low (e.g., one drop in a bathtub of water, or a sample with less than the acidity of vinegar) that no effect occurs. This no observable effect concentration differs from person to person. For example, a baby’s skin is more sensitive than that of an adult, and skin that is irritated or broken responds to the effects of an acid at a lower concentration. However, the key point is that there is some concentration that is completely harmless to the skin.
26. The significance of the NOEL was relied on by the court in Graham v. Canadian National Railway Co., 749 F. Supp. 1300 (D. Vt. 1990), in granting judgment for the defendants. The court found the defendants’ expert, a medical toxicologist, persuasive. The expert testified that the plaintiffs’ injuries could not have been caused by herbicides, because their exposure was well below the reference dose, which he calculated by taking the NOEL and decreasing it by a safety factor to ensure no human effect. Id. at 1311–12 & n.11. But see Louderback v. Orkin Exterminating Co., 26 F. Supp. 2d 1298 (D. Kan. 1998) (failure to consider threshold levels of exposure does not necessarily render expert’s opinion unreliable where temporal relationship, scientific literature establishing an association between exposure and various symptoms, plaintiffs’ medical records and history of disease, and exposure to or
observe an effect, the level is sometimes lowered once more sophisticated methods of detection are developed.
d. Benchmark dose
For regulatory toxicology, the NOEL is being replaced by a more statistically robust approach known as the benchmark dose (BMD). The BMD is determined through dose–response modeling and is defined as the exposure associated with a specified low incidence of risk, generally in the range of 1% to 10%, of a health effect, or the dose associated with a specified measure or change of a biological effect. To model the BMD, sufficient data must exist, such as a statistically or biologically significant dose-related trend in the selected end point.27
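The benchmark-dose idea can be sketched with a toy calculation. Real benchmark-dose software fits dose–response models by maximum likelihood and reports a lower statistical confidence limit (the BMDL); the sketch below, using entirely hypothetical dose-group data, simply interpolates the dose at which extra risk over background reaches a 10% benchmark response.

```python
# Minimal sketch of the benchmark-dose concept (hypothetical data).
# Actual regulatory practice fits parametric models and uses the lower
# confidence bound on the dose; this only illustrates the definition.

doses = [0.0, 10.0, 50.0, 100.0]       # mg/kg-day, hypothetical dose groups
incidence = [0.02, 0.05, 0.20, 0.40]   # fraction of animals affected

background = incidence[0]
# Extra risk over background: (P(d) - P(0)) / (1 - P(0))
extra = [(p - background) / (1.0 - background) for p in incidence]

BMR = 0.10  # benchmark response: 10% extra risk
bmd = None
for i in range(1, len(doses)):
    if extra[i - 1] <= BMR <= extra[i]:
        # Linear interpolation between the bracketing dose groups
        frac = (BMR - extra[i - 1]) / (extra[i] - extra[i - 1])
        bmd = doses[i - 1] + frac * (doses[i] - doses[i - 1])
        break

print(f"interpolated benchmark dose ~ {bmd:.1f} mg/kg-day")
```

Unlike a NOEL, which is tied to one tested dose group, the benchmark dose uses the shape of the whole dose–response curve, which is why courts and agencies have treated it as the more comprehensive measure.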
e. No-threshold model and determination of cancer risk
Certain genetic mutations, such as those leading to cancer and some inherited disorders, are believed to occur without any threshold. In theory, the cancer-causing mutation to the genetic material of the cell can be produced by any one molecule of certain chemicals. The no-threshold model led to the development of the one-hit theory of cancer risk, in which each molecule of a cancer-causing chemical has some finite possibility of producing the mutation that leads to cancer. (See Figure 1 for an idealized comparison of a no-threshold and threshold dose–response.) This risk is very small, because it is unlikely that any one molecule of a potentially cancer-causing agent will reach that one particular spot in a specific cell and result in the change that then eludes the body’s defenses and leads to a clinical case of cancer. However, the risk is not zero. The same model also can be used to predict the risk of inheritable mutational events.28
the presence of other disease-causing factors were all considered). See also DiPirro v. Bondo Corp., 62 Cal. Rptr. 3d 722, 750 (Cal. Ct. App. 2007) (judgment for the maker of auto touchup paint based on finding that there was substantial evidence in the record to show that the level of a particular toxin [toluene] present in the paint fell 1000 times below the NOEL of that toxin and therefore no warning label needed on paint can).
27. See S. Sand et al., The Current State of Knowledge on the Use of the Benchmark Dose Concept in Risk Assessment, 28 J. Appl. Toxicol. 405–21 (2008); W. Slob et al., A Statistical Evaluation of Toxicity Study Designs for the Estimation of the Benchmark Dose in Continuous Endpoints, 84 Toxicol. Sci. 167–85 (2005). Courts also recognize the benchmark dose. See, e.g., Am. Forest & Paper Ass’n Inc. v. EPA, 294 F.3d 113, 121 (D.C. Cir. 2002) (EPA’s use of benchmark dose takes into account comprehensive dose–response information unlike NOEL and thus its use was not arbitrary in determining that methanol should remain on the list of hazardous air pollutants); California v. Tri-Union Seafoods, LLC, 2006 WL 1544384 (Cal. Super. Ct. May 11, 2006) (benchmark dose should not be equated with LOEL (lowest observable effect level) and thus toxicologist’s testimony regarding methylmercury in tuna was unreliable for purposes of California’s Proposition 65).
28. For further discussion of the no-threshold model of carcinogenesis, see James E. Klaunig & Lisa M. Kamendulis, Chemical Carcinogens, in Casarett and Doull’s Toxicology: The Basic Science of Poisons, supra note 3, at 329. But see V.P. Bond et al., Current Misinterpretations of the Linear No-Threshold
Figure 1. Idealized comparison of a no-threshold and threshold dose–response relationship.
Hypothesis, 70 Health Physics 877 (1996); Marvin Goldman, Cancer Risk of Low-Level Exposure, 271 Science 1821 (1996).
Although the one-hit model explains the response to most carcinogens, there is accumulating evidence that for certain cancers there is in fact a multistage process and that some cancer-causing agents, so-called epigenetic or nongenotoxic agents, act through nonmutational processes. See Committee on Risk Assessment Methodology, National Research Council, Issues in Risk Assessment 34–35, 187, 198–201 (1993). For example, the multistage cancer process may explain the carcinogenicity of benzo[a]pyrene (produced by the combustion of hydrocarbons such as oil) and chlordane (a termite pesticide). Asbestos, dioxin, and estradiol, in contrast, are believed to exert their carcinogenic effects through nonmutational responses. The appropriate mathematical model to use to depict the dose–response relationship for such carcinogens is still a matter of debate. Id. at 197–201. Proposals have been made to merge cancer and noncancer risk assessment models. Committee on Improving Risk Analysis Approaches Used by the U.S. EPA, National Research Council, Toward a Unified Approach to Dose–Response Assessment 127–87 (2009).
Courts continue to grapple with the no-threshold model. See, e.g., In re W.R. Grace & Co. 355 B.R. 462, 476 (Bankr. D. Del. 2006) (the “no threshold model…flies in the face of the toxicological law of dose-response…doesn’t satisfy Daubert, and doesn’t stand up to scientific scrutiny”); Cano v. Everest Minerals Corp., 362 F. Supp. 2d 814, 853–54 (W.D. Tex. 2005) (even accepting the linear, no-threshold model for uranium mining and cancer, it is not enough to show exposure, you must show causation as well). Where administrative rulemaking is the issue, the no-threshold model has been accepted by some courts. See, e.g., Coalition for Reasonable Regulation of Naturally
f. Maximum tolerated dose and chronic toxicity tests
Another type of study uses different doses of a chemical agent to establish over a 90-day period what is known as the maximum tolerated dose (MTD) (the highest dose that does not cause significant overt toxicity). The MTD is important because it enables researchers to calculate the dose of a chemical to which an animal can be exposed without reducing its lifespan, thus permitting the evaluation of the chronic effects of exposure.29 These chronic studies are designed to last the lifetime of the species.
Chronic toxicity tests evaluate carcinogenicity or other types of toxic effects. Federal regulatory agencies frequently require carcinogenicity studies on both sexes of two species, usually rats and mice. A pathological evaluation is done on the tissues of animals that died during the study and those that are sacrificed at the conclusion of the study.
The rationale for using the MTD in chronic toxicity tests, such as carcinogenicity bioassays, often is misunderstood. It is preferable to use realistic doses of carcinogens in all animal studies. However, this leads to a loss of statistical power, thereby limiting the ability of the test to detect carcinogens or other toxic compounds. Consider the situation in which a realistic dose of a chemical causes a tumor in 1 in 100 laboratory animals. If the lifetime background incidence of tumors in animals without exposure to the chemical is 6 in 100, a toxicological test involving 100 control animals and 100 exposed animals fed the realistic dose would be expected to reveal 6 control animals and 7 exposed animals with the cancer. This difference is too small to be recognized as statistically significant. However, if the study started with 10 times the realistic dose, the researcher would expect to get 10 additional cases for a total of 16 cases in the exposed group and 6 cases in the control group, a significant difference that is unlikely to be overlooked.
Unfortunately, even this example does not demonstrate the difficulties of determining risk. Regulators are responding to public concern about cancer by regulating risks often as low as 1 in 1,000,000—not 1 in 100, as in the example given above. To test risks of 1 in 1,000,000, a researcher would have to either increase the lifetime dose from 10 times to 100,000 times the realistic dose or
Occurring Substances v. Cal. Air Res. Bd., 19 Cal. Rptr. 3d 635, 641 (Cal. Ct. App. 2004) (use of the no-threshold model to establish no safe level of asbestos exposure by regulatory agency upheld).
29. Even the determination of the MTD can be fraught with controversy. See, e.g., Simpson v. Young, 854 F.2d 1429, 1431 (D.C. Cir. 1988) (petitioners unsuccessfully argued that FDA improperly certified color additive Blue No. 2 dye as safe because researchers failed to administer the MTD to research animals, as required by FDA protocols); Valentine v. PPG Indus., Inc., 821 N.E.2d 580, 607–08 (Ohio Ct. App. 2004) (summary judgment for defendant upheld based in part on expert’s observation that “there is no reliable or reproducible epidemiological evidence that shows that chemicals capable of causing brain tumors in animals at maximum tolerated doses over a lifetime can cause brain tumors in humans. The biological plausibility of those chemicals causing brain tumors in humans is lacking.”).
See L.R. Rhomberg et al., Issues in the Design and Interpretation of Chronic Toxicity and Carcinogenicity Studies in Rodents: Approaches to Dose Selection, 37 Crit. Rev. Toxicol. 729–837 (2007).
expand the number of animals under study into the millions. However, increases of this magnitude are beyond the world’s animal testing capabilities and are also prohibitively expensive. Inevitably, then, animal studies preserve statistical power by using high doses, at the cost of having to extrapolate from those higher doses to the lower doses that humans actually encounter.
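This tradeoff can be quantified with the standard two-proportion sample-size approximation. In the sketch below, the 6% background incidence comes from the text’s example; the assumption that a 100,000-fold dose increase converts a 1-in-1,000,000 excess risk into a 0.1 excess risk is a hypothetical low-dose-linearity assumption for illustration, not an empirical claim.

```python
from statistics import NormalDist

def animals_per_group(p_control, p_exposed, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a one-sided comparison of two
    tumor proportions (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_exposed * (1 - p_exposed)
    return z * z * variance / (p_exposed - p_control) ** 2

BACKGROUND = 0.06  # the 6-in-100 background incidence from the text's example

# Detecting a 1-in-1,000,000 excess risk directly: astronomically many animals.
n_realistic = animals_per_group(BACKGROUND, BACKGROUND + 1e-6)

# Hypothetical low-dose linearity: 100,000 times the dose yields a 0.1 excess
# risk, which a study of ordinary size can detect.
n_high_dose = animals_per_group(BACKGROUND, BACKGROUND + 0.1)
```

Under these assumptions the realistic-dose design requires group sizes beyond any feasible study, while the high-dose design needs roughly a hundred animals per group, which is why bioassays are run at elevated doses and extrapolated downward.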
Accordingly, proffered toxicological expert opinion on potentially cancer-causing chemicals almost always is based on a review of research studies that extrapolate from animal experiments involving doses significantly higher than that to which humans are exposed.30 Such extrapolation is accepted in the regulatory arena. However, in toxic tort cases, experts often use additional background information31 to offer opinions about disease causation and risk.32
In vitro research concerns the effects of a chemical on human or animal cells, bacteria, yeast, isolated tissues, or embryos. Thousands of in vitro toxicological tests have been described in the scientific literature. Many tests are for mutagenesis in bacterial or mammalian systems. There are short-term in vitro tests for just about every physiological response and every organ system, such as perfusion tests and DNA studies. Relatively few of these tests have been validated by replication in many different laboratories or by comparison with outcomes in animal studies to determine if they are predictive of whole animal or human toxicity.33 However, these tests, and their validation, are becoming increasingly important.
30. See, e.g., International Agency for Research on Cancer, World Health Organization, Preamble, in 63 IARC Monographs on the Evaluation of Carcinogenic Risks to Humans 9, 17 (1995); James Huff, Chemicals and Cancer in Humans: First Evidence in Experimental Animals, 100 Envtl. Health Persp. 201, 204 (1993); Joseph V. Rodricks, Evaluating Disease Causation in Humans Exposed to Toxic Substances, 14 J.L. & Pol’y 39 (2006).
31. Central to offering an expert opinion on specific causation is a comparison of the estimated risk with the likelihood of the adverse event if the individual had not suffered the alleged exposure. This will differ depending on factors specific to that individual, including age, gender, medical history, and competing exposures.
Researchers have developed numerous biomathematical formulas to provide statistical bases for extrapolation from animal data to human exposure. See generally S.C. Gad, Statistics and Experimental Design for Toxicologists (4th ed. 2005). See also infra Sections III, IV.
32. Policy arguments concerning extrapolation from high doses to low doses are explored in Troyen A. Brennan & Robert F. Carter, Legal and Scientific Probability of Causation of Cancer and Other Environmental Disease in Individuals, 10 J. Health Pol., Pol’y & L. 33 (1985). For a general discussion of dose issues in toxic torts, see also Bernard D. Goldstein, Toxic Torts: The Devil Is in the Dose, 16 J.L. & Pol’y 551–85 (2008).
33. See R. Julian Preston & George R. Hoffman, Genetic Toxicology, in Casarett and Doull’s Toxicology: The Basic Science of Poisons, supra note 3, at 381, 391–404. Use of in vitro data for evaluating human mutagenicity and teratogenicity is described in John M. Rogers & Robert J. Kavlock, Developmental Toxicology, in Casarett and Doull’s Toxicology: The Basic Science of Poisons, supra note 3, at 415, 436–40. For a critique of expert testimony using in vitro data, see Wade-Greaux v. Whitehall Laboratories, Inc., 874 F. Supp. 1441, 1480 (D.V.I. 1994), aff’d, 46 F.3d 1120 (3d Cir. 1994); In re Welding Fume Prods. Liab. Litig., 2006 WL 4507859, at *13 (N.D. Ohio Aug. 8, 2005)
The criteria of reliability for an in vitro test include the following: (1) whether the test has been validated through a published protocol in which many laboratories applied the same in vitro method to a series of unknown compounds prepared by a reputable organization (such as the National Institutes of Health (NIH) or the International Agency for Research on Cancer (IARC)) to determine whether the test consistently and accurately measures toxicity, (2) whether the test has been adopted by a U.S. or international regulatory body, and (3) whether the test is predictive of in vivo outcomes related to the same cell or target organ system.
Two types of extrapolation must be considered: from animal data to humans and from higher doses to lower doses.34 In qualitative extrapolation, one can usually rely on the fact that a compound causing an effect in one mammalian species will cause it in another species. This is a basic principle of toxicology and pharmacology. If a heavy metal, such as mercury, causes kidney toxicity in laboratory animals, it is highly likely to do so at some dose in humans. However, the dose at which mercury causes this effect in laboratory animals is modified by many internal factors, and the exact dose–response curve may be different from that for humans. Through the study of factors that modify the toxic effects of chemicals, including absorption, distribution, metabolism, and excretion, researchers can improve the ability to extrapolate from laboratory animals to humans and from higher to lower doses.35 The mathematical depiction of the process by which an external dose moves through various compartments in the body until it reaches the target organ is often called physiologically based pharmacokinetics or toxicokinetics.36
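Physiologically based pharmacokinetic models track an agent through many body compartments; as a bare-bones illustration of the kinetic reasoning involved, the sketch below uses a hypothetical one-compartment model with first-order elimination. All parameter values are invented for illustration.

```python
import math

def concentration(dose_mg, volume_l, k_elim_per_h, t_h):
    """Blood concentration (mg/L) under a one-compartment model with
    instantaneous absorption and first-order elimination."""
    return (dose_mg / volume_l) * math.exp(-k_elim_per_h * t_h)

# Invented illustrative parameters: 100 mg dose, 40 L distribution volume,
# elimination rate constant 0.1 per hour (half-life of about 6.9 hours).
K_ELIM = 0.1
half_life = math.log(2) / K_ELIM

c_start = concentration(100, 40, K_ELIM, 0)          # 2.5 mg/L
c_half = concentration(100, 40, K_ELIM, half_life)   # 1.25 mg/L
```

Species differences enter through parameters such as distribution volume and elimination rate, which is one reason data on absorption, distribution, metabolism, and excretion improve cross-species extrapolation.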
Extrapolation from studies in nonmammalian species to humans is much more difficult but can be done if there is sufficient information on similarities in absorp-
(toxicologist qualified to testify on the relationship between welding fumes and Parkinson’s disease, including epidemiology and animal and in vitro toxicology studies).
34. See J.V. Rodricks et al., Quantitative Extrapolations in Toxicology, in Principles and Methods of Toxicology 365 (A. Wallace Hayes ed., 5th ed. 2008).
35. For example, benzene undergoes a complex metabolic sequence that results in toxicity to the bone marrow in all species, including humans. Robert Snyder, Xenobiotic Metabolism and the Mechanism(s) of Benzene Toxicity, 36 Drug Metab. Rev. 531, 547 (2004).
The exact metabolites responsible for this bone marrow toxicity are the subject of much interest but remain unknown. Mice are more susceptible to benzene than are rats. If researchers could determine the differences between mice and rats in their metabolism of benzene, they would have a useful clue about which portion of the metabolic scheme is responsible for benzene toxicity to the bone marrow. See, e.g., Lois D. Lehman-McKeeman, Absorption, Distribution, and Excretion of Toxicants, in Casarett and Doull’s Toxicology: The Basic Science of Poisons, supra note 3, at 131; Andrew Parkinson & Brian W. Ogilvie, Biotransformation of Xenobiotics, in Casarett and Doull’s Toxicology: The Basic Science of Poisons, supra note 3, at 161.
36. For an analysis of methods used to extrapolate from animal toxicity data to human health effects, see references cited in notes 21 and 22, supra.
tion, distribution, metabolism, and excretion. Advances in computational toxicology have increased the ability of toxicologists to make such extrapolations.37 Quantitative determinations of human toxicity based on in vitro studies usually are not considered appropriate. As discussed in Section I.F, in vitro or animal data for elucidating the mechanisms of toxicity are more persuasive when positive human epidemiological data or toxicological information also exists.38
Toxicological expert opinion also relies on formal safety and risk assessments. Safety assessment is the area of toxicology relating to the testing of chemicals and drugs for toxicity. It is a relatively formal approach in which the potential for toxicity of a chemical is tested in vivo or in vitro using standardized techniques. The protocols for such studies usually are developed through scientific consensus and are subject to oversight by governmental regulators or other watchdog groups.
After a number of bad experiences, including outright fraud, government agencies have imposed codes on laboratories involved in safety assessment, including industrial, contract, and in-house laboratories.39 Known as good laboratory practices (GLPs), these codes govern many aspects of laboratory standards, including such details as the number of animals per cage, dose and chemical verification, and the handling of tissue specimens. GLPs are remarkably similar across agencies, but the tests called for differ depending on the mission. For example, there are major differences between FDA’s and EPA’s required procedures for testing drugs
37. See R.J. Kavlock et al., Computational Toxicology: A State of the Science Mini Review, 103 Toxicological Sci. 14–27 (2008). See also D. Malacarne et al., Relationship Between Molecular Connectivity and Carcinogenic Activity: A Confirmation with a New Software Program Based on Graph Theory, 101 Envtl. Health Persp. 331–42 (1993), for validation of the use of a computational structure-based approach to carcinogenicity originally proposed by H.S. Rosenkranz & G. Klopman, Structural Basis of Carcinogenicity in Rodents of Genotoxicants and Non-genotoxicants, 228 Mutat. Res. 105–24 (1990). Structure–activity relationships have also been used to extend the threshold concept in toxicology to look at low-dose exposures to agents present in foods or cosmetics. See R. Kroes et al., Structure-Based Thresholds of Toxicological Concern (TTC): Guidance for Application to Substances Present at Low Levels in the Diet, 42 Food Chem. Toxicol. 65–83 (2004).
38. An example of toxicological information in humans that is pertinent to extrapolation is the finding in human urine of a carcinogenic metabolite found in studies of the same compound in laboratory animals. See, e.g., Goewey v. United States, 886 F. Supp. 1268, 1280–81 (D.S.C. 1995) (extrapolation of neurotoxic effects from chickens to humans unwarranted without human confirmation).
39. A dramatic case of fraud involving a toxicology laboratory that performed tests to assess the safety of consumer products is described in United States v. Keplinger, 776 F.2d 678 (7th Cir. 1985). Keplinger and the other defendants in this case were toxicologists who were convicted of falsifying data on product safety by underreporting animal morbidity and mortality and omitting negative data and conclusions from their reports. For further discussion of reviewing animal studies in light of the FDA’s Good Laboratory Practice guidelines, see Eli Lilly & Co. v. Zenith Goldline Pharm., Inc., 364 F. Supp. 2d 820, 860 (S.D. Ind. 2005).
and environmental chemicals.40 FDA requires and specifies both efficacy and safety testing of drugs in humans and animals. Carefully controlled clinical trials using doses within the expected therapeutic range are required for premarket testing of drugs because exposures to prescription drugs are carefully controlled and should not exceed specified ranges or uses. However, for environmental chemicals and agents, no premarket testing in humans is required by EPA. The European Union’s new Regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), by contrast, requires extensive testing of new chemicals and of chemicals already in commerce.41 Moreover, because exposures are less predictable, doses usually are given in a wider range in animal tests for nonpharmaceutical agents.42
Because exposures to environmental chemicals may continue over a lifetime and affect both young and old, test designs called lifetime bioassays have been developed in which relatively high doses are given to experimental animals. The interpretation of results requires extrapolation from animals to humans, from high to low doses, and from short exposures to multiyear estimates. It must be emphasized that less than 1% of the 60,000 to 75,000 chemicals in commerce have been subjected to a full safety assessment, and there are significant toxicological data on
40. See, e.g., 40 C.F.R. Parts 160, 792 (1993); Lu, supra note 22, at 89. There is a major difference between the information needed to establish a regulatory standard or tolerance, and that needed to establish causation for clinical or tort purposes.
41. For comparison of Toxic Substances Control Act (TSCA), 15 U.S.C. §§ 2601 et seq. (1978) and REACH, see E. Donald Elliott, Trying to Fix TSCA § 6: Lessons from REACH, Proposition 65, and the Clean Air Act, available at http://www.ucis.pitt.edu/euce/events/policyconf/07/PDFs/Elliott.pdf. For issues related to the intentional testing of environmental chemicals in humans, see Committee on the Use of Third Party Toxicity Research with Human Research Participation, National Research Council, Intentional Human Dosing Studies for EPA Regulatory Purposes: Scientific and Ethical Issues (2004).
42. It must be appreciated that the development of a new drug inherently requires searching for an agent that at useful doses has a biological effect (e.g., decreasing blood pressure), whereas those developing a new chemical for consumer use (e.g., a house paint) hope that at usual doses no biological effects will occur. There are other compounds, such as pesticides and antibacterial agents, for which a biological effect is desired, but it is intended that at usual doses humans will not be affected. These different expectations are part of the rationale for the differences in testing information available for assessing toxicological effects. Under FDA rules, approval of a new drug usually will require extensive animal and human testing, including a randomized double-blind clinical trial for efficacy and toxicity. In contrast, under TSCA, the only requirement before a new chemical can be marketed is that a premanufacturing notice be filed with EPA, including any toxicity data in the company’s possession. EPA reviews this information, along with structure–activity relationship modeling, in order to determine whether any restrictions on release should be imposed. For existing chemicals, EPA may require companies to undertake animal and in vitro tests if the chemical may present an unreasonable risk to health. The lack of toxicity data for most chemicals in commerce has led EPA to propose methods of evaluation using in vitro toxicity pathway testing, followed by whole-animal testing where warranted. See Committee on Toxicity Testing and Assessment of Environmental Agents, National Research Council, Toxicity Testing in the 21st Century: A Vision and a Strategy (2007); U.S. Environmental Protection Agency, Strategic Plan for Evaluating the Toxicity of Chemicals (March 2009), available at http://www.epa.gov/spc/toxicitytesting.
only 10% to 20% of them. Under the current U.S. and international approaches to testing chemicals with high production volume, and with the advent of the REACH legislation, the extent of toxicological information is expanding rapidly.43
Risk assessment is an approach increasingly used by regulatory agencies to estimate and compare the risks of hazardous chemicals and to assign priority for avoiding their adverse effects.44 The National Academy of Sciences defines four components of risk assessment: hazard identification, dose–response estimation, exposure assessment, and risk characterization.45
Risk assessment is not an exact science. It should be viewed as a useful framework to organize and synthesize information and to provide estimates on which policymaking can be based. In recent years, codification of the methodology used to assess risk has increased confidence that the process can be reasonably free of bias; however, significant controversy remains, particularly when actual data are limited and generally conservative default assumptions are used.46
Although risk assessment information about a chemical can be somewhat useful in a toxic tort case, at least in terms of setting reasonable boundaries regarding the likelihood of causation, the impetus for the development of risk assessment has been the regulatory process, which has different goals.47 Because of their
43. See John S. Applegate, The Perils of Unreasonable Risk: Information, Regulatory Policy, and Toxic Substances Control, 91 Colum. L. Rev. 261, 264–66 (1991) for a discussion of REACH and its potential impact on the availability of toxicological and risk information. See Sven O. Hansson & Christina Rudén, Priority Setting in the REACH System, 90 Toxicological Sci. 304–08 (2005), for a discussion of the toxicological needs for REACH and its reliance on exposure.
44. The use of risk assessment by regulatory agencies was spurred by the Supreme Court’s decision in Industrial Union Dep’t, AFL-CIO v. American Petroleum Institute, 448 U.S. 607 (1980). A plurality of the court overturned the Occupational Safety and Health Administration’s (OSHA) attempt to regulate benzene based on the intrinsic hazard of benzene being a human carcinogen. Instead, by requiring a risk assessment, the inclusion of exposure assessment and dose–response evaluation became a customary part of regulatory assessment. See John S. Applegate, supra note 43.
45. See generally National Research Council, Risk Assessment in the Federal Government: Managing the Process (1983); Bernard D. Goldstein, Risk Assessment and the Interface Between Science and Law, 14 Colum. J. Envtl. L. 343 (1989). Recently, a National Academy of Sciences panel has discussed potential approaches to updating the risk paradigm. See Committee on Improving Risk Analysis Approaches Used by the U.S. EPA, supra note 28.
46. An example of conservative default assumptions can be found in Superfund risk assessment. EPA has determined that Superfund sites should be cleaned up to reduce cancer risk from 1 in 10,000 to 1 in 1,000,000. A number of assumptions can go into this calculation, including conservative assumptions about intake, exposure frequency and duration, and cancer-potency factors for the chemicals at the site. See, e.g., Robert H. Harris & David E. Burmaster, Restoring Science to Superfund Risk Assessment, 6 Toxics L. Rep. 1318 (1992).
47. See Committee on Improving Risk Analysis Approaches Used by the U.S. EPA, supra note 28. See also Rhodes v. E.I. du Pont de Nemours & Co., 253 F.R.D. 365, 377–78 (S.D. W. Va. 2008) (putative class-action plaintiffs alleging that contamination of their drinking water with industrial perfluorooctanoic acid entitled them to medical monitoring could not rely upon regulatory risk assessment that does not provide the requisite reasonable certainty required to show a medical monitoring injury). Risk assessment also has come under heavy criticism from those who prefer the precautionary
use of appropriately prudent assumptions in areas of uncertainty and their use of default assumptions when there are limited data, risk assessments often intentionally encompass the upper range of possible risks.48 An additional issue, particularly relevant to cancer risk, is that standards based on risk assessment often are set to avoid the risk posed by lifetime exposure at the regulated level. Exposure to levels exceeding this standard for a small fraction of a lifetime does not mean that the overall lifetime risk of regulatory concern has been exceeded.49
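The arithmetic behind this lifetime-exposure point is simple but worth making explicit. The sketch below assumes the conventional regulatory lifetime of 70 years at 24 hours a day; the two-week episode is a hypothetical example.

```python
# Conventional regulatory lifetime assumed by many carcinogen standards.
LIFETIME_HOURS = 70 * 365 * 24  # 613,200 hours

def lifetime_fraction(exposure_hours):
    """Fraction of the regulatory lifetime that an exposure episode represents."""
    return exposure_hours / LIFETIME_HOURS

# Even a two-week, around-the-clock exceedance is a tiny slice of a lifetime.
two_weeks = lifetime_fraction(14 * 24)  # roughly 0.0005 (about 0.05%)
```

On this arithmetic, a brief exceedance of a lifetime-based standard contributes only a minute fraction of the exposure on which the standard's risk estimate rests.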
Risk assessment as practiced by government agencies involved in regulating exposure to environmental chemicals is highly dependent upon the science of toxicology and on the information derived from toxicological studies. EPA, FDA, OSHA, the Consumer Product Safety Commission, and other international (e.g., the World Trade Organization), national, and state agencies use risk assessment as a means to protect workers or the public from adverse effects.50 Acceptable risk levels, for example, 1 in 1,000 to 1 in 1,000,000, are usually well below what
principle as an alternative. For advocacy of the precautionary principle, see Joel A. Tickner, Precautionary Principle Encourages Policies That Protect Human Health and the Environment in the Face of Uncertain Risks, 117 Pub. Health Rep. 493–97 (2002). Although variously defined, the precautionary principle in many ways is a hazard-based approach.
48. It is also claimed that standard risk assessment will underestimate true risks, particularly for sensitive populations exposed to multiple stressors, an issue of particular pertinence to discussions of environmental justice. Committee on Environmental Justice, Institute of Medicine, Toward Environmental Justice: Research, Education, and Health Policy Needs (1999). The EPA has been developing formal guidance for cumulative risk assessment, which has been defined as “the combined threats from exposure via all relevant routes to multiple stressors including biological, chemical, physical, and psychosocial entities.” Michael A. Callahan & Ken Sexton, If Cumulative Risk Assessment Is the Answer, What Is the Question? Envtl. Health Persp. 799–806 (2007). See also International Life Sciences Institute, A Framework for Cumulative Risk Assessment Workshop Report (1999). A related issue is aggregate risk assessment, which focuses on exposure to a single agent through multiple routes. For example, swimming in water containing a volatile organic contaminant is likely to lead to exposure through the skin, through inhalation of the contaminant off-gassing just above the water surface, and through swallowing water. For a discussion of aggregate risk assessment, see International Life Science Institute, Aggregate Exposure Assessment Workshop Report (1998). For a study of a child’s indoor exposure through different routes to a pesticide, see V.G. Zartarian et al., A Modeling Framework for Estimating Children’s Residential Exposure and Dose to Chlorpyrifos Via Dermal Residue Contact and Nondietary Ingestion, 108 Envtl. Health Persp. 505–14 (2000).
49. A public health standard to protect against the lifetime risk of inhaling a known carcinogen will usually be based on lifetime exposure calculations of 24 hours a day, every day, for 70 years. This is more than 25,000 days and 600,000 hours. Exceeding this standard for a few hours would presumably have little impact on cancer risk. In contrast, for a short-term standard set to avoid a threshold-based risk, exceeding the standard for even a short time may make a major difference, for example, an asthma attack caused by being outdoors on a day when the ozone standard is exceeded.
50. Pharmaceuticals intended for human use are an exception in that a tradeoff between desired and adverse effects may be acceptable, and human data are available prior to, and as a result of, the marketing of the agent.
can be measured through epidemiological study. Inevitably, this means that risk assessment is based solely on toxicological data—or, if epidemiological findings of an adverse effect are observed, then toxicological reasoning must be used to extrapolate to the appropriate lower dose standard aimed at protecting the public.
The four-part risk paradigm is heavily based on toxicological precepts. Hazard identification reflects the toxicological “law” of specificity of effects, and dose–response assessment is based upon “the dose makes the poison.” The hazard identification process often uses “weight of evidence” approaches in which the toxicological, mechanistic, and epidemiological data are rigorously assessed to form a judgment regarding the likelihood that the agent produces a specific effect.51 Establishing the appropriate dose–response curve, whether threshold or “one-hit,” is an exercise in toxicological reasoning. Even for those chemicals known to be carcinogens, a threshold model is appropriate if the toxicological mechanism of action can be demonstrated to depend upon a threshold. Exposure assessment requires knowledge of specific toxicological dynamics; for example, the impact on the lung of an air pollutant varies by factors such as inhalation rate per unit body mass, which is affected by exercise and by age; by the size of a particle or the solubility of a gas, both of which affect the depth of penetration into the more sensitive parts of the airways; by the competence of the usual airway defense mechanisms, such as mucus flow and macrophage function; and by the ability of the lung to metabolize the agent.52
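The practical weight of the threshold-versus-no-threshold choice can be seen in a minimal sketch: at low doses the two models give categorically different answers. The slope and threshold values below are arbitrary illustrations, not real potency estimates.

```python
def linear_no_threshold(dose, slope):
    """Linear no-threshold (LNT) model: any dose carries proportional risk."""
    return slope * dose

def threshold_model(dose, slope, threshold):
    """Threshold model: no excess risk until the threshold dose is exceeded."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

# Arbitrary illustrative parameters: slope 0.01 per unit dose, threshold 1.0.
risk_lnt = linear_no_threshold(0.5, 0.01)         # nonzero at any dose
risk_threshold = threshold_model(0.5, 0.01, 1.0)  # zero below the threshold
```

Below the threshold, the threshold model predicts no excess risk at all, while the LNT model predicts a small but nonzero risk, which is why the mechanistic question of whether a threshold exists can dominate a low-dose dispute.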
The biological, chemical, and physical phenomena that are the basis of life are astounding in their complexity. As a result, human subcellular, cellular, and organ function are both delicately balanced and highly robust. Small changes caused by external chemical and physical agents can have major effects; yet, through the millennia, evolutionary pressures have led to the emergence of safety mechanisms that defend against adverse environmental stresses.
The specialization that is a hallmark of organ development in vertebrates inherently leads to diversity in the underlying processes that are the basis of organ function. Certain chemicals poison virtually all cells by affecting a basic biological process essential to life. For example, cyanide interferes with the conversion of oxygen to energy in a subcellular component known as mitochondria.53 Other
51. See Section I.F for further discussion of weight-of-evidence approaches to potential human carcinogens.
52. Some toxic agents pass through the lung without producing any direct effects on this organ. For example, inhaled carbon monoxide produces its toxicity in essence by being treated by the body as if it were oxygen. Carbon monoxide readily combines with the oxygen-binding site of hemoglobin, the molecule in red blood cells that is responsible for transporting oxygen from the lung to the tissues. In doing so, it blocks the effective transport and tissue utilization of oxygen.
53. Note that the diffuse toxicity of cyanide also reflects its ability to spread widely in the body. Certain mitochondrial poisons primarily affect the brain and active muscles, including the heart, which
chemical agents interfere selectively with an organ-specific process. For example, the organophosphates, a family of compounds that includes both many pesticides and the nerve gases, interact specifically with the specialized transmission of impulses between nerve cells, a process pertinent primarily to the nervous system. Table 1 provides arbitrarily selected examples of toxicological end points and agents of concern; it is not meant to be inclusive or exhaustive.
Despite this specialization, there are pathological processes common to diseases affecting many different organs. For example, chronic inflammation of the skin leads to fiber formation that is recognized as scarring. Similarly, cirrhosis of the liver can result from fibrogenic processes caused by repetitive inflammation of the liver, such as from the overuse of ethanol, and fibrosis of the lung is an important pathological process resulting from asbestos, silica, and other agents.54 The potential for endocrine disruption by chemicals, particularly those that persist within the body, has become an increasing concern. Many of these persistent agents belong to families of chemically similar compounds, such as dioxins or PCBs, that may differ in their effect. Particularly challenging to standard toxicological approaches are agents that react with different receptors present on the surface or internal components of the cell. These receptors often belong to complex families of related cellular components that are continually interacting with the broad range of hormones produced by our bodies.55 The intricate dynamic processes of normal endocrine activity include feedback loops that allow cyclic variation, such as in the menstrual cycle or in the variation of hormone and receptor levels that are linked to normal functions such as sleeping and sexual activity. These complex normal “up and down” variations produce conceptual difficulties when attempting to extrapolate the results from model systems to the functioning human.56
are particularly oxygen dependent. Others, unable to penetrate the blood-brain barrier, will primarily affect peripheral muscle including the heart.
54. Lung fibrosis is a key pathological finding in a group of diseases known as pneumoconiosis that includes coal miners’ black lung disease, silicosis, asbestosis, and other conditions usually caused by occupational exposures.
55. As a simplification, agent–receptor interactions often are described as a key in a lock, with the key needing to be able to both fit into the lock and turn the mechanism. An example from the nervous system is the use in treating a heroin overdose of another opiate that has a much higher affinity for the receptor site but produces little effect once bound. When given to a normal person, this second opiate would have a mild depressant effect, but it can reverse a near fatal overdose of heroin by displacing the heroin from the receptor site. Thus the directionality of opiate effect depends upon the interaction of the components of the mixture. This interaction is even more complex when dealing with estrogenic agents that are naturally occurring as well as made within the body at different levels in response to different external and internal stimuli and at different time intervals.
56. The complexity of the interaction of a mixture of dioxins with receptors governing the endocrine system can be contrasted with that of the reaction of carbon monoxide with the hemoglobin oxygen receptor discussed in note 52. The latter is unidirectional in that any additional carbon monoxide will interfere with oxygen delivery, of which there cannot be too much under normal physiological conditions.
Table 1. Sample of Selected Toxicological End Points and Examples of Agents of Concern in Humansa
|Organ System||Examples of End Points||Examples of Agents of Concern|
allergic contact dermatitis
nickel, poison ivy, cutting oils
polycyclic aromatic hydrocarbons
nonspecific irritation (reactive airway disease)
formaldehyde, acrolein, ozone
chronic obstructive pulmonary disease
fibrosis, pneumoconiosis cancer
silica, mineral dusts, cotton dust cigarette smoke, arsenic, asbestos, nickel
|Blood and the immune system||anemia||
arsine, lead, methyldopa
nitrites, aniline dyes, dapsone
benzene, radiation, chemotherapeutic agents
secondary lupus erythematosus
benzene, radiation, chemotherapeutic agents
|Liver and gastrointestinal tract||hepatic damage (hepatitis)||
acetaminophen, ethanol, carbon tetrachloride, vitamin A
|cancer||aflatoxin, vinyl chloride|
|Urinary tract||kidney toxicity||
ethylene and diethylene glycols, lead, melamine, aminoglycoside antibiotics
|bladder cancer||aromatic amines|
nervous system toxicity
cholinesterase inhibitors, mercury, lead, n-hexane, bacterial toxins (botulinum, tetanus)
|Reproductive and developmental toxicity||fetal malformations||thalidomide, ethanol|
|Organ System||Examples of End Points||Examples of Agents of Concern|
|Endocrine system||thyroid toxicity||radioactive iodine, perchlorate|
|Cardiovascular system||heart toxicity||anthracyclines, cobalt|
|high blood pressure||lead|
|arrhythmias||plant glycosides (e.g. digitalis)|
^a This table presents only examples of toxicological end points and examples of agents of concern in humans and is provided to help illustrate the variety of toxic agents and end points. It is not an exhaustive or inclusive list of organs, end points, or agents. Absence from this list does not indicate a relative lack of evidence for a causal relation as to any agent of concern.
The processes that result in the causation of cancer are also of particular interest to the public, to litigators, and to regulators. A common denominator for the various diseases that fall under the heading of cancer is uncontrolled cellular growth, usually reflecting the failure of the normal progression of precursor cells to maturation and cell death. Central to the mechanism of cancer causation is the production of a genetic change that leads a precursor cell to no longer conform to usual processes that control cell growth. In virtually all cancers, the overgrowth of cells can be traced to a single mutation, such that cancer cells are a clone of the one mutated precursor cell.57 The understanding of the relationship between mutation and cancer led to some of the first toxicological tests to determine whether an external agent could cause cancer. Such tests have grown in sophistication because of the advances in molecular biology and computational toxicology that have occurred concomitantly with an increased understanding of the variety of potential pathways that lead to mutagenesis.58
Toxicological testing for chemical carcinogens ranges from relatively simple studies to determine whether the substance is capable of producing bacterial mutations to observation of cancer incidence as a result of long-term administration of the substance to laboratory animals. Between these two extremes are a multiplicity of tests that build upon the understanding of the mechanism of cancer causation. In vitro or in vivo tests may focus on the evidence of effects in DNA, such as the presence of adducts of the chemical or its metabolites bound to the DNA molecule or the cross-linking of the DNA molecule to protein. Researchers may look for changes in the nucleus of the cell suggestive of DNA damage that could
57. There may, in fact, be multiple mutations as the initial clone of cells undergoes further transformation before or after the cancer becomes clinically manifest.
58. Committee on Toxicity Testing and Assessment of Environmental Agents, National Research Council, Toxicity Testing in the 21st Century: A Vision and a Strategy (2007).
result in mutagenesis and carcinogenesis, for example, the micronucleus test or the comet assay. Certain mutagens cause an increase in the normal exchange of nuclear material among DNA components during normal cell division, which gives rise to a test known as the “sister chromatid exchange.”59 The direct observation of chromosomes to look for specific abnormalities, known as cytogenetic analysis, is providing more information about the pathways of carcinogenesis. For cancers such as acute myelogenous leukemia, it has long been recognized that those individuals who present with recognizable chromosomal abnormalities are more likely to have been exposed to a known human chemical leukemogen such as benzene.60 But at this time there is no chromosomal abnormality that is unequivocally linked to a specific chemical or physical carcinogen.61 These and other tests provide information that can be used in evaluating whether a chemical is a potential human carcinogen.
The many tests that are pertinent to estimating whether a chemical or physical agent produces human cancer require careful evaluation. The World Health Organization’s (WHO’s) IARC and the U.S. National Toxicology Program (NTP) have formal processes to evaluate the weight of evidence that a chemical causes cancer.62 Each classifies chemicals on the basis of epidemiological evidence, toxicological findings in laboratory animals, and mechanistic considerations, and then assigns a specific category of carcinogenic potential to the individual chemical or exposure situation (e.g., employment as a painter).63 Only a small percentage of
59. All of these tests require validation regarding their relevance to predicting human carcinogenesis, as well as to their technical reproducibility. See Raffaella Corvi et al., ECVAM Retrospective Validation of In Vitro Micronucleus Test, 23 Mutagenesis 271–83 (2008), for an example of an approach to validating a short-term assay for carcinogenesis.
60. F. Mitelman et al., Chromosome Pattern, Occupation, and Clinical Features in Patients with Acute Nonlymphocytic Leukemia, 4 Cancer Genet. & Cytogenet. 197, 214 (1981).
61. See Luoping Zhang et al., The Nature of Chromosomal Aberrations Detected in Humans Exposed to Benzene, 32 Crit. Rev. Toxicol. 1–42 (2002).
62. The U.S. National Toxicology Program issues a congressionally mandated Report on Carcinogens. The 12th report is available at http://ntp.niehs.nih.gov/ntp/roc/twelfth/roc12.pdf. IARC produces its reports through a monograph series that provides detailed description of the agents or processes under consideration as well as the findings of the IARC expert working group. See the IARC Web site for a list of these monographs (http://monographs.iarc.fr/).
63. IARC uses the following classifications:
Group 1, The agent (mixture) is carcinogenic to humans;
Group 2A, The agent (mixture) is probably carcinogenic to humans;
Group 2B, The agent (mixture) is possibly carcinogenic to humans;
Group 3, The agent (mixture) is not classifiable as to its carcinogenicity to humans; and
Group 4, The agent (mixture) is probably not carcinogenic to humans.
When chemicals are assigned to discrete categories even though the strength of the evidence forms a continuum, some chemicals will inevitably fall very close to the dividing line between categories. For such chemicals, small differences in the interpretation of the evidence will lead to disagreement regarding categorization.
the total chemicals in commerce are considered to be known human carcinogens. In the past, assignment to the highest category was dependent almost totally on epidemiological evidence, although animal data and mechanistic information were also considered. In recent years, with improved understanding of the mechanism of action of chemical carcinogens, there has been increased use of mechanistic data.64 For example, higher credence is given to the likelihood that a chemical is a human carcinogen if the metabolite found to be responsible for carcinogenesis in a laboratory animal is also found in the blood or urine of humans exposed to this chemical, or if there is evidence of the same type of DNA damage in humans as there is in laboratory animals in which the agent does cause cancer.65
In recent decades, exposure assessment has developed into a scientific field with the usual trappings of journals, learned societies, and research funding processes.
64. See Vincent James Cogliano et al., Use of Mechanistic Data in IARC Evaluations, 49 Envtl. & Molecular Mutagenesis 100–09 (2008), for a discussion and for specific examples of the use of mechanistic data in evaluating carcinogens. The evolution in the approach to determining cancer causality is evident from reviewing the guidelines used to assemble the weight of evidence for causality by IARC and NTP, two of the organizations that have the lengthiest track record of responsibility for the hazard identification of carcinogens. Both have increased the weight given to mechanistic evidence in characterizing the overall strength of the total evidence used to classify the potential for a chemical or an exposure to be causal. IARC now permits classification in Group 1 when there is less than sufficient evidence in humans but sufficient evidence in animals and “strong evidence in exposed humans that the agent acts through a relevant mechanism of carcinogenicity.” Id. at 103. The criterion used by NTP for listing a chemical as a known human carcinogen in its biennial Report on Carcinogens is: “There is sufficient evidence of carcinogenicity from studies in humans,* which indicates a causal relationship between exposure to the agent, substance, or mixture, and human cancer.” The asterisk is particularly notable in that it specifies that the evidence need not be solely epidemiological: “*This evidence can include traditional cancer epidemiology studies, data from clinical studies, and/or data derived from the study of tissues or cells from humans exposed to the substance in question that can be useful for evaluating whether a relevant cancer mechanism is operating in people.” See National Toxicology Program, U.S. Dep’t of Health and Human Servs., Report on Carcinogens (12th ed. 2011), at 4, available at http://ntp.niehs.nih.gov/ntp/roc/twelfth/roc12.pdf.
EPA also considers mechanism of action in its regulatory approaches and distinguishes further between mechanism of action and mode of action. See Katherine Z. Guyton et al., Improving Prediction of Chemical Carcinogenicity by Considering Multiple Mechanisms and Applying Toxicogenomic Approaches, 681 Mutation Res. 230, 240 (2009); Katherine Z. Guyton et al., Mode of Action Frameworks: A Critical Analysis, 11 J. Toxicol. & Envtl. Health Part B 16, 31 (2008).
65. A recent example is the IARC evaluation of formaldehyde that upgraded the categorization from 2A to 1 based upon epidemiological data that were strongly supported by the finding of nasal cancer in laboratory animals and by the presence of DNA-protein cross-links in the nasal tissue of the laboratory animals and of humans inhaling formaldehyde. However, epidemiological evidence associating formaldehyde with human acute myelogenous leukemia was questioned on the basis of the lack of mechanistic evidence, including questions about how such a highly reactive agent could reach the bone marrow following inhalation. See Formaldehyde, 2-Butoxyethanol and 1-tert-Butoxypropan-2-ol, in 88 IARC Monographs on the Evaluation of Carcinogenic Risks to Humans (2006).
Exposure assessment methodologies include mathematical models predicting exposure resulting from an emission source, which might be a long distance upwind; chemical or physical measurements of media such as air, food, and water; and biological monitoring within humans, including measurements of blood and urine specimens. An exposure assessment should also look for competing exposures. In this continuum of exposure metrics, the closer to the human body, the greater the overlap with toxicology.66
Exposure assessment is central to epidemiology as well. Many of the causal associations between chemicals and human disease have been developed from epidemiological studies relating a workplace chemical to an increased risk of the specific disease in cohorts of workers, often with only a qualitative assessment of exposure. An improved quantitative understanding of such exposures enhances the likelihood of observing causal relations.67 It also can provide the information needed by the expert toxicologist to opine on the likelihood that a specific exposure was responsible for an adverse outcome.
Epidemiology is the study of the incidence and distribution of disease in human populations. Clearly, both epidemiology and toxicology have much to offer in elucidating the causal relationship between chemical exposure and disease.68 These
66. Toxicologists also have indirect means of approaching exposure through symptoms. For many agents, there is a known threshold for smell and a reasonable range of levels that might cause symptoms. For example, the use of toxicological expertise is appropriate in a situation in which chronic exposure to a volatile hydrocarbon is alleged to have occurred at levels at which acute exposure would be expected to render the individual unconscious. Toxicologists may also contribute knowledge of the extent of individual exposure based upon appropriate assumptions concerning inhalation rate or water use; for example, children inhale more per body mass than do adults, and outdoor workers in hot climates will drink more fluids.
67. In terms of general causation, accurate exposure assessment is important because a true effect can be missed when cohorts include many workers with little exposure to the putative offending agent, thereby diluting the actual effect. See Peter F. Infante, Benzene Exposure and Multiple Myeloma: A Detailed Meta-analysis of Benzene Cohort Studies, 1076 Ann. N.Y. Acad. Sci. 90–109 (2006), for a discussion of this issue in relation to a meta-analysis of the potential causative role of benzene in multiple myeloma. On the other hand, an association between exposure and effect occurring solely by chance is more likely if the effect does not meet the expected standard of being more pronounced in those receiving the highest dose. See Bernard D. Goldstein, Toxic Torts: The Devil Is in the Dose, 16 J.L. & Pol’y 551–85 (2008). Setting regulatory standards based upon the observed effect in a cohort often requires a risk assessment, which in turn is dependent on understanding the extent of the exposure. This has led to extensive retrospective reconstruction of exposure in key cohorts.
68. See Michael D. Green et al., Reference Guide on Epidemiology, Section V, in this manual. For example, in Norris v. Baxter Healthcare, 397 F.3d 878, 882 (10th Cir. 2005), testimony was excluded as unreliable in which the expert ignored epidemiological studies that conflicted with the expert’s opinion. However, epidemiological studies are not always necessary. Glastetter v. Novartis Pharms. Corp., 252 F.3d 986, 999 (8th Cir. 2001).
sciences often go hand in hand with assessments of the risks of chemical exposure, without artificial distinctions being drawn between them. However, although courts generally rule epidemiological expert opinion admissible, the admissibility of toxicological expert opinion has been more controversial because of uncertainties regarding extrapolation from animal and in vitro data to humans. This has been particularly true in cases in which relevant epidemiological research data exist. Yet the methodological weaknesses of some epidemiological studies, including their inability to measure exposure accurately and their small numbers of subjects, render these studies difficult to interpret.69 In contrast, because animal and cell studies permit researchers to isolate the effects of exposure to a single chemical or to known mixtures, toxicological findings offer unique information concerning dose–response relationships, mechanisms of action, specificity of response, and other information relevant to the assessment of causation.70
The gold standard in clinical epidemiology and in the testing of pharmaceutical agents is the randomized, double-blind controlled trial in which the control and intervention groups are well matched. Although appropriate and very informative for the testing of pharmaceutical agents, such deliberate exposure of human subjects is generally unethical for chemicals used for other purposes. The randomized control design is in essence what is used in a classic toxicological study in laboratory animals, although matching is more readily achieved because the animals are genetically similar and have identical environmental histories.
Dose issues are at the interface between toxicology and epidemiology. Many epidemiological studies of the potential risk of chemicals do not have direct information about dose, although qualitative differences among subgroups or in comparison with other studies can be inferred. The epidemiology database includes many studies that are probing for the potential for an association between a cause and an effect. Thus a study asking all those suffering from a specific disease a multiplicity of questions related to potential exposures is bound to find some statistical association between the disease and one or more exposure conditions. Such studies generate hypotheses that can then be evaluated more thoroughly by subsequent studies that more narrowly focus on the potential cause-and-effect
69. Id. See also Michael D. Green et al., Reference Guide on Epidemiology, in this manual.
70. Both commonalities and differences between animal responses and human responses to chemical exposures were recognized by the court in International Union, United Automobile, Aerospace and Agricultural Implement Workers of America, UAW v. Pendergrass, 878 F.2d 389 (D.C. Cir. 1989). In reviewing the results of both epidemiological and animal studies on formaldehyde, the court stated: “Humans are not rats, and it is far from clear how readily one may generalize from one mammalian species to another. But in light of the epidemiological evidence [of carcinogenicity] that was not the main problem. Rather it was the absence of data at low levels.” Id. at 394. The court remanded the matter to OSHA to reconsider its findings that formaldehyde presented no specific carcinogenic risk to workers at exposure levels of 1 part per million or less. See also Hopkins v. Dow Corning Corp., 33 F.3d 1116 (9th Cir. 1994); In re Accutane Prod. Liab., 511 F. Supp. 2d 1288, 1292 (M.D. Fla. 2007); United States v. Philip Morris USA, Inc., 449 F. Supp. 2d 1, 182 (D.D.C. 2006); Ambrosini v. Labarraque, 101 F.3d 129, 141 (D.C. Cir. 1996).
relation. One way to evaluate the strength of the association is to assess whether those epidemiological studies evaluating cohorts with relatively high exposure observe the association.71
The requirement in certain jurisdictions for epidemiological evidence of a relative risk greater than two (RR > 2) for general causation also has limited the utilization of toxicological evidence.72 A firm requirement for such evidence means that if the epidemiological database showed, with statistical significance, that exposure to 10 parts per million of an agent for 20 years produced an 80% increase in risk (RR = 1.8), the court could not hear the case of a plaintiff alleging that exposure to the same agent at 50 parts per million for 20 years caused the adverse outcome. Yet to a toxicologist there would be little question that the fivefold higher dose would lead to more than a doubling of the risk, all other facets of the case being similar.
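The arithmetic behind the RR > 2 threshold, not spelled out in the text, is the standard attributable-fraction argument: treating the attributable fraction among the exposed as the probability that the agent caused a given case, "more likely than not" corresponds to that probability exceeding one-half. A sketch of the derivation, under the simplifying assumption that the relative risk applies uniformly to all exposed individuals:

```latex
% Probability of causation (PC) taken as the attributable fraction
% among the exposed, assuming the relative risk (RR) applies
% uniformly to the exposed group:
\[
  PC \;=\; \frac{RR - 1}{RR}
\]
% "More likely than not" (preponderance of the evidence) requires PC > 1/2:
\[
  \frac{RR - 1}{RR} \;>\; \frac{1}{2}
  \quad\Longleftrightarrow\quad
  RR \;>\; 2
\]
```

On this reasoning, an RR of 1.8 yields a probability of causation of 0.8/1.8, or about 44%, just short of the preponderance standard, which is why courts applying the rule would exclude such evidence despite the higher dose at issue.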
Even though there is little toxicological data on many of the 75,000 compounds in general commerce, there is far more information from toxicological studies than from epidemiological studies.73 It is much easier, and more economical, to expose an animal to a chemical or to perform in vitro studies than it is to perform epidemiological studies. This difference in data availability is evident even for cancer causation, for which toxicological study is particularly expensive and time-consuming. Of the perhaps two dozen chemicals that reputable international authorities agree are known human carcinogens based on positive epidemiological studies, arsenic is the only one not known to be an animal carcinogen. Yet there are more than 100 known animal carcinogens for which there is no valid epidemiological database, and others for which the epidemiological database has been
71. For common chemicals, it is not unusual that a literature search reveals an association with virtually any disease. As an example of considering dose issues across epidemiological studies, see Luoping Zhang et al., Formaldehyde Exposure and Leukemia: A New Meta-Analysis and Potential Mechanisms, 681 Mutat. Res. 150–68 (2008). The subject of the strength of an epidemiological association and its relation to causality is considered in Michael D. Green et al., Reference Guide on Epidemiology, in this manual.
72. The basis for the use of RR > 2 is the translation of the preponderance of evidence, or “more likely than not,” as a basis for tort law into at least a doubling of risk. An example is the Havner rule in Texas, which for general causation requires that there be at least two epidemiological studies with a statistically significant RR > 2 associating a putative cause with an effect (Merrell Dow Pharms. v. Havner, 953 S.W.2d 706, 716 (Tex. 1997)). For a discussion of the use by jurisdictions of relative risk > 2 for general and specific causation, see Russellyn S. Carruth & Bernard D. Goldstein, Relative Risk Greater Than Two in Proof of Causation in Toxic Tort Litigation, 41 Jurimetrics 195 (2001); for the toxicological issues, see Bernard D. Goldstein, Toxic Torts: The Devil Is in the Dose, 16 J.L. & Pol’y 551–85 (2008).
73. See generally Committee on Toxicity Testing and Assessment of Environmental Agents, supra note 24. See also National Research Council, Toxicity Testing: Strategies to Determine Needs and Priorities (1984); Myra Karstadt & Renee Bobal, Availability of Epidemiologic Data on Humans Exposed to Animal Carcinogens, 2 Teratogenesis, Carcinogenesis & Mutagenesis 151 (1982); Lorenzo Tomatis et al., Evaluation of the Carcinogenicity of Chemicals: A Review of the Monograph Program of the International Agency for Research on Cancer, 38 Cancer Res. 877, 881 (1978).
equivocal.74 To clarify any findings, regulators can require a repeat of an equivocal 2-year animal toxicological study or the performance of additional laboratory studies in which animals are deliberately exposed to the chemical. Such deliberate exposure is not possible in humans. As a general rule, unequivocally positive epidemiological studies reflect prior workplace practices that led to relatively high levels of chemical exposure for a limited number of individuals and that, fortunately, in most cases no longer occur. Thus an additional prospective epidemiological study often is not possible, and even the ability to do retrospective studies is constrained by the passage of time.
In essence, epidemiological findings of an adverse effect in humans represent a failure of toxicology as a preventive science, or of regulatory authorities or other responsible parties in controlling exposure to a hazardous chemical or physical agent. A corollary of the tenet that all chemical and physical agents are harmful at some dose is that society depends upon toxicological science to discover these harmful effects and on regulators and responsible parties to prevent human exposure at harmful levels or to ensure that the agent is not produced. Epidemiology is a valuable backup approach that functions to detect failures of primary prevention. The two disciplines complement each other, particularly when the approaches are iterative.
Once the expert has been qualified, he or she is expected to offer an opinion on whether the plaintiff’s disease was caused by exposure to a chemical. To do so, the expert relies on the principles of toxicology to provide a scientifically valid
74. The absence of epidemiological data is due, in part, to the difficulties in conducting cancer epidemiology studies, including the lack of suitably large groups of individuals exposed for a sufficient period of time, long latency periods between exposure and manifestation of disease, the high variability in the background incidence of many cancers in the general population, and the inability to measure actual exposure levels. These same concerns have led some researchers to conclude that “many negative epidemiological studies must be considered inconclusive” for exposures to low doses or weak carcinogens. Henry C. Pitot III & Yvonne P. Dragan, Chemical Carcinogenesis, in Casarett and Doull’s Toxicology: The Basic Science of Poisons 201, 240–41 (Curtis D. Klaassen ed., 5th ed. 1996).
75. Determinations about cause-and-effect relations by regulatory agencies often depend upon expert judgment exercised by assessing the weight of evidence. For a discussion of this process as used by the International Agency for Research on Cancer of the World Health Organization and the role of information about mechanisms of toxicity, see Vincent J. Cogliano et al., Use of Mechanistic Data in IARC Evaluations, 49 Envtl. & Molecular Mutagenesis 100 (2008). For the use of expert judgment in EPA’s response to submission of information for premanufacture notification required under the Toxic Substances Control Act, 15 U.S.C. §§ 2604, 2605(e), 40 C.F.R. §§ 720 et seq., see Chemical Manufacturers Ass’n v. EPA, 859 F.2d 977 (D.C. Cir. 1988).
methodology for establishing causation and then applies the methodology to the facts of the case.
An opinion on causation should be premised on three preliminary assessments. First, the expert should analyze whether the disease can be related to chemical exposure by a biologically plausible theory. Second, the expert should examine whether the plaintiff was exposed to the chemical in a manner that can lead to absorption into the body. Third, the expert should offer an opinion about whether the dose to which the plaintiff was exposed is sufficient to cause the disease.
The following questions help evaluate the strengths and weaknesses of toxicological evidence.
A. On What Species of Animals Was the Compound Tested? What Is Known About the Biological Similarities and Differences Between the Test Animals and Humans? How Do These Similarities and Differences Affect the Extrapolation from Animal Data in Assessing the Risk to Humans?
All living organisms share a common biology that leads to marked similarities in the responsiveness of subcellular structures to toxic agents. Among mammals, organ structure and function are sufficiently similar to permit extrapolation from one species to another in most instances. Comparative information concerning factors that modify the toxic effects of chemicals, including absorption, distribution, metabolism, and excretion, in the laboratory test animals and in humans enhances the expert’s ability to extrapolate from laboratory animals to humans.76
The expert should review similarities and differences between the animal species in which the compound has been tested and humans. This analysis should form the basis of the expert’s opinion regarding whether extrapolation from animals to humans is warranted.77
76. See generally supra notes 35–36 and accompanying text.
77. The failure to review similarities and differences in metabolism in performing cross-species extrapolation has led to the exclusion of opinions based on animal data. See In re Silicone Gel Breast Implants Prods. Liab. Litig., 318 F. Supp. 2d 879, 891 (C.D. Cal. 2004); Fabrizi v. Rexall Sundown, Inc., 2004 WL 1202984, at *8 (W.D. Pa. June 4, 2004); Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387, 1410 (D. Or. 1996); Nelson v. Am. Sterilizer Co., 566 N.W.2d 671 (Mich. Ct. App. 1997). But see In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 779–80 (3d Cir. 1994) (noting that humans and monkeys are likely to show similar sensitivity to PCBs), cert. denied sub nom. Gen. Elec. Co. v. Ingram, 513 U.S. 1190 (1995). As the Supreme Court noted in General Electric Co. v. Joiner, 522 U.S. 136, 144 (1997), the issue regarding admissibility is not whether animal studies are ever admissible to establish causation, but whether the particular studies relied upon by plaintiff’s experts were sufficiently supported. See Carl F. Cranor et al., Judicial Boundary Drawing and the Need for Context-Sensitive Science in Toxic Torts After Daubert v. Merrell Dow Pharmaceuticals, Inc., 16 Va. Envtl. L.J. 1, 38 (1996).
In general, an overwhelming similarity is apparent in the biology of all living things, and there is a particularly strong similarity among mammals. Of course, laboratory animals differ from humans in many ways. For example, rats do not have gallbladders. Thus, rat data would not be pertinent to the possibility that a compound produces human gallbladder toxicity.78 Note that many subjective symptoms are poorly modeled in animal studies. Thus, complaints that a chemical has caused nonspecific symptoms, such as nausea, headache, and weakness, for which there are no objective manifestations in humans, are difficult to test in laboratory animals.
Some toxic agents affect only specific organs and not others. This organ specificity may be due to particular patterns of absorption, distribution, metabolism, and excretion; the presence of specific receptors; or organ function. For example, organ specificity may reflect the presence in the organ of relatively high levels of an enzyme capable of metabolizing or changing a compound to a toxic form of the compound,79 or it may reflect the relatively low level of an enzyme capable of detoxifying a compound. An example of the former is liver toxicity caused by inhaled carbon tetrachloride, which affects the liver but not the lungs because of extensive metabolism to a toxic metabolite within the liver but relatively little such metabolism in the lung.80
Some chemicals, however, may cause nonspecific effects or even multiple effects. Lead is an example of a toxic agent that affects many organ systems, including the blood, the central and peripheral nervous systems, the reproductive system, and the kidneys.
The basis of specificity often reflects the function of individual organs. For example, the thyroid is particularly susceptible to radioactive iodine in atomic fallout because thyroid hormone is unique within the body in that it requires iodine. Through evolution, a very efficient and specific mechanism has developed that
78. See, e.g., Edward J. Calabrese, Multiple Chemical Interactions 583–89 tbl.14-1 (1991). Species differences that produce a qualitative difference in response to xenobiotics are well known. Sometimes understanding the mechanism underlying the species difference can allow one to predict whether the effect will occur in humans. Thus, carbaryl, an insecticide commonly used for gypsy moth control, among other things, produces fetal abnormalities in dogs but not in hamsters, mice, rats, and monkeys. Dogs lack the specific enzyme involved in metabolizing carbaryl; the other species tested all have this enzyme, as do humans. Therefore, it has been assumed that humans are not at risk for fetal malformations produced by carbaryl.
79. Certain chemicals act directly to produce toxicity, whereas others require the formation of a toxic metabolite.
80. Brian Jay Day et al., Potentiation of Carbon Tetrachloride-Induced Hepatotoxicity and Pneumotoxicity by Pyridine, 8 J. Biochem. Toxicol. 11 (1993).
concentrates any absorbed iodine preferentially within the thyroid, rendering the thyroid particularly at risk from radioactive iodine. In a test tube, the radiation from radioactive iodine can affect the genetic material obtained from any cell in the body, but in the intact laboratory animal or human, only the thyroid is at risk.
The unfolding of the human genome already is beginning to provide information pertinent to understanding the wide variation in human risk from environmental chemicals. The impact of this understanding on toxic tort causation issues remains to be explored.81
Understanding the structural aspects of chemical toxicology has led to the use of structure–activity relationships (SAR) as a formal method of predicting the potential toxicity of new chemicals. This technique compares the chemical structure of compounds with known toxicity to the chemical structure of compounds with unknown toxicity. Toxicity is then estimated on the basis of the molecular similarities between the two compounds. Although SAR is used extensively by EPA in evaluating many new chemicals submitted under the premanufacture notification requirements of TSCA, its reliability has a number of limitations.82
81. Committee on Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment, National Research Council, Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment (2007); Gary E. Marchant, Toxicogenomics and Toxic Torts, 20 Trends Biotech. 329 (2002). Genomics can also be misinterpreted. A recent example is the use of white blood cell gene expression to determine whether benzene was a cause of acute myelogenous leukemia (AML) in individual workers. M.T. Smith, Misuse of Genomics in Assigning Causation in Relation to Benzene Exposure, 14 Int’l J. Occup. Envtl. Health 144–46 (2008), describes why the failure to match a pattern of DNA expression in workers with AML who were previously exposed to benzene is not scientifically defensible as a means to establish the lack of causation, as said to have been done in workers’ compensation cases in California. The wide range in the rate of metabolism of chemicals is at least partly under genetic control. A study of Chinese workers exposed to benzene found approximately a doubling of risk in people with high levels of either an enzyme that increased the rate of formation of a toxic metabolite or an enzyme that decreased the rate of detoxification of this metabolite. There was a sevenfold increase in risk for those who had both genetically determined variants. N. Rothman et al., Benzene Poisoning, A Risk Factor for Hematological Malignancy, Is Associated with the NQO1 609C→T Mutation and Rapid Fractional Excretion of Chlorzoxazone, 57 Cancer Res. 239–42 (1997). See also Frederica P. Perera, Molecular Epidemiology: Insights into Cancer Susceptibility, Risk Assessment, and Prevention, 88 J. Nat’l Cancer Inst. 496 (1996).
82. For example, benzene and the alkyl benzenes (which include toluene, xylene, and ethyl benzene) share a similar chemical structure. SAR works exceptionally well in predicting the acute central nervous system anesthetic-like effects of both benzene and the alkyl benzenes. Although there are slight differences in dose–response relationships, they are readily explained by the interrelated factors of chemical structure, vapor pressure, and lipid solubility (the brain is highly lipid). National Research Council, The Alkyl Benzenes (1981). However, only benzene produces damage to the bone marrow and leukemia; the alkyl benzenes do not have this effect. This difference results from specific toxic metabolites that are formed from benzene but not from the alkyl benzenes. Thus SAR is predictive of neurotoxic effects but not bone marrow effects. See Preston & Hoffman, supra note 33, at 277. Advances in computational approaches show promise in improving SAR. See Committee on Toxicity Testing and Assessment of Environmental Agents, National Research Council, Toxicity Testing in the 21st Century: A Vision and a Strategy, ch. 4 (2007).
In Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), the Court rejected a per se exclusion of SAR, animal data, and reanalysis of previously published epidemiological data where there were negative epidemiological data. However, as the court recognized in Sorensen v. Shaklee Corp., 31 F.3d 638, 646 n.12 (8th Cir. 1994), the problem with SAR is that “‘[m]olecules with minor structural differences can produce very different biological effects.’” (quoting Joseph Sanders, From Science to Evidence: The Testimony on Causation in the Bendectin Cases, 46 Stan. L. Rev. 1, 19 (1993)). See also Glastetter v. Novartis Pharms. Corp., 252 F.3d 986, 990 (8th Cir. 2001); Polski v. Quigley Corp., 2007 WL 2580550, at *6 (D. Minn. Sept. 5, 2007).
Cellular and tissue culture research can be particularly helpful in identifying mechanisms of toxic action and potential target-organ toxicity. The major barrier to the use of in vitro results is the frequent inability to relate doses that cause cellular toxicity to doses that cause whole-animal toxicity. In many critical areas, knowledge that permits such quantitative extrapolation is lacking.83 Nevertheless, the ability to quickly test new products through in vitro tests, using human cells, provides invaluable “early warning systems” for toxicity.84
No matter how strong the temporal relationship between exposure and the development of disease, or the supporting epidemiological evidence, it is difficult to accept an association between a compound and a health effect when no mechanism can be identified by which the chemical or physical exposure leads to the putative effect.85
83. In Vitro Toxicity Testing: Applications to Safety Evaluation 8 (John M. Frazier ed., 1992). Despite its limitations, in vitro research can strengthen inferences drawn from whole-animal bioassays and can support opinions regarding whether the association between exposure and disease is biologically plausible. See Preston & Hoffman, supra note 33, at 278–93; Rogers & Kavlock, supra note 33, at 319–23.
84. Graham v. Playtex Prods., Inc., 993 F. Supp. 127, 131–32 (N.D.N.Y. 1998) (opinion based on in vitro experiments showing that rayon tampons were associated with higher risk of toxic shock syndrome was admissible in the absence of epidemiological evidence). See also Allgood v. General Motors Corp., 2006 WL 2669337, at *7 (S.D. Ind. Sept. 18, 2006); In re Ephedra Prods. Liab. Litig., 393 F. Supp. 2d 181, 194 (S.D.N.Y. 2005) (in vitro studies may be subject of proper inferences “although the gaps between such data and definitive evidence of causality are real and subject to challenge before the jury, they are not so great as to require the opinion to be excluded from evidence. Inconclusive science is not the same as junk science”).
An expert who opines that exposure to a compound caused a person’s disease engages in deductive clinical reasoning.86 In most instances, cancers and other diseases do not wear labels documenting their causation. The opinion is based on an assessment of the individual’s exposure, including the amount, the temporal relationship between the exposure and disease, and other disease-causing factors. This information is then compared with scientific data on the relationship between exposure and disease. The certainty of the expert’s opinion depends on the strength of the research data demonstrating a relationship between exposure and the disease at the dose in question and the presence or absence of other disease-causing factors (also known as confounding factors).87
Particularly problematic are generalizations made in personal injury litigation from regulatory positions. Regulatory standards are set for purposes far different from determining the preponderance of evidence in a toxic tort case. For example, if regulatory standards are discussed in toxic tort cases to provide a reference point for assessing exposure levels, it must be recognized that there is a great deal of variability in the extent of evidence required to support different regulations.88 The extent of evidence required to support a regulation depends on
85. However, theories of bioplausibility, without additional data, have been found to be insufficient to support a finding of causation. See, e.g., Golod v. Hoffman La Roche, 964 F. Supp. 841, 860–61 (S.D.N.Y. 1997); Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387, 1414 (D. Or. 1996). But see Best v. Lowe’s Home Centers, Inc., 2008 WL 2359986, at *8 (E.D. Tenn. June 5, 2008) (expert relied on temporal proximity in concluding that plaintiff lost his sense of smell due to chemical exposure).
86. For an example of deductive clinical reasoning based on known facts about the toxic effects of a chemical and the individual’s pattern of exposure, see Bernard D. Goldstein, Is Exposure to Benzene a Cause of Human Multiple Myeloma? 609 Annals N.Y. Acad. Sci. 225 (1990).
87. Causation issues are discussed in Michael D. Green et al., Reference Guide on Epidemiology, Section V, and Wong et al., Reference Guide on Medical Testimony, Section IV, in this manual. See also David L. Bazelon, Science and Uncertainty: A Jurist’s View, 5 Harv. Envtl. L. Rev. 209 (1981); Troyen A. Brennan, Causal Chains and Statistical Links: The Role of Scientific Uncertainty in Hazardous-Substance Litigation, 73 Cornell L. Rev. 469 (1988); Joseph Sanders, Scientific Validity, Admissibility and Mass Torts After Daubert, 78 Minn. L. Rev. 1387 (1994); Orrin E. Tilevitz, Judicial Attitudes Towards Legal and Scientific Proof of Cancer Causation, 3 Colum. J. Envtl. L. 344, 381 (1977).
- The law (e.g., the Clean Air Act National Ambient Air Quality Standard provisions have language focusing regulatory activity for primary pollutants on adverse health consequences to sensitive populations with an adequate margin of safety and with no consideration of economic consequences, while regulatory activity under TSCA clearly asks for some balance between the societal benefits and risks of new chemicals89);
- The specific end point of concern (e.g., consider the concern caused by cancer and adverse reproductive outcomes versus almost anything else); and
- The societal impact (e.g., the public’s support for control of an industry that causes air pollution versus the public’s relative lack of desire to alter personal automobile use patterns).
These three concerns, as well as others, including costs, politics, and the virtual certainty of litigation challenging the regulation, have an impact on the level of scientific proof required by the regulatory decisionmaker.90
In addition, regulatory standards traditionally include protective factors to reasonably ensure that susceptible individuals are not put at risk. Furthermore, standards often are based on the risk that results from lifetime exposure. Accordingly, the mere fact that an individual has been exposed to a level above a standard does not necessarily mean that an adverse effect has occurred.
88. See, e.g., In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 781 (3d Cir. 1994) (district court abused its discretion in excluding animal studies relied upon by EPA), cert. denied sub nom. General Elec. Co. v. Ingram, 513 U.S. 1190 (1995); Molden v. Georgia Gulf Corp., 465 F. Supp. 2d 606, 613 (M.D. La. 2006) (plaintiff failed to establish prima facie case due to failure to establish exposure at a level considered dangerous by regulatory agency); In re W.R. Grace & Co., 355 B.R. 462, 490 (Bankr. D. Del. 2006) (OSHA standards of exposure relevant to causation but not determinative for exposure occurring due to home attic insulation). See also John Endicott, Interaction Between Regulatory Law and Tort Law in Controlling Toxic Chemical Exposure, 47 SMU L. Rev. 501 (1994).
89. See, e.g., Clean Air Act Amendments of 1990, 42 U.S.C. § 7412(f) (1994); Toxic Substances Control Act, 15 U.S.C. § 2605 (1994).
90. These concerns are discussed in Stephen Breyer, Breaking the Vicious Circle: Toward Effective Risk Regulation (1993).
Evidence of exposure is essential in determining the effects of harmful substances. Basically, potential human exposure is measured in one of three ways. First, when direct measurements cannot be made, exposure can be estimated by mathematical modeling, in which one uses a variety of physical factors to estimate the transport of the pollutant from the source to the receptor. For example, mathematical models take into account such factors as wind variations to allow calculation of the transport of radioactive iodine from a federal atomic research facility to nearby residential areas. Second, exposure can be directly measured in the medium in question—air, water, food, or soil. When the medium of exposure is water, soil, or air, hydrologists or meteorologists may be called upon to contribute their expertise to measuring exposure. The third approach directly measures human receptors through some form of biological monitoring, such as blood tests to determine blood lead levels or urinalyses to check for a urinary metabolite indicative of pollutant exposure. Ideally, both environmental testing and biological monitoring are performed; however, this is not always possible, particularly in instances of past exposure.91
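The modeling approach can be made concrete with a minimal sketch of the kind of calculation an air-dispersion model performs, using the standard steady-state Gaussian plume formula for the ground-level concentration directly downwind of an elevated source. Every input value here (release rate, wind speed, stack height, dispersion coefficients) is a hypothetical illustration; real models derive the dispersion coefficients from downwind distance and atmospheric stability class.

```python
import math

def plume_concentration(q_g_per_s, wind_m_per_s, sigma_y, sigma_z, stack_height_m):
    """Ground-level centerline concentration (g/m^3) from a steady-state
    Gaussian plume with ground reflection. sigma_y and sigma_z are the
    horizontal and vertical dispersion coefficients (in meters)."""
    return (q_g_per_s / (math.pi * sigma_y * sigma_z * wind_m_per_s)
            * math.exp(-stack_height_m ** 2 / (2 * sigma_z ** 2)))

# Hypothetical release: 10 g/s from a 50-m stack in a 4-m/s wind, with
# dispersion coefficients appropriate to some fixed downwind distance.
c = plume_concentration(10.0, 4.0, sigma_y=80.0, sigma_z=40.0, stack_height_m=50.0)
print(f"{c * 1e6:.0f} micrograms per cubic meter")
```

The same structure—a source term, transport physics, and a receptor concentration—underlies the far more elaborate models used to reconstruct historical releases.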
The toxicologist must go beyond understanding exposure to determine if the individual was exposed to the compound in a manner that can result in absorption into the body. The absorption of the compound is a function of its physicochemical properties, its concentration, and the presence of other agents or conditions that assist or interfere with its uptake. For example, inhaled lead is absorbed almost totally, whereas ingested lead is taken up only partially into the body. Iron deficiency and low nutritional calcium intake, both common conditions of inner-city children, increase the amount of ingested lead that is absorbed in the gastrointestinal tract and passes into the bloodstream.92
Once a compound is absorbed into the body through the skin, lungs, or gastrointestinal tract, it is distributed throughout the body through the bloodstream. Thus the rate of distribution depends on the rate of blood flow to various organs and tissues. Distribution and resulting toxicity also are influenced by other factors, including the dose, the route of entry, tissue solubility, lymphatic supplies to the organ, metabolism, and the presence of specific receptors or uptake mechanisms within body tissues.
91. See, e.g., Mitchell v. Gencorp Inc., 165 F.3d 778, 781 (10th Cir. 1999) (“[g]uesses, even if educated, are insufficient to prove the level of exposure in a toxic tort case”); Wright v. Willamette Indus., Inc., 91 F.3d 1105, 1107 (8th Cir. 1996); Ingram v. Solkatronic Chemical, Inc., 2005 WL 3544244, at *11–*18 (N.D. Okla. 2005) (no information on dose so causation cannot be evaluated); In re Three Mile Island Litig. Consol. Proceedings, 927 F. Supp. 834, 870 (M.D. Pa. 1996) (plaintiffs failed to present direct or indirect evidence of exposure to cancer-inducing levels of radiation); Valentine v. Pioneer Chlor Alkali Co., 921 F. Supp. 666, 678 (D. Nev. 1996). But see CSX Transp., Inc. v. Moody, 2007 WL 2011626, at *7 (Ky. Ct. App. July 13, 2007) (specific dose of solvent exposure not necessary as long as evidence of exposure that could cause plaintiff’s toxic encephalopathy is presented, including how often solvents were used, duration of exposure, and documentation of physical symptoms while plaintiff worked with solvents).
92. The term “bioavailability” is used to describe the extent to which a compound, such as lead, is taken up into the body. In essence, bioavailability is at the interface between exposure and absorption into the organism. For an example of the impact of bioavailability on a governmental decision, see Thomas H. Umbreit et al., Bioavailability of Dioxin in Soil from a 2,4,5-T Manufacturing Site, 232 Science 497–99 (1986), who found that the bioavailability of dioxins in the soil of Newark, New Jersey, was negligible compared with that of Times Beach, Missouri—the latter community having previously been evacuated because of dioxin soil contamination.
Metabolism is the alteration of a chemical by bodily processes. It does not necessarily result in less toxic compounds being formed. In fact, many of the organic chemicals that are known human cancer-causing agents require metabolic transformation before they can cause cancer. A distinction often is made between direct-acting agents, which cause toxicity without any metabolic conversion, and indirect-acting agents, which require metabolic activation before they can produce adverse effects. Metabolism is complex, because a variety of pathways compete for the same agent; some produce harmless metabolites, and others produce toxic agents.93
Excretory routes are urine, feces, sweat, saliva, expired air, and lactation. Many inhaled volatile agents are eliminated primarily by exhalation. Small water-soluble compounds are usually excreted through urine. Higher-molecular-weight compounds are often excreted through the biliary tract into the feces. Certain fat-soluble, poorly metabolized compounds, such as PCBs, may persist in the body for decades, although they can be excreted in the milk fat of lactating women.
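The wide range of persistence described above is commonly summarized by a compound’s biological half-life. A minimal sketch, assuming simple first-order (one-compartment) elimination and hypothetical half-lives:

```python
import math

def fraction_remaining(half_life_days, elapsed_days):
    """Fraction of an absorbed dose still in the body, assuming
    first-order (one-compartment) elimination kinetics."""
    k = math.log(2) / half_life_days   # elimination rate constant
    return math.exp(-k * elapsed_days)

# A rapidly exhaled volatile agent (hypothetical 6-hour half-life) is
# nearly gone within a day, while a fat-soluble, poorly metabolized
# compound (hypothetical 5-year half-life) persists for decades.
solvent = fraction_remaining(half_life_days=0.25, elapsed_days=1.0)             # ~6% remains
persistent = fraction_remaining(half_life_days=5 * 365, elapsed_days=20 * 365)  # ~6% after 20 years
```

Real toxicokinetics is often multi-compartment, but the exponential form explains why elapsed time since exposure matters so much when interpreting biological monitoring results.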
In acute toxicity, there is usually a short time period between cause and effect. However, in some situations, the length of basic biological processes necessitates a longer period of time between initial exposure and the onset of observable disease. For example, in acute myelogenous leukemia, the adult form of acute leukemia, at least 1 to 2 years must elapse from initial exposure to radiation, benzene, or cancer chemotherapy before the manifestation of a clinically recognizable case of leukemia, and the period of significantly higher risk from the last exposure usually persists for no more than about 15 years. A toxic tort claim alleging a shorter or longer time period between cause and effect is scientifically highly debatable. Much longer latency periods are necessary for the manifestation of solid tumors caused by agents such as asbestos and arsenic.94
93. Courts have explored the relationship between metabolic transformation and carcinogenesis. See, e.g., In re Methyl Tertiary Butyl Ether (MTBE) Prods. Liab. Litig., 2008 WL 2607852, at *2 (S.D.N.Y. July 1, 2008); Stites v. Sundstrand Heat Transfer, Inc., 660 F. Supp. 1516, 1519 (W.D. Mich. 1987).
For agents that produce effects other than through mutations, it is assumed that there is some level that is incapable of causing harm. If the level of exposure was below this no observable effect level (NOEL), or threshold, a relationship between the exposure and disease cannot be established.95 When only laboratory animal data are available, the expert extrapolates the NOEL from animals to humans by calculating the animal NOEL based on experimental data and decreasing this level by one or more safety factors to ensure no human effect.96 The NOEL can also be calculated from human toxicity data if they exist. This analysis, however, is not applied to substances that exert toxicity by causing mutations leading to cancer. Theoretically, any exposure at all to mutagens may increase the risk of cancer, although the risk may be very slight and not achieve medical probability.97
One of the basic and most useful tools in diagnosis and treatment of disease is the patient’s medical history.98 A thorough, standardized patient information questionnaire would be particularly useful for identifying the etiology, or causation, of illnesses related to toxic exposures; however, there is currently no validated or widely used questionnaire that gathers all pertinent information.99 Nevertheless, it is widely recognized that a thorough medical history involves the questioning and examination of the patient as well as appropriate medical testing. The patient’s written medical records also should be examined.
The following information is relevant to a patient’s medical history: past and present occupational and environmental history and exposure to toxic agents; lifestyle characteristics (e.g., use of nicotine and alcohol); family medical history (i.e., medical conditions and diseases of relatives); and personal medical history (i.e., present symptoms and results of medical tests as well as past injuries, medical conditions, diseases, surgical procedures, and medical test results).
In some instances, the reporting of symptoms can be in itself diagnostic of exposure to a specific substance, particularly in evaluating acute effects.100 For example, individuals acutely exposed to organophosphate pesticides report headaches, nausea, and dizziness accompanied by anxiety and restlessness. Other reported symptoms are muscle twitching, weakness, and hypersecretion with sweating, salivation, and tearing.101
Acute exposure to many toxic agents produces a constellation of nonspecific symptoms, such as headaches, nausea, lightheadedness, and fatigue. These types of symptoms are part of human experience and can be triggered by a host of medical and psychological conditions. They are almost impossible to quantify or document beyond the patient’s report. Thus, these symptoms can be attributed mistakenly to an exposure to a toxic agent or discounted as unimportant when in fact they reflect a significant exposure.102
94. The temporal relationship between exposure and causation is discussed in Rolen v. Hansen Beverage Co., 193 F. App’x 468, 473 (6th Cir. 2006) (“Expert opinions based upon nothing more than the logical fallacy of post hoc ergo propter hoc typically do not pass muster under Daubert.”). See also Young v. Burton, 2008 WL 2810237, at *17 (D.D.C. July 22, 2008); Dellinger v. Pfizer, Inc., 2006 WL 2057654, at *10 (W.D.N.C. July 16, 2006) (temporal relationship between exposure and illness alone not sufficient for causation when exposure was over an 18-month period); Cavallo v. Star Enterprise, 892 F. Supp. 756, 769–74 (E.D. Va. 1995) (expert testimony based primarily on temporal connection between exposure to jet fuel and onset of symptoms, without other evidence of causation, ruled inadmissible). But see In re Stand ‘N Seal, Prods. Liab. Litig., 623 F. Supp. 2d 1355, 1371–72 (N.D. Ga. 2009) (toxicologist’s causation opinion that exposure to grout sealer caused chemical pneumonitis not subject to Daubert challenge based on a strong temporal relationship between exposure and acute onset of respiratory symptoms despite lack of dose response data); In re Ephedra Prods. Liab. Litig., 2007 WL 2947451, at *2 (S.D.N.Y. Oct. 9, 2007) (when exposure is known to produce quick biological effects, a temporal relationship between exposure and effect can be used to infer causation); Nat’l. Bank of Commerce v. Dow Chem. Co., 965 F. Supp. 1490, 1525 (E.D. Ark. 1996) (“[T]here may be instances where the temporal connection between exposure to a given chemical and subsequent injury is so compelling as to dispense with the need for reliance on standard methods of toxicology.”). The issue of latency periods and the statute of limitations is considered in Carl F. Cranor, Toxic Torts: Science, Law and the Possibility of Justice 173 (2006).
95. See, e.g., Allen v. Pennsylvania Eng’g Corp., 102 F.3d 194, 199 (5th Cir. 1996) (“Scientific knowledge of the harmful level of exposure to a chemical, plus knowledge that the plaintiff was exposed to such quantities, are minimal facts necessary to sustain the plaintiff’s burden in a toxic tort case.”); Redland Soccer Club, Inc. v. Dep’t of the Army, 55 F.3d 827, 847 (3d Cir. 1995) (summary judgment for defendant precluded where exposure above cancer threshold level could be calculated from soil samples); Molden v. Georgia Gulf Corp., 465 F. Supp. 2d 606, 613 (M.D. La. 2006) (levels of phenol released into the air were not considered harmful by regulatory agencies); Adams v. Cooper Indus., Inc., 2007 WL 2219212, at *8 (E.D. Ky. July 30, 2007) (because plaintiffs’ experts have not attempted to quantify or measure the amount or dosage of a substance to which a plaintiff was exposed, their opinions are unreliable as to specific causation). But see Byers v. Lincoln Elec. Co., 607 F. Supp. 2d 863 n.101 (N.D. Ohio 2009) (no welder could ever provide evidence of actual exposure levels after the fact “which is why the law does not require mathematical precision to show toxic exposure” to support claims that inhaled manganese in welding fumes caused neurological injury); Tamraz v. BOC Group, Inc., 2008 U.S. Dist. LEXIS 54932, at *9–*10 (N.D. Ohio July 18, 2008) (plaintiffs were able to provide substantial evidence to support estimates of actual workplace conditions and exposure for welder exposed to manganese).
96. See, e.g., supra note 26 & accompanying text; Robert G. Tardiff & Joseph V. Rodricks, Toxic Substances and Human Risk: Principles of Data Interpretation 391 (1988); Joseph V. Rodricks, Calculated Risks 230–39 (2006); Lu, supra note 22, at 84. For regulatory toxicology, NOEL is being replaced by a more statistically robust approach known as the benchmark dose. See supra note 27 & accompanying text. For example, EPA’s use of the benchmark dose takes into account comprehensive dose–response information, unlike NOEL.
97. See sources cited supra note 28. See also Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142, 1164–65 (E.D. Wash. 2009) (toxicologists’ opinion that exposure to gasoline containing benzene caused truck driver’s acute myelogenous leukemia found unreliable where dose calculation was unreliable, and “no-threshold model” lacked scientific support). U.S. regulatory approaches aimed at protecting the general population tend to avoid setting a standard for a known human carcinogen, because any allowable level below the standard is at least theoretically capable of causing cancer. However, exposure to many chemical carcinogens, including benzene and arsenic, cannot be eliminated. Thus, agencies and Congress have developed a number of ingenious means to regulate carcinogens while not seeming to acquiesce in exposure of the general population to a carcinogen. These include FDA’s approach to de minimis risk and EPA’s setting of a zero maximum contaminant level goal for carcinogens in drinking water while setting a maximum contaminant level above zero that is “set as closely as possible to the MCLG, taking technology and cost data into account,” http://safewater.custhelp.com/cgi-bin/safewater.cfg/php/enduser/std_adp.php?p_faqid=1319. In contrast, occupational standards, which also take into account feasibility, permit exposure to known human carcinogens. A generally outmoded approach for environmental or indoor air guidelines has been to divide the permissible OSHA standard by a factor accounting for the presumed lifetime exposure to the environmental chemical compared with 45 years at a 40-hour workweek.
98. For a thorough discussion of the methods of clinical diagnosis, see John B. Wong et al., Reference Guide on Medical Testimony, in this manual. See also Jerome P. Kassirer & Richard I. Kopelman, Learning Clinical Reasoning (1991). A number of cases have considered the admissibility of the treating physician’s opinion based, in part, on medical history, symptomatology, and laboratory and pathology studies.
99. Office of Tech. Assessment, U.S. Congress, supra note 17, at 365–89.
100. But see Moore v. Ashland Chem., Inc., 126 F.3d 679, 693 (5th Cir. 1997) (discussion of relevance of symptoms within 45 minutes of exposure); Armstrong v. Durango Georgia Paper Co. 2005 WL 2373443, at *5 (S.D. Ga. Sept. 27, 2005) (plaintiffs exhibited temporary symptoms widely recognized by the medical community as those associated with exposure to chlorine gas).
101. Environmental Protection Agency, Recognition and Management of Pesticide Poisonings (4th ed. 1989).
102. The issue of whether the development of nonspecific symptoms may be related to pesticide exposure was considered in Kannankeril v. Terminix Int’l, Inc., 128 F.3d 802 (3d Cir. 1997). The court ruled that the trial court abused its discretion in excluding expert opinion that considered, and rejected, a negative laboratory test. Id. at 808–09. See also Kerner v. Terminix Int’l, Co., 2008 WL 341363, at *7 (S.D. Ohio Feb. 6, 2008) (expert testimony about causation admissible based on plaintiff’s nonspecific symptoms because scientific literature has linked exposure to pyrethrins and pyrethroids to numbness, tingling, burning sensations, and paresthesia); Wicker v. Consol. Rail Corp., 371 F. Supp. 2d 702, 732 (W.D. Pa. 2005).
In taking a careful medical history, the expert focuses on the time pattern of symptoms and disease manifestations in relation to any exposure and on the constellation of symptoms to determine causation. It is easier to establish causation when a symptom is unusual and rarely is caused by anything other than the suspect chemical (e.g., such rare cancers as hemangiosarcoma, associated with vinyl chloride exposure, and mesothelioma, associated with asbestos exposure). However, many cancers and other conditions are associated with several causative factors, complicating proof of causation.103
Two types of laboratory tests can be considered: tests that are routinely used in medicine to detect changes in normal body status and specialized tests that are used to detect the presence of the chemical or physical agent.104 Tests used to demonstrate the presence of a toxic agent are frequently unavailable from clinical laboratories. Even when available from a hospital or a clinical laboratory, a test such as that for carbon monoxide bound to hemoglobin (carboxyhemoglobin) is performed so rarely that it may raise concerns regarding its accuracy. Other tests, such as the test for blood lead levels, are required for routine surveillance of potentially exposed workers. However, the fact that a laboratory is certified for the testing of blood lead in workers, for which the OSHA action level is 40 micrograms per deciliter (µg/dl), does not necessarily mean that it will give reliable data on blood lead levels at the much lower Centers for Disease Control and Prevention action level of 10 µg/dl.
With few exceptions, acute and chronic diseases, including cancer, can be caused by either a single toxic agent or a combination of agents or conditions. In taking a careful medical history, the expert examines the possibility of competing causes, or confounding factors, for any disease, which leads to a differential diagnosis. In addition, ascribing causality to a specific source of a chemical requires that a history be taken concerning other sources of the same chemical. The failure of a physician to elicit such a history, or of a toxicologist to pay attention to such a history, raises questions about competence and leaves open the possibility of competing causes of the disease.105
103. Failure to rule out other potential causes of symptoms may lead to a ruling that the expert’s report is inadmissible. See, e.g., Perry v. Novartis Pharms. Corp., 564 F. Supp. 2d 452, 469 (E.D. Pa. 2008); Farris v. Intel Corp., 493 F. Supp. 2d 1174, 1185 (D.N.M. 2007); Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387, 1413 (D. Or. 1996); Rutigliano v. Valley Bus. Forms, 929 F. Supp. 779, 786 (D.N.J. 1996).
104. See, e.g., Kannankeril v. Terminix Int’l, Inc., 128 F.3d 802, 807 (3d Cir. 1997).
An individual’s simultaneous exposure to more than one chemical may result in a response that differs from that which would be expected from exposure to only one of the chemicals.106 When the effect of multiple agents is that which would be predicted by the sum of the effects of individual agents, it is called an additive effect; when it is greater than this sum, it is known as a synergistic effect; when one agent causes a decrease in the effect produced by another, the result is termed antagonism; and when an agent that by itself produces no effect leads to an enhancement of the effect of another agent, the response is termed potentiation.107
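These four categories amount to comparing a measured combined effect against the sum of the individual effects. The toy classifier below makes that comparison explicit; the tolerance band and all effect values are hypothetical, and real joint-action analysis works with full dose–response curves rather than single numbers.

```python
def classify_joint_effect(effect_a, effect_b, combined, tol=0.05):
    """Label the joint effect of two agents relative to the sum of
    their individual effects (a simplified effect-addition comparison)."""
    expected = effect_a + effect_b
    if (effect_a == 0 or effect_b == 0) and combined > expected * (1 + tol):
        return "potentiation"   # an inactive agent enhances the other's effect
    if abs(combined - expected) <= expected * tol:
        return "additive"
    return "synergistic" if combined > expected else "antagonistic"

print(classify_joint_effect(10, 10, 20))  # additive
print(classify_joint_effect(10, 10, 40))  # synergistic
print(classify_joint_effect(10, 10, 5))   # antagonistic
print(classify_joint_effect(0, 10, 30))   # potentiation
```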
Three types of toxicological approaches are pertinent to understanding the effects of mixtures of agents. One is based on the standard toxicological evaluation of common commercial mixtures, such as gasoline. The second approach is from studies in which the known toxicological effect of one agent is used to explore the mechanism of action of another agent, such as using a known specific inhibitor of a metabolic pathway to determine whether the toxicity of a second agent depends on this pathway. The third approach is based on an understanding of the basic mechanism of action of the individual components of the mixture, thereby allowing prediction of the combined effect, which can then be tested in an animal model.108
105. See, e.g., Perry v. Novartis Pharms. Corp., 564 F. Supp. 2d 452, 471 (E.D. Pa. 2008) (plaintiff’s experts failed to adequately account for the possibility that plaintiff’s T-LBL was idiopathic, and thus their conclusion that exposure to Elidel was a substantial cause of plaintiff’s cancer is unreliable and inadmissible); Bell v. Swift Adhesives, Inc., 804 F. Supp. 1577, 1580 (S.D. Ga. 1992) (expert’s opinion that workplace exposure to methylene chloride caused plaintiff’s liver cancer, without ruling out plaintiff’s infection with hepatitis B virus, a known liver carcinogen, was insufficient to withstand motion for summary judgment for defendant).
106. See generally Edward J. Calabrese, Multiple Chemical Interactions 97–115, 220–221 (1991).
107. Courts have been called on to consider the issue of synergy. In International Union, United Automobile, Aerospace & Agricultural Implement Workers of America v. Pendergrass, 878 F.2d 389, 391 (D.C. Cir. 1989), the court found that OSHA failed to sufficiently explain its findings that formaldehyde presented no significant carcinogenic risk to workers at exposure levels of 1 part per million or less. The court particularly criticized OSHA’s use of a linear low-dose risk curve rather than a risk-averse model after the agency had described evidence of synergy between formaldehyde and other substances that workers would be exposed to, especially wood dust. Id. at 395.
108. See generally Calabrese, supra note 106. EPA has been addressing the issue of multiple exposures to different agents within a community under the heading of cumulative risk assessment. This approach is particularly of importance in dealing with environmental justice concerns. See, e.g., Institute of Medicine, Toward Environmental Justice: Research, Education, and Health Policy Needs (1999); Michael A. Callahan & Ken Sexton, If Cumulative Risk Assessment Is the Answer, What Is the Question? 115 Envtl. Health Persp. 799–806 (2006).
Individuals who exercise inhale more air than sedentary individuals and therefore are exposed to higher doses of airborne environmental toxins. Similarly, differences in metabolism, which are inherited or caused by external factors, such as the levels of carbohydrates in a person’s diet, may result in differences in the delivery of a toxic product to the target organ.109
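The point about exercise follows from simple arithmetic: inhaled dose is approximately airborne concentration × ventilation rate × exposure time. A sketch with illustrative values (the ventilation rates are rough, order-of-magnitude figures, not measured data):

```python
def inhaled_dose_mg(conc_mg_per_m3, ventilation_m3_per_hr, hours, absorbed_fraction=1.0):
    """Inhaled dose (mg) = concentration x ventilation rate x time,
    optionally scaled by the fraction actually absorbed."""
    return conc_mg_per_m3 * ventilation_m3_per_hr * hours * absorbed_fraction

# Same air, same hour: roughly 0.6 m^3/hr of air breathed at rest
# versus roughly 3 m^3/hr during heavy exercise.
resting = inhaled_dose_mg(0.1, 0.6, hours=1)     # 0.06 mg
exercising = inhaled_dose_mg(0.1, 3.0, hours=1)  # 0.30 mg, a fivefold higher dose
```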
Moreover, for any given level of a toxic agent that reaches a target organ, damage may be greater because of a greater response of that organ. In addition, for any given level of target-organ damage, there may be a greater impact on particular individuals. For example, an elderly individual or someone with preexisting lung disease is less likely to tolerate a small decline in lung function caused by an air pollutant than is a healthy individual with normal lung function.
A person’s level of physical activity, age, sex, and genetic makeup, as well as exposure to therapeutic agents (such as prescription or over-the-counter drugs), affect the metabolism of the compound and hence its toxicity.110 Advances in human genetics research are providing information about susceptibility to environmental agents that may be relevant to determining the likelihood that a given exposure has a specific effect on an individual.
In toxicology, as in other scientific fields, acceptance of causation rests on multiple converging avenues of deductive reasoning from scientific data. The basis for this reasoning, however, is among the most difficult aspects of causation to describe quantitatively. If animal studies, pharmacological research on mechanisms of toxicity, in vitro tissue studies, and epidemiological research all document toxic effects of exposure to a compound, an expert’s opinion about causation in a particular case is much more likely to be true.111
109. See generally Calabrese, supra note 106.
110. The problem of differences in chemical sensitivity was addressed by the court in Gulf South Insulation v. United States Consumer Product Safety Commission, 701 F.2d 1137 (5th Cir. 1983). The court overturned the commission’s ban on urea-formaldehyde foam insulation because the commission failed to document in sufficient detail the level at which segments of the population were affected and whether their responses were slight or severe: “Predicting how likely an injury is to occur, at least in general terms, is essential to a determination of whether the risk of that injury is unreasonable.” Id. at 1148.
111. Consistency of research results was considered by the court in Marsee v. United States Tobacco Co., 639 F. Supp. 466, 469–70 (W.D. Okla. 1986). The defendant, the manufacturer of snuff alleged to cause oral cancer, moved to exclude epidemiological studies conducted in Asia that demonstrate
The more difficult problem is how to evaluate conflicting research results. When different research studies reach different conclusions regarding toxicity, the expert must be asked to explain how those results have been taken into account in the formulation of the expert’s opinion.
The basis of the toxicologist’s expert opinion in a specific case is a thorough review of the research literature and treatises concerning effects of exposure to the chemical at issue. To arrive at an opinion, the expert assesses the strengths and weaknesses of the research studies. The expert also bases an opinion on fundamental concepts of toxicology relevant to understanding the actions of chemicals in biological systems.
As the following series of questions indicates, no single academic degree, research specialty, or career path qualifies an individual as an expert in toxicology. Toxicology is a heterogeneous field. A number of indicia of expertise can be explored, however, that are relevant to both the admissibility and weight of the proffered expert opinion.
A. Does the Proposed Expert Have an Advanced Degree in Toxicology, Pharmacology, or a Related Field? If the Expert Is a Physician, Is He or She Board Certified in a Field Such as Occupational Medicine?
A graduate degree in toxicology demonstrates that the proposed expert has a substantial background in the basic issues and tenets of toxicology. Many universities have established graduate programs in toxicology. These programs are administered by the faculties of medicine, pharmacology, pharmacy, or public health.
Although most recent toxicology Ph.D. graduates have no other credentials, many highly qualified toxicologists are physicians or hold doctoral degrees
a link between smokeless tobacco and oral cancer. The defendant also moved to exclude evidence demonstrating that the nitrosamines and polonium-210 contained in the snuff are cancer-causing agents in some 40 different species of laboratory animals. The court denied both motions, finding:
There was no dispute that both nitrosamines and polonium-210 are present in defendant’s snuff products. Further, defendant conceded that animal studies have accurately and consistently demonstrated that these substances cause cancer in test animals. Finally, the Court found evidence based on experiments with animals particularly valuable and important in this litigation since such experiments with humans are impossible. Under all these circumstances, the Court found this evidence probative on the issue of causation.
Id. See also sources cited supra note 14.
in related disciplines (e.g., veterinary medicine, pharmacology, biochemistry, environmental health, or industrial hygiene). For a person with this type of background, a single course in toxicology is unlikely to provide sufficient grounding to develop expertise in the field.
A proposed expert should be able to demonstrate an understanding of the discipline of toxicology, including statistics, toxicological research methods, and disease processes. A physician without particular training or experience in toxicology is unlikely to have sufficient background to evaluate the strengths and weaknesses of toxicological research. Most practicing physicians have little knowledge of environmental and occupational medicine.112 Generally, physicians are quite knowledgeable about the identification of effects and their treatment. The cause of these effects, particularly if they are unrelated to the treatment of the disease, is generally of little concern to the practicing physician. Subspecialty physicians may have particular knowledge of a cause-and-effect relationship (e.g., pulmonary physicians have knowledge of the relationship between asbestos exposure and asbestosis),113 but most physicians have little training in chemical toxicology and lack an understanding of exposure assessment and dose–response relationships. An exception is a physician who is certified in medical toxicology as a subspecialty under the American Board of Medical Specialties’ requirements, based on substantial training in toxicology and successful completion of rigorous examinations, including recertification exams.114
112. For recent documentation of how rarely an occupational history is obtained, see B.J. Politi et al., Occupational Medical History Taking: How Are Today’s Physicians Doing? A Cross-Sectional Investigation of the Frequency of Occupational History Taking by Physicians in a Major US Teaching Center. 46 J. Occup. Envtl. Med. 550–55 (2004).
113. See, e.g., Moore v. Ashland Chem., Inc., 126 F.3d 679, 701 (5th Cir. 1997) (treating physician’s opinion admissible regarding causation of reactive airway disease); McCullock v. H.B. Fuller Co., 61 F.3d 1038, 1044 (2d Cir. 1995) (treating physician’s opinion admissible regarding the effect of fumes from hot-melt glue on the throat, where physician was board certified in otolaryngology and based his opinion on medical history and treatment, pathological studies, differential etiology, and scientific literature); Benedi v. McNeil-P.P.C., Inc., 66 F.3d 1378, 1384 (4th Cir. 1995) (treating physician’s opinion admissible regarding the causation of liver failure by mixture of alcohol and acetaminophen, based on medical history, physical examination, laboratory and pathology data, and scientific literature—the same methodologies used daily in the diagnosis of patients); In re Ephedra Prods. Liab. Litig., 478 F. Supp. 2d 624, 633 (S.D.N.Y. 2007) (opinion of treating physician will assist the trier of fact because a reasonable juror would want to know what inferences a treating physician would make); Morin v. United States, 534 F. Supp. 2d 1179, 1185 (D. Nev. 2005) (treating physician does not have sufficient expertise to offer opinion about whether exposure to jet fuel caused cancer in his patient).
Treating physicians also become involved in considering cause-and-effect relationships when they are asked whether a patient can return to a situation in which an exposure has occurred. The answer is obvious if the cause-and-effect relationship is clearly known. Often, however, this relationship is uncertain, and the physician must consider what advice is appropriate. In such situations, the physician will tend to give advice as though causality were established, both out of appropriate caution and because of concerns about medicolegal issues.
114. Before 1990, the American Board of Medical Toxicology certified physicians, but beginning in 1990, medical toxicology became a subspecialty board under the American Board of Emer-
Some physicians who are occupational health specialists also have training in toxicology. Knowledge of toxicology is particularly strong among those who work in the chemical, petrochemical, and pharmaceutical industries, in which the surveillance of workers exposed to chemicals is a major responsibility. Of the occupational physicians practicing today, only about 1000 have successfully completed the board examination in occupational medicine, which contains some questions about chemical toxicology.115
B. Has the Proposed Expert Been Certified by the American Board of Toxicology, Inc., or Does He or She Belong to a Professional Organization, Such as the Academy of Toxicological Sciences or the Society of Toxicology?
As of January 2008, more than 2000 individuals had received board certification from the American Board of Toxicology. To sit for the examination, the candidate must be involved full time in the practice of toxicology, including designing and managing toxicological experiments or interpreting results and translating them to identify and solve human and animal health problems. Diplomates must be recertified every 5 years. The Academy of Toxicological Sciences (ATS) was formed to provide credentials in toxicology through peer review only. It does not administer examinations for certification. Approximately 200 individuals are certified as Fellows of ATS.
gency Medicine, the American Board of Pediatrics, and the American Board of Preventive Medicine, as recognized by the American Board of Medical Specialties.
115. Clinical ecologists, another group of physicians, have offered opinions regarding multiple chemical hypersensitivity and immune system responses to chemical exposures. These physicians generally have a background in the field of allergy, not toxicology, and their theoretical approach is derived in part from classic concepts of allergic responses and immunology. This theoretical approach has often led clinical ecologists to find cause-and-effect relationships or low-dose effects that are not generally accepted by toxicologists. Clinical ecologists often belong to the American Academy of Environmental Medicine.
In 1991, the Council on Scientific Affairs of the American Medical Association concluded that until “accurate, reproducible, and well-controlled studies are available…multiple chemical sensitivity should not be considered a recognized clinical syndrome.” Council on Scientific Affairs, American Med. Ass’n, Council Report on Clinical Ecology 6 (1991). In Bradley v. Brown, 42 F.3d 434, 438 (7th Cir. 1994), the court considered the admissibility of an expert opinion based on clinical ecology theories. The court ruled the opinion inadmissible, finding that it was “hypothetical” and based on anecdotal evidence as opposed to scientific research. See also Kropp v. Maine School Adm. Union No. 44, 471 F. Supp. 2d 175, 181–82 (D. Me. 2007) (expert physician does not rely upon scientifically valid methodologies or data in reaching the conclusion that plaintiff is hypersensitive to phenol vapors in indoor air); Coffin v. Orkin Exterminating Co., 20 F. Supp. 2d 107, 110 (D. Me. 1998); Frank v. New York, 972 F. Supp. 130, 132 n.2 (N.D.N.Y. 1997). But see Elam v. Alcolac, Inc., 765 S.W.2d 42, 86 (Mo. Ct. App. 1988) (expert opinion based on clinical ecology theories admissible).
The Society of Toxicology (SOT), the major professional organization for the field of toxicology, was founded in 1961 and has grown dramatically in recent years. It now has 6300 members.116 Membership criteria are based either on peer-reviewed publications or on the active practice of toxicology. Physician toxicologists can join the American College of Medical Toxicology and the American Academy of Clinical Toxicologists. There are also societies of forensic toxicology, such as the International Academy of Forensic Toxicology. Other organizations in the field are the American College of Toxicology, for which experience in the active practice of toxicology is the major membership criterion; the International Society of Regulatory Toxicology and Pharmacology; and the Society of Occupational and Environmental Health. For membership, the last two organizations require only the payment of dues.
The success of academic scientists in toxicology, as in other biomedical sciences, usually is measured by the following types of criteria: the quality and number of peer-reviewed publications, the ability to compete for research grants, service on scientific advisory panels, and university appointments.
Publication of articles in peer-reviewed journals indicates an expertise in toxicology. The number of articles, their topics, and whether the individual is the principal or senior author are important factors in determining the expertise of a toxicologist.117
Most research grants from government agencies and private foundations are highly competitive. Successful competition for funding and publication of the research findings indicate competence in an area.
Selection for local, national, and international regulatory advisory panels usually implies recognition in the field. Examples of such panels are the NIH Toxicology Study Section and panels convened by EPA, FDA, WHO, and IARC. Recognized industrial organizations, including the American Petroleum Institute and the Electric Power Research Institute, and public interest groups, such as the Environmental Defense Fund and the Natural Resources Defense Council,
116. There are currently 21 specialty sections of SOT that represent the different specialty areas involved in understanding the wide range of toxic effects associated with exposure to chemical and physical agents. These sections include mechanisms, molecular biology, inhalation toxicology, metals, neurotoxicology, carcinogenesis, risk assessment, and immunotoxicology.
117. Examples of reputable, peer-reviewed journals are the Journal of Toxicology and Environmental Health; Toxicological Sciences; Toxicology and Applied Pharmacology; Science; British Journal of Industrial Medicine; Clinical Toxicology; Archives of Environmental Health; Journal of Occupational and Environmental Medicine; Annual Review of Pharmacology and Toxicology; Teratogenesis, Carcinogenesis and Mutagenesis; Fundamental and Applied Toxicology; Inhalation Toxicology; Biochemical Pharmacology; Toxicology Letters; Environmental Research; Environmental Health Perspectives; International Journal of Toxicology; Human and Experimental Toxicology; and American Journal of Industrial Medicine.
employ toxicologists directly and as consultants and enlist academic toxicologists to serve on advisory panels. Because of a growing interest in environmental issues, the demand for scientific advice has outgrown the supply of available toxicologists. It is thus common for reputable toxicologists to serve on advisory panels.
Finally, a university appointment in toxicology, risk assessment, or a related field signifies an expertise in that area, particularly if the university has a graduate education program in that area.
The authors greatly appreciate the excellent research assistance provided by Eric Topor and Cody S. Lonning.
The following terms and definitions were adapted from a variety of sources, including Office of Technology Assessment, U.S. Congress, Reproductive Health Hazards in the Workplace (1985); Casarett and Doull’s Toxicology: The Basic Science of Poisons (Curtis D. Klaassen ed., 7th ed. 2007); National Research Council, Biologic Markers in Reproductive Toxicology (1989); Committee on Risk Assessment Methodology, National Research Council, Issues in Risk Assessment (1993); M. Alice Ottoboni, The Dose Makes the Poison: A Plain-Language Guide to Toxicology (2d ed. 1991); and Environmental and Occupational Health Sciences Institute, Glossary of Environment Health Terms (1989).
absorption. The taking up of a chemical into the body orally, through inhalation, or through skin exposure.
acute toxicity. An immediate toxic response following a single or short-term exposure to an agent or dosing.
additive effect. When exposure to more than one toxic agent results in the same effect as would be predicted by the sum of the effects of exposure to the individual agents.
antagonism. When exposure to one toxic agent causes a decrease in the effect produced by another toxic agent.
benchmark dose. The benchmark dose is determined on the basis of dose–response modeling and is defined as the exposure associated with a specified low incidence of risk, generally in the range of 1% to 10%, of a health effect, or the dose associated with a specified measure or change of a biological effect.
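As a worked illustration, under a one-hit dose–response model (chosen here only because it has a closed form; the potency parameter is a hypothetical value, not data for any real agent), the benchmark dose at a 10% benchmark response can be computed directly:

```python
import math

# One-hit dose-response model: P(d) = 1 - exp(-q*d),
# where q is a hypothetical potency parameter (per mg/kg-day).
q = 0.05

def extra_risk(dose):
    """Probability of a response at the given dose under the one-hit model."""
    return 1.0 - math.exp(-q * dose)

# Benchmark dose at a 10% benchmark response, solved in closed form:
#   1 - exp(-q * BMD) = 0.10  =>  BMD = -ln(0.90) / q
bmd10 = -math.log(0.90) / q
print(f"BMD10 = {bmd10:.2f} mg/kg-day")
```

In regulatory practice the model is fitted to bioassay data and a statistical lower bound on the benchmark dose is typically used; this sketch shows only the defining calculation.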
bioassay. A test for measuring the toxicity of an agent by exposing laboratory animals to the agent and observing the effects.
biological monitoring. Measurement of toxic agents or the results of their metabolism in biological materials, such as blood, urine, expired air, or biopsied tissue, to test for exposure to the toxic agents, or the detection of physiological changes that are due to exposure to toxic agents.
biologically plausible theory. A biological explanation for the relationship between exposure to an agent and adverse health outcomes.
carcinogen. A chemical substance or other agent that causes cancer.
carcinogenicity bioassay. Limited or long-term tests using laboratory animals to evaluate the potential carcinogenicity of an agent.
chronic toxicity. A toxic response to long-term exposure or dosing with an agent.
clinical ecologists. Physicians who believe that exposure to certain chemical agents can result in damage to the immune system, causing multiple-
chemical hypersensitivity and a variety of other disorders. Clinical ecologists often have a background in the field of allergy, not toxicology, and their theoretical approach is derived in part from classic concepts of allergic responses and immunology. There has been much resistance in the medical community to accepting their claims.
clinical toxicology. The study and treatment of humans exposed to chemicals and the quantification of resulting adverse health effects. Clinical toxicology includes the application of pharmacological principles to the treatment of chemically exposed individuals and research on measures to enhance elimination of toxic agents.
compound. In chemistry, the combination of two or more different elements in definite proportions, which when combined acquire properties different from those of the original elements.
confounding factors. Variables that are related to both exposure to a toxic agent and the outcome of the exposure. A confounding factor can obscure the relationship between the toxic agent and the adverse health outcome associated with that agent.
differential diagnosis. A physician’s consideration of alternative diagnoses that may explain a patient’s condition.
direct-acting agents. Agents that cause toxic effects without metabolic activation or conversion.
distribution. Movement of a toxic agent throughout the organ systems of the body (e.g., the liver, kidney, bone, fat, and central nervous system). The rate of distribution is usually determined by the blood flow through the organ and the ability of the chemical to pass through the cell membranes of the various tissues.
dose, dosage. A product of both the concentration of a chemical or physical agent and the duration or frequency of exposure.
dose–response curve. A graphic representation of the relationship between the dose of a chemical administered and the effect produced.
dose–response relationships. The extent to which a living organism responds to specific doses of a toxic substance. The more time spent in contact with a toxic substance, or the higher the dose, the greater the organism’s response. For example, a small dose of carbon monoxide will cause drowsiness; a large dose can be fatal.
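A minimal sketch of such a relationship, using a logistic (Hill-type) curve with made-up parameters rather than data for any real compound:

```python
# Hypothetical logistic (Hill-type) dose-response curve.
# ed50 and hill are illustrative assumptions, not measured values.
ed50 = 10.0   # dose producing a half-maximal response
hill = 2.0    # steepness of the curve

def response(dose):
    """Fraction of the maximal response (0 to 1) at a given dose."""
    if dose <= 0:
        return 0.0
    return dose**hill / (ed50**hill + dose**hill)

# The response rises monotonically with dose.
for d in (1, 5, 10, 20, 50):
    print(f"dose {d:>3}: response {response(d):.2f}")
```

The curve is flat at low doses, rises through the half-maximal point at the assumed ED50, and saturates at high doses, mirroring the carbon monoxide example in the definition above.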
epidemiology. The study of the occurrence and distribution of disease among people. Epidemiologists study groups of people to discover the cause of a disease, or where, when, and why disease occurs.
epigenetic. Pertaining to nongenetic mechanisms by which certain agents cause diseases, such as cancer.
etiology. A branch of medical science concerned with the causation of diseases.
excretion. The process by which toxicants are eliminated from the body, including through the kidney and urinary tract, the liver and biliary system, fecal excretion, the lungs, sweat, saliva, and lactation.
exposure. The intake into the body of a hazardous material. The main routes of exposure to substances are through the skin, mouth, and lungs.
extrapolation. The process of estimating unknown values from known values.
good laboratory practice (GLP). Codes developed by the federal government in consultation with the laboratory testing industry that govern many aspects of laboratory standards.
hazard identification. In risk assessment, the qualitative analysis of all available experimental animal and human data to determine whether and at what dose an agent is likely to cause toxic effects.
hydrogeologists, hydrologists. Scientists who specialize in the movement of ground and surface waters and the distribution and movement of contaminants in those waters.
immunotoxicology. A branch of toxicology concerned with the effects of toxic agents on the immune system.
indirect-acting agents. Agents that require metabolic activation or conversion before they produce toxic effects in living organisms.
inhalation toxicology. The study of the effect of toxic agents that are absorbed into the body through inhalation, including their effects on the respiratory system.
in vitro. A research or testing methodology that uses living cells in an artificial or test tube system, or that is otherwise performed outside of a living organism.
in vivo. A research or testing methodology that uses living organisms.
lethal dose 50 (LD50). The dose at which 50% of laboratory animals die within days to weeks.
lifetime bioassay. A bioassay in which doses of an agent are given to experimental animals throughout their lifetime. See bioassay.
maximum tolerated dose (MTD). The highest dose of an agent to which an organism can be exposed without it causing death or significant overt toxicity.
metabolism. The sum total of the biochemical transformations that a chemical undergoes in an organism.
molecular toxicology. The study of how toxic agents interact with cellular molecules, including DNA.
multiple-chemical hypersensitivity. A physical condition whereby individuals react to many different chemicals at extremely low exposure levels.
multistage events. A model for understanding certain diseases, including some cancers, based on the postulate that more than one event is necessary for the onset of disease.
mutagen. A substance that causes physical changes in chromosomes or biochemical changes in genes.
mutagenesis. The process by which agents cause changes in chromosomes and genes.
neurotoxicology. A branch of toxicology concerned with the effects of exposure to toxic agents on the central nervous system.
no observable effect level (NOEL). The highest level of exposure to an agent at which no effect is observed. It is the experimental equivalent of a threshold.
no-threshold model. A model for understanding disease causation that postulates that any exposure to a harmful chemical (such as a mutagen) may increase the risk of disease.
one-hit theory. A theory of cancer risk in which each molecule of a chemical mutagen has a possibility, no matter how tiny, of mutating a gene in a manner that may lead to tumor formation or cancer.
pharmacokinetics. A mathematical model that expresses the movement of a toxic agent through the organ systems of the body, including to the target organ and to its ultimate fate.
potentiation. The process by which the addition of one agent, which by itself has no toxic effect, increases the toxicity of another agent when exposure to both agents occurs simultaneously.
reproductive toxicology. The study of the effect of toxic agents on male and female reproductive systems, including sperm, ova, and offspring.
risk assessment. The use of scientific evidence to estimate the likelihood of adverse effects on the health of individuals or populations from exposure to hazardous materials and conditions.
risk characterization. The final step of risk assessment, which summarizes information about an agent and evaluates it in order to estimate the risks it poses.
safety assessment. Toxicological research that tests the toxic potential of a chemical in vivo or in vitro using standardized techniques required by governmental regulatory agencies or other organizations.
structure–activity relationships (SAR). A method used by toxicologists to predict the toxicity of new chemicals by comparing their chemical structures with those of compounds with known toxic effects.
synergistic effect. When two toxic agents acting together have an effect greater than that predicted by adding together their individual effects.
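The additive, antagonistic, and synergistic cases defined above can be made concrete with toy numbers (hypothetical effect magnitudes in arbitrary units, not real data):

```python
# Toy effect magnitudes for two agents acting alone (arbitrary units).
effect_a = 2.0
effect_b = 3.0
additive_prediction = effect_a + effect_b  # the simple sum: 5.0

def classify(joint_effect, predicted=additive_prediction, tol=1e-9):
    """Label a joint effect relative to the additive prediction."""
    if abs(joint_effect - predicted) <= tol:
        return "additive"
    return "synergistic" if joint_effect > predicted else "antagonistic"

print(classify(5.0))   # joint effect equals the sum of individual effects
print(classify(20.0))  # joint effect far exceeds the sum
print(classify(1.0))   # one agent diminishes the other's effect
```

A real-world analogue of the synergistic case is the combined effect of asbestos exposure and smoking on lung cancer risk, which exceeds the sum of the separate risks.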
target organ. The organ system that is affected by a particular toxic agent.
target-organ dose. The dose to the organ that is affected by a particular toxic agent.
teratogen. An agent that changes eggs, sperm, or embryos, thereby increasing the risk of birth defects.
teratogenic. The ability to produce birth defects. (Teratogenic effects do not pass to future generations.) See teratogen.
threshold. The level above which effects will occur and below which no effects occur. See no observable effect level.
toxic. Of, relating to, or caused by a poison—or a poison itself.
toxic agent or toxicant. An agent or substance that causes disease or injury.
toxicology. The science of the nature and effects of poisons, their detection, and the treatment of their effects.
A Textbook of Modern Toxicology (Ernest Hodgson ed., 4th ed. 2010).
Casarett and Doull’s Toxicology: The Basic Science of Poisons (Curtis D. Klaassen ed., 7th ed. 2007).
Committee on Toxicity Testing and Assessment of Environmental Agents, National Research Council, Toxicity Testing in the 21st Century: A Vision and a Strategy (2007).
Environmental Toxicants (Morton Lippmann ed., 3d ed. 2009).
Patricia Frank & M. Alice Ottoboni, The Dose Makes the Poison: A Plain-Language Guide to Toxicology (3d ed. 2011).
Genetic Toxicology of Complex Mixtures (Michael D. Waters et al. eds., 1990).
Human Risk Assessment: The Role of Animal Selection and Extrapolation (M. Val Roloff ed., 1987).
In Vitro Toxicity Testing: Applications to Safety Evaluation (John M. Frazier ed., 1992).
Michael A. Kamrin, Toxicology: A Primer on Toxicology Principles and Applications (1988).
Frank C. Lu, Basic Toxicology: Fundamentals, Target Organs, and Risk Assessment (4th ed. 2002).
National Research Council, Biologic Markers in Reproductive Toxicology (1989).
Alan Poole & George B. Leslie, A Practical Approach to Toxicological Investigations (1989).
Principles and Methods of Toxicology (A. Wallace Hayes ed., 5th ed. 2008).
Joseph V. Rodricks, Calculated Risks (2d ed. 2006).
Short-Term Toxicity Tests for Nongenotoxic Effects (Philippe Bourdeau et al. eds., 1990).
Toxic Interactions (Robin S. Goldstein et al. eds., 1990).
Toxic Substances and Human Risk: Principles of Data Interpretation (Robert G. Tardiff & Joseph V. Rodricks eds., 1987).
Toxicology (Hans Marquardt et al. eds., 1999).
Toxicology and Risk Assessment: Principles, Methods, and Applications (Anna M. Fan & Louis W. Chang eds., 1996).