10
Case Study 3: Genetically Modified Organisms

AN OVERVIEW OF RISK ASSESSMENT PROCEDURES APPLIED TO GENETICALLY ENGINEERED CROPS

PETER KAREIVA and MICHELLE MARVIER

Department of Zoology, University of Washington

The commercial production of genetically engineered crops has prompted countries around the world to adopt risk assessment procedures for evaluating the safety of transgenic cultivars. Most concern has been directed at the risk that a genetically modified crop may itself be made more weedy as a result of its recombinant trait, or may, through hybridization and introgression, contribute genes to a wild relative, consequently making the related plant more weedy (reviewed in Williamson, 1993; Rissler and Mellon, 1996; Bergelson et al., in press). Additional risks include the environmental fate of plant products (such as degradation versus accumulation of novel endotoxins in soils) and altered agricultural practices (such as increased application of herbicides; Rissler and Mellon, 1996). Although these ecological risks are widely thought to be on average minimal, the tremendous variety of plant attributes that are potentially modifiable renders blanket pronouncements of safety untenable. Moreover, because experience with transgenic crops is still limited, the formal development of risk assessment procedures faces the challenge of anticipating problems with traits that have not yet been developed let alone patented or commercialized.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.



Incorporating Science, Economics, and Sociology in Developing Sanitary and Phytosanitary Standards in International Trade: Proceedings of a Conference

In spite of striking cultural differences regarding willingness to accept risk, countries around the world have converged on three general principles of risk assessment for transgenic crops: containment, the principle of familiarity, and a reliance on small-scale experiments. We discuss each of these approaches and their limitations. Finally, in recognition of the shortcomings of existing screening procedures, we end with a recommendation that greater consideration be given to postrelease monitoring of transgenic plantings.

CONTAINMENT

The most straightforward way to manage the risk of a biological organism would be simply to contain it, preventing it from spreading beyond its intended release site. For instance, the initial experiments with genetically engineered ice-minus bacteria in Northern California were subjected to elaborate security measures, including fences and broad isolation zones. In its 1989 report on the field testing of genetically modified organisms, the National Research Council (NRC) offered the optimistic conclusion that "routinely used methods for plant confinement offer a variety of options for limiting both gene transfer by pollen and direct escape of the genetically modified plant" (NRC, 1989, p. 36). If transgenic plants and genes could in fact be contained, decisions regarding their risks would be greatly simplified. On the contrary, however, data from field trials clearly demonstrate that this initial faith in the feasibility of containment was overly optimistic. For some species, hybridization and transfer of genes to wild relatives can occur very rapidly (e.g., Mikkelsen et al., 1996).
In addition, direct field experiments indicate that, although most pollen moves only short distances from source plants, a measurable quantity travels vast distances, making containment of transgenic pollen highly unlikely (e.g., Kareiva et al., 1991; Kareiva et al., 1994; Lavigne et al., 1998). Potential methods of containment include the use of barren zones around crops and the planting of trap plants in border rows. Unfortunately, barren zones may actually increase the mean distance or amount of gene flow out of plots (Manasse, 1992; Morris et al., 1994). Although the use of border rows to trap pollen has proven more successful in reducing the extent of gene movement, the borders must be substantially larger than the transgenic fields, making their use impractical for agronomic-scale plantings (Hokanson et al., 1997). Even in cases where gene transfer is an extremely infrequent event, the notion that transgenes could ever be completely contained remains indefensible. Furthermore, with large-scale commercial production, the sources of transgenes are so plentiful and the opportunities for exchange so widespread that containment cannot be considered a tenable risk management procedure. It is noteworthy that regulations in the United States and in the European Union do not in any way rely on containment as part of their risk management procedures for commercial products. In these jurisdictions, containment practices are required only for small-scale experiments during the research and development stage of novel cultivar breeding and genetic modification.

THE PRINCIPLE OF FAMILIARITY

Risk assessments often rely on comparisons between transgenic plants and the more familiar unmodified form of the plant or closely related plant species. The Organization for Economic Cooperation and Development (OECD) describes this principle as follows: whether standard cultural practices would be adequate to manage a relatively unfamiliar new plant line or cultivar can be assessed based on familiarity with a closely related line, in conjunction with results from laboratory and preliminary field work with the new line (Anonymous, 1993). This principle is not intended to imply that "familiarity means safety," although implementation of the policy frequently seems to embody such a deduction. For example, it is often assumed that if experience with familiar plants has been broad and generally positive (e.g., the unmodified plant and its close relatives are not weeds), then the transgenic plant is similarly unlikely to pose a substantial risk. However, field experiments have clearly demonstrated that genetic modification may result in a number of incidental changes to the plant's original traits and that extrapolations from the familiar to the unfamiliar can be severely misguided. For example, the common weed Arabidopsis thaliana is a highly selfing species for which the prospects of gene transfer would generally be considered very low. Yet field experiments with transgenic Arabidopsis showed that the transgenic plants, for some unknown reason, actually outcrossed at a rate of 6 percent, nearly 20 times more frequently than unmodified Arabidopsis (Bergelson et al., 1998). The authors concluded (p. 25) that "genetic engineering can substantially increase the probability of transgene escape, even in a species considered to be almost completely selfing."
Although regulations in some nations advise that the required degree of scrutiny should depend on the traits of the parent organism (e.g., Genetic Manipulation Advisory Committee, 1998, Appendix 5), transgenic plants may exhibit substantially altered life histories, and "familiarity with these [parental] species as useful agricultural and horticultural plants may be irrelevant and misleading" (Williamson, 1994). A second problem with the principle of familiarity arises when the risk of a recombinant trait is compared with that of a familiar, seemingly similar trait that occurs naturally in unmodified plants. The assumption is that a novel trait similar to traits seen elsewhere is unlikely to pose new risks. The problem is that familiarity with a trait is in the eye of the beholder. An especially good example involves the gene derived from Bacillus thuringiensis (Bt) for endotoxin production, which provides a "natural" insecticide. Because plants in general produce compounds that act as antiherbivore agents, and plant breeders have a long tradition of selecting plant varieties to increase their resistance to herbivores, some might argue that Bt endotoxin production is "familiar" and therefore probably "safe." On the other hand, when the gene for Bt endotoxin is inserted into canola, the transgenic
canola acquires a trait that it has never before possessed: a trait that protects it, to varying degrees, from a very broad range of caterpillar species. The risks associated with such a trait should not be assessed on the basis of subjective opinions regarding its familiarity or novelty, but rather should rely on data from experimental trials. A third tenuous extrapolation concerns the long-term effects of repeated plantings of genetically modified crops on soil ecosystems. For example, although Bt endotoxins have previously been sprayed on crops as a form of organic pest control, we have no experience with large quantities of Bt-laden crops decomposing in soils year after year. Experiments have indicated that Bt residues in cotton leaves persisted for at least 56 days after burial in the soil (Palm et al., 1996). Similarly, although small-scale laboratory experiments indicate no harmful impacts of proteinase inhibitors (another transgenic trait with insecticidal activity), longer-term experiments using natural soil communities suggest that these compounds may have surprising impacts on microbial respiration and soil organisms (Donegan et al., 1997). Extrapolations from the familiar to the unfamiliar of the type described above are common, but improper, applications of the principle of familiarity. Rather, the intention of the principle is that familiarity should provide a context for measuring risk; for example, the weediness of a genetically modified plant could be compared with that of the familiar, unmodified form. In fact, U.S. regulations require that before a transgenic crop is deregulated, it must be shown that the genetically engineered plant "is unlikely to pose a greater plant pest risk than the unmodified organisms from which it was derived" (U.S.
Department of Agriculture [USDA], 1992). Although surprisingly few of the U.S. petitions for nonregulated status approved prior to 1995 performed such a comparison (Purrington and Bergelson, 1995), experiments comparing the performance of transgenic plants with unmodified source plants should be a cornerstone of the risk assessment process. Thus, rather than providing evidence regarding risk in itself, familiarity should supply a benchmark or standard against which the risks posed by modified plants can be compared.

SMALL-SCALE RISK ASSESSMENT EXPERIMENTS

Most countries require some degree of "testing" to quantify risks if a crop is modified in a way that seems ecologically significant. In the United States, the earliest petitions to deregulate transgenic crops tended to be deficient in actual field experiments and instead relied upon greenhouse tests or simple literature surveys (Parker and Kareiva, 1996, Table 1). Although disputes have arisen repeatedly between environmental groups and industry over the appropriateness of various experimental designs (e.g., Rissler and Mellon, 1996, comment on Upjohn's transgenic squash petition, Animal and Plant Health Inspection Service [APHIS] Docket No. 92-127-1) and experimental risk assessments have generally been severely flawed (Purrington and Bergelson, 1995), reliance upon field experiments has grown steadily in recent years. Currently, in the United

States, Europe, and Australia, field experiments aimed at evaluating the potential weediness of transgenic crops are a mandatory part of the approval process (USDA, 1992; European Communities Committee, 1998; Genetic Manipulation Advisory Committee, 1998). Field experiments are, in fact, a valuable tool: if a transgenic crop behaved like an aggressive weed in these experiments, it would be a clear signal that the plant should be tightly regulated and perhaps not allowed for commercial production. However, while the experimental detection of weediness provides a clear sign of danger, the failure to detect weediness does not lead to such a clear-cut conclusion. Determination of "safety" is more complicated because we must consider the experiment's capacity to detect weediness if it in fact exists. Unfortunately, a one- to two-year field assessment in small plots over a limited region may fail to reveal any enhancement of weediness when such an enhancement does occur under infrequent but important conditions. Simulations demonstrate that field tests for assessing a plant's enhanced invasiveness are prone to high rates of error unless the trials are repeated at multiple sites and over at least several years (Kareiva et al., 1996). Similarly, the potential risks associated with herbivore resistance genes can be assessed accurately only when trials are performed at multiple sites that offer potentially different environments for plant growth as well as different background densities of herbivores (Marvier and Kareiva, 1999).
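The statistical logic behind the multisite, multiyear recommendation can be illustrated with a simple back-of-the-envelope model (this is an illustrative sketch, not the simulation reported in Kareiva et al., 1996; the function name and the per-site-year probability `p_trigger` are assumptions chosen for illustration):

```python
def detection_probability(n_sites: int, n_years: int, p_trigger: float = 0.05) -> float:
    """Chance that a field trial observes enhanced weediness, assuming the
    enhancement is expressed only under rare conditions that arise
    independently in each site-year with probability p_trigger
    (an assumed, illustrative value, not an empirical estimate)."""
    # The trial misses the enhancement only if no site-year
    # happens to hit the rare triggering conditions.
    return 1.0 - (1.0 - p_trigger) ** (n_sites * n_years)

# One site followed for two years versus five sites followed for four years:
short_trial = detection_probability(1, 2)   # about a 10% chance of detection
large_trial = detection_probability(5, 4)   # about a 64% chance of detection
print(short_trial, large_trial)
```

Under these assumptions, detection power grows rapidly with the number of independent site-years, which is why single-site, short-duration trials are especially prone to false verdicts of "safety."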
A further weakness of short-term experiments is that there will likely be substantial time lags between the introduction of a transgenic plant and the emergence of ecological problems related to its introduction, such as the escape of transgenes into wild relatives or the naturalization of transgenic crops. Long time lags are inherent features of many biological invasions. For example, a survey of historical records of past invasions by weeds in the northwestern United States indicated that the median time lag between the first record of a weed and the onset of widespread infestation was on the order of 30–50 years (Marvier et al., 1999). In addition, time lags between the introduction of ornamental woody plants and their escape into the wild in Germany are on the order of 150 years (Kowarik, 1995). Although examples from the "exotic species" literature are often rejected in the biotechnology arena, it is entirely reasonable to expect that invasions of transgenes will entail extensive time lags simply because invasion is such an unlikely event, probably depending on the chance concordance of a suite of favorable conditions. The potential for time lags means that short-term experiments are likely to support a verdict of "safety" when in fact such a determination is not warranted.

MONITORING AND A PRECAUTIONARY APPROACH

Unfortunately, containment of transgenic plants or their genes is not a viable option, "familiarity" with related plants or similar traits cannot be extrapolated accurately to the transgenic plants themselves, and a few experiments under a narrow range of conditions cannot provide acceptable

proof of safety. In light of the tremendous uncertainty of risk assessment, the European Community has called for amendments to Directive 90/220/EEC on the deliberate release of genetically modified organisms that would require vigilant monitoring of transgenic commercial plantings after a marketing consent has been granted (European Communities Committee, 1998), with the idea that dangerous escapes might be detected before undue damage has been done. This approach could prove feasible if populations of problematic transgenic crops (or transgenic weeds) could be sufficiently confined and then controlled with herbicides. Long-term, large-scale monitoring of transgenic plantings provides both an important research opportunity (we can learn a great deal about temporal and spatial variability as well as the occurrence of rare events) and a valuable means of minimizing risk. Although caution and tenacious monitoring are clearly warranted for certain transgenic crops, it will be hard to exercise that caution given the current pressure to ease regulations on the basis of a safe record to date. It should, however, be considered that, although monitoring is an expensive enterprise, the cost and difficulty of controlling a weed population are greatly exacerbated once a weed becomes well established. Thus, investment in monitoring programs that strive toward the earliest possible detection and elimination of transgenic weeds will likely prove cost effective in the long run. More generally, a reliance on monitoring when uncertainty remains substantial in the face of empirical data may be an advisable principle for a wide variety of risk assessments. Because of evolution and the role of chance in biological dynamics, monitoring may need to be a mainstay of any ecological risk assessment.

REFERENCES

Anonymous. 1993. Safety consideration for biotechnology: Scale-up of crop plants. Paris: Organization for Economic Cooperation and Development (OECD).
Bergelson, J., C.B. Purrington, and G. Wichmann. 1998. Promiscuity in transgenic plants. Nature 395:25.
Bergelson, J., J. Winterer, and C.B. Purrington. In press. Ecological impacts of transgenic crops. In Biotechnology and Genetic Engineering of Plants. V. Malik, ed. Oxford, U.K.: Oxford University Press.
Donegan, K.K., R.J. Seidler, V.J. Fieland, D.L. Schaller, C.J. Palm, L.M. Ganio, D.M. Cardwell, and Y. Steinberger. 1997. Decomposition of genetically engineered tobacco under field conditions: persistence of the proteinase inhibitor I product and effects on soil microbial respiration and protozoa, nematode, and microarthropod populations. Journal of Applied Ecology 34:767–777.
European Communities Committee. 1998. Second Report: EC Regulation of Genetic Modification in Agriculture. 6378/98/98 (COM(98) 85) Proposal for a European Parliament and Council Directive amending Directive 90/220/EEC on the deliberate release into the environment of genetically modified organisms.
Genetic Manipulation Advisory Committee. 1998. Guidelines for the Deliberate Release of Genetically Manipulated Organisms: Field Trials and General Release. Canberra, Australia.

Hokanson, S.C., R. Grumet, and J. Hancock. 1997. Effect of border rows and trap/donor ratios on pollen-mediated gene movement. Ecological Applications 7:1075–1081.
Kareiva, P., R. Manasse, and W. Morris. 1991. Using models to integrate data from field trials and estimate risks of gene escape and gene spread. Pp. 31–42 in Biological Monitoring of Genetically Engineered Plants and Microbes. D.R. MacKenzie and S.C. Henry, eds. Bethesda, MD: Agricultural Research Institute.
Kareiva, P., W. Morris, and C.M. Jacobi. 1994. Studying and managing the risk of cross-fertilization between transgenic crops and wild relatives. Molecular Ecology 3:15–21.
Kareiva, P., I.M. Parker, and M. Pascual. 1996. Can we use experiments and models in predicting the invasiveness of genetically engineered organisms? Ecology 77:1670–1675.
Kowarik, I. 1995. Time lags in biological invasions with regard to the success and failure of alien species. Pp. 15–38 in Plant Invasions: General Aspects and Special Problems. P. Pysek, K. Prach, M. Rejmanek, and M. Wade, eds. Amsterdam: SPB Academic Publishing.
Lavigne, C., E.K. Klein, P. Vallee, J. Pierre, B. Godelle, and M. Renard. 1998. A pollen-dispersal experiment with transgenic oilseed rape. Estimation of the average pollen dispersal of an individual plant within a field. Theoretical and Applied Genetics 96:886–896.
Manasse, R. 1992. Ecological risks of transgenic plants: effects of spatial dispersion on gene flow. Ecological Applications 2:431–438.
Marvier, M.A. and P. Kareiva. 1999. Extrapolating from field experiments that remove herbivores to population-level effects of herbivore resistance transgenes. Pp. 57–64 in Proceedings of a Workshop on: Ecological Effects of Pest Resistance Genes in Managed Ecosystems. P.L. Traynor and J.H. Westwood, eds. Blacksburg, VA: Information Systems for Biotechnology.
Marvier, M.A., E. Meir, and P.M. Kareiva. 1999. How do the design of monitoring and control strategies affect the chance of detecting and containing transgenic weeds? In Risks and Prospects of Transgenic Plants: Where Do We Go From Here? K. Ammann and Y. Jacot, eds. Basel: Birkhäuser Press.
Mikkelsen, T.R., B. Andersen, and R.B. Jorgensen. 1996. The risk of crop transgene spread. Nature 380:31.
Morris, W.F., P.M. Kareiva, and P.L. Raymer. 1994. Do barren zones and pollen traps reduce gene escape from transgenic crops? Ecological Applications 4:157–165.
National Research Council (NRC). 1989. Field Testing Genetically Modified Organisms: Framework for Decisions. Washington, DC: National Academy Press.
Palm, C.J., D.L. Schaller, K.K. Donegan, and R.J. Seidler. 1996. Persistence in soil of transgenic plant produced Bacillus thuringiensis var. kurstaki delta-endotoxin. Canadian Journal of Microbiology 42:1258–1262.
Parker, I.M. and P. Kareiva. 1996. Assessing the risks of invasion for genetically engineered plants: acceptable evidence and reasonable doubt. Biological Conservation 78:193–203.
Purrington, C.B. and J. Bergelson. 1995. Assessing weediness of transgenic crops: industry plays plant ecologist. Trends in Ecology and Evolution 10:340–342.
Rissler, J. and M. Mellon. 1996. The Ecological Risks of Engineered Crops. Cambridge, MA: MIT Press.
U.S. Department of Agriculture (USDA). 1992. Federal Register 57:53036–53043.
Williamson, M. 1993. Risks from the release of GMOs: ecological and evolutionary considerations. Environment Update 1:5–9.

Williamson, M. 1994. Community response to transgenic plant release: predictions from the British experience of invasive plants and feral crop plants. Molecular Ecology 3:75–79.

APPROACHES TO RISK AND RISK ASSESSMENT1

PAUL B. THOMPSON

Department of Philosophy, Purdue University

Risk analysis is typically understood as a wholly technical or scientific process. Yet the very concept of risk usually implies that some class of possible events has been judged to be adverse, or that the very indeterminacy of future events is itself adverse. As such, risk analysis cannot be wholly based on science. At best, science can characterize the mechanisms that would lead to events such as mortality or morbidity, and can assign a probability or likelihood to their occurrence. Still, the badness or adversity associated with death and disease is based not on science but on morality. Nature is indifferent to death, and it is only when the perspective of human striving is introduced that death can be understood in terms of risk. Risks to health seem amenable to a purely scientific characterization because the moral judgments involved in this issue are among the least controversial. But even these judgments become contested at the margins. Ideas of "health" shift from "absence of disease" to "enhanced capacities," and the capacity to control (and hence assume responsibility for) future events is reflected in the judgment that a particular practice is "risky." As such, philosophy and ethical theory have an inevitable place in the characterization and evaluation of risks. Within the social sciences, the normative and philosophical dimensions of risk are often incorporated into the characterization of rationality.
For example, cost-benefit analysis (discussed in Chapter 2) frames rational choice through evaluating and comparing the likely outcomes from each of two or more options. Cost-benefit analysis takes on ethical significance when rational

1   Author's note: The following is a lightly edited transcript of my workshop presentation, which was an overview of my own research as it bears on the case of genetically modified foods. It was not intended to be a comprehensive or representative discussion of philosophical work on risk assessment or on biotechnology. The orientation of the chapter is thus personal, and citations are strongly biased toward my own publications. There has been an ongoing discussion of this topic in the popular press and on the Internet. Thompson (1997a) provides a more balanced and fully referenced discussion of philosophical work on biotechnology.

optimization of expected values is presumed to be the decision rule that should guide decision making with respect to regulatory standard setting or investment of public resources. Philosophical research on risk has tended to take one of two tacks with respect to this conception of rational optimization. Philosophers who endorse the basic strategy of rational optimization have tended to be critical of scientists' characterizations of probability and uncertainty (see Shrader-Frechette, 1991; Wachbroit, 1991). Other philosophers are critical of rational optimization and cost-benefit analysis, and have argued that public choices should focus on maintaining a basic structure of rights that preserves conditions of fairness among private decision makers (see Sagoff, 1985; MacLean, 1990). For the case study presented by Peter Kareiva and Michelle Marvier, I will introduce a different set of philosophical concerns that focus on ways of framing (or interpreting) the risks involved with genetically engineered food. One of the ground rules I apply in my work is not to question the consensus assessment among scientists about the probability and degree of harm associated with genetic engineering. Sometimes it is difficult to figure out exactly what that consensus is, but to the extent that I can discern it, I never question it. That is not my business as a philosopher. What I am interested in is the divergence between that assessment, however it is set up, and that of the broader public (or at least some segments of the broader public) with respect to the riskiness of genetically engineered food. There are, of course, differing opinions among scientists.
Nonetheless, it has been and still is true that the broader public (and particularly the specifically concerned public) understands genetically engineered food to be riskier than the scientific consensus would suggest. My particular project has been to try to understand the rational basis for that difference. I am not interested in irrational bases for difference. I am not interested, for example, in purely nonrational judgments of taste. And in some sense, I am not even interested in culture as an explanatory variable for those differences, although I do believe that culture has a tremendous influence on the way that people understand risk and get information about risk. I have been strongly influenced by cognitive work on risk undertaken by people such as Paul Slovic and, before that, Tversky and Kahneman (1982). But unlike them, my framework is rational choice, and I am interested in the rational basis for deviations between a benchmark notion of what the risk is, derived from scientific consensus, and other notions that might be held by the public. Furthermore, my project is a philosophical rather than an empirical one: I am attempting to make sense of the debate over genetically modified organisms in a manner that exposits and exemplifies a conception of rationality. I am not attempting to make empirical claims about human psychology or motivation. The philosophical work that I have done suggests testable empirical hypotheses, but I do not represent my work as making empirically verified claims. My philosophical approach to the subject at hand is nonstandard in that I do not assume that probability and harm, or probability and negative outcome, are essential characteristics of risk. I have built my work on risk by looking at the

way the word risk is actually used in Western languages (Thompson, 1987 and 1991; Thompson and Dean, 1996). I look for the meanings of the word "risk," the things it could possibly mean in a grammatical sentence. Although in many instances it could and does mean something like "the probability of harm," that certainly does not account for all of the legitimate uses of the word. So I would argue that we need a broader notion of risk, one that sees it as having multiple dimensions. This is a standard view in the risk perception and cognitive science literature (Slovic, 1987). My hypothesis is that although genetic engineering tends to score fairly low with respect to probability and harm, it tends to score fairly high with respect to some of these additional dimensions of risk. In this paper I discuss two dimensions of risk: the first is information reliability, and the second is an ambiguity between event-predicting and act-classifying notions of risk (see Thompson, 1997a; 1997b; 1999). First, information reliability. Whenever anyone does work on risk, one of the factors to be considered is how reliable the information is. We tend to discount information that we believe to be unreliable. In the first part of this chapter, Peter Kareiva and Michelle Marvier discuss the value judgments that scientists apply within their research and within their community regarding how much discounting to place on information. Here I lay out a spectrum running from highly reliable information that is true (although in some respects "true" is a bad, possibly misleading characterization) to highly unreliable information, which is not just false but also mendacious. How do people sort out whether information is highly reliable or highly unreliable?
Clearly one of the things that people consider in evaluating reliability is the context in which the information is presented to them. As a matter of fact, I would argue that the discourse context—the kind of speech that is being performed, the kind of claims that are being made, the purposes behind the making of those claims, and the rules under which claims can be put forward and evaluated—all influence the extent to which people regard information as reliable. Corresponding to highly reliable information we can postulate the ideal discourse situation, a notion (and a long story in itself) borrowed from the work of Habermas (1990). In the ideal discourse situation, everyone is trying to figure out what is true. There are rules of argument and ethics, and there are possibilities of reproducing or testing results. So there is a sense, at least, in which the way science is supposed to work fits the ideal discourse situation, and it is clear that people like Habermas who have worked this out have science in mind when they talk about ideal discourse. At the opposite extreme there is strategic discourse. Purely strategic discourse is a situation in which speakers do not care whether a claim is true or false. Strategic speakers only want you to believe something, or to act on the basis of it, or to accept it as true, because it happens to suit some particular interest of theirs at the moment. My paradigm example of strategic discourse in some of my writings is buying a used car. Not all used car dealers
are bad, of course, but the metaphor still strikes a chord. The used car dealer is a cultural icon—we just do not believe anything that a used car dealer tells us. There is a rational tendency to regard a situation as more risky (like buying a used car) to the extent that we see it moving down a scale toward more strategic considerations and toward more circumstances in which the information that we get is expected to be unreliable. My conclusion would be that risk increases to the extent that one is moving down the information-reliability scale. We tend to think of buying a used car as risky. There is some sense in which the objective facts about the probability that the car is going to break down are quite independent of whether the person selling us the car is with a firm that we trust and so on. But we will interpret the purchase of the car and the activity of buying the car as more risky based in part on this information-reliability factor. So this is one dimension in which there is a tremendous difference between the public's position and the position of the scientific community, including the regulatory community. The difference is that, for the most part, the scientific community's information about risk comes from an ideal discourse situation. As scientists, we may not get quite as close to an ideal discourse as we might like in large conference settings, but it is far closer to an ideal discourse setting than the circumstances in which members of the public often acquire risk information. Therefore, it is, in fact, quite rational to regard information that filters through strategic channels as questionable.
In other words, if genetic engineering is claimed to be safe in a strategic situation, someone might actually interpret that claim to mean that it is therefore more dangerous, precisely because it is claimed to be safe. If it is claimed to be dangerous in a strategic situation, one might move in the other direction and think that it must therefore be safe. Again, I will not speculate too much on whether and how much this explains European versus North American differences. But it may well be that, partly because the issue has come to Europe as part of strategic trade negotiations, there is a tendency there to see these as more strategic claims than in the United States. The second issue that I want to point out is a bit more contentious and a bit more complex. There is an ambiguity in the concept of risk, which I am systematizing here as an ambiguity between event-predicting and act-classifying. If we look at the way that people talk about risks in real life, in a nonscientific context, often what they mean is exactly what scientists mean: some function of the probability of events and the value or harm associated with those events. But there are many other contexts in which that cannot be what is meant. To summarize a long argument (Thompson, 1991 and 1995), remember that the word "risk" is also a verb. And words like "risky" and "risking" pertain much more to the verb form of the word than to the noun form. I defy anyone to translate probability and harm into a verb. When someone risks something, they are doing something. There is some connotation of action or activity that is implicit
whenever the word risk is used as a verb. There is no connotation of action that is implicit when the word risk is used to mean a probability and an outcome. Furthermore, if you perform the thought experiment, you will have a lot of trouble forming a meaningful, grammatically correct English sentence in which the subject that risks, the subject of a risk sentence, is not an intentional agent. By that I mean a human being or a group. We attribute intentionality to corporations and countries all the time. Sometimes we attribute it to animals. We do not attribute it often to plants and trees, and we certainly do not attribute it to mountains and ecosystems; it just does not make sense to say that a tree risked its livelihood by growing in a particular place. That starts to sound like anthropomorphism. So there is an important part of the grammar of risk that picks out actions performed by intentional agents. I am suggesting that, in the spirit of the kind of heuristics work that has been done by Tversky, Kahneman, and Slovic, we should understand this other sense of risk, what I call the act-classifying sense of risk, as a kind of heuristic. When we use the word risk in these contexts, we are picking out a class of actions. We are picking out a class of things that either people or organizations do. Under this definition, risks are actions that call for some sort of special consideration. Next I want to discuss heuristics as a kind of cognitive filtering. When we call something a risk, we are saying that it deserves more consideration. We need to give it some thought. We need to do something with respect to it. And when we do not call something a risk, when we do not call it risky, we just go ahead and do it.
These would be fairly routine, ordinary, habitual things that pass through the cognitive filter without detection. This cognitive filter may be culturally based or psychologically based. It is a way of telling us when to dedicate more resources, in the sense of time, energy, and intellectual activity, or (socially) in terms of money to obtain information, write reports, or hold committee meetings. It is a filter that tells us when it is important to do that and when it is not, because we otherwise tend to rely on habit, routine, or ordinary activities. There is a link between intentionality and the cognitive-filtering function because, at least historically (though maybe not anymore), there has been very little point in devoting special attention to things that we cannot do anything about. So we attend to actions where, had we acted otherwise, things would have been different, or where, by doing something else, we might avoid a certain type of harm. We do not lump generic natural hazards, earthquakes, floods, tornadoes, and so on, into that "could have acted otherwise" category. So there is a sense in which, in this way of thinking about risk, things such as freak accidents and acts of God, as well as the background of hazards that characterizes all of our daily activities, are not considered to be risks. Clearly accidents have some probability of harm associated with them, but they are not picked out by the cognitive filter that is associated with the word risk in an ordinary context. I want to make a final point. Many times when people say that there is no risk associated with something, scientists interpret that as meaning that there is zero probability of harm. However, few people believe that there is zero
probability of harm associated with any activity. What is going on is that when someone claims that "there is no risk," they are saying that the thing has not made it through their cognitive filter. It is something that we do not devote any special attention to. We just keep doing what we have always been doing. So a tension arises between the way that risk assessment scientists talk about risk and this other notion of risk that is still very much alive in public discourse. Note that intention is irrelevant to the probability-and-harm conception of risk, yet it is highly relevant to the cognitive-filtering sense of risk. When we start out with the event-predicting sense of risk, we are already involved in a process of deliberative optimizing. We want to know the probabilities and the level of harm because we are, at least at some level, making a risk-benefit trade-off decision. By deliberative, I mean that we are consciously thinking about options, we are consciously making a comparison, and we are, at least to some degree, consciously applying a decision rule about which way to go. We are doing very little consciously at the heuristic, or cognitive-filtering, level. This is the type of thing that happens before something even emerges in our worldview as significant. In responding to act-classifying risks, there are three strategies that people follow, both individually and collectively, once they have decided that there is a risk in this broad sense of an action that calls for special consideration. The first is to eliminate the perceived source of risk, simplifying one's life by saying "I don't even want to think about it. Just don't do it." The second is to settle the problem of accountability.
Who is going to be responsible in this particular situation? Am I responsible as the risk bearer? Are you responsible as the risk imposer? If that question is settled satisfactorily, that may be the end of the story. We may not have done any work to quantify, or even approximate or estimate, probabilities and consequences before we arrive at either of those two solutions. The third thing that we can do is to undertake a deliberation: to go to the trouble of explicitly articulating the dimensions of probability and harm (perhaps quantified, perhaps not) and going through the process of making a deliberate, conscious decision. This may be an individual working through a thought process or a group working through a social process. There is a sense in which, in a lot of the public debate, the risk assessment community, and justifiably so, is already well into the process of deliberation, while the public is still sorting things out and talking about this as risky in the sense that it is something that calls for a closer look and more care. And it is not clear that the public wants to resolve this problem by a deliberative strategy. They may be more receptive to resolving it by laying down strict criteria of accountability or by simply eliminating the option from consideration. What is the rationality that is implicit in this? Basically, it would be quite irrational to engage in deliberative optimization with regard to all the potential
choices that we face. If we did that, we would spend all our time calculating probabilities and benefits and making comparative decisions. One after another, there are hundreds of thousands of potential choices that we make every day, and it would be a tremendous waste of our cognitive resources to make deliberative decisions about all of them. It is clear that there have to be substitute rules that apportion deliberative resources and tell us when we are going to go through the explicit risk comparison. I am suggesting that although there is a clear sense in which deliberative optimizing gives us a very strong characterization of what would be rational behavior in a particular case, we need some type of heuristic operating in the background. This heuristic gives some sense of when it is the right time to get more information, when it is the right time to commission a detailed risk assessment or risk calculation. In looking at genetically engineered foods, I will assume that they score low on probability and harm. That has been the scientific consensus, at least, although that consensus goes back and forth over time. Nevertheless, compared with microbial hazards, genetic engineering is not a serious risk issue with respect to the probability of harm. Compared with the risks of global climate change, it is probably not even a serious environmental risk issue. Genetically modified food is not going to score very high on the two parameters of probability and degree of harm. However, if we ask questions such as "Is it an action that is being undertaken intentionally?" it scores very high. It is not only an intentional (or deliberate) action, but it is very clearly promoted by the people undertaking the action as something that is new.
The novelty of this activity is, in fact, a big element in the way it has been discussed. How does information on genetic engineering come to people? It often comes through channels that are perceived as strategic, meaning through advertising or through channels in which people with different points of view are debating one another over issues such as food safety policy or trade. Therefore, it is quite rational that genetic engineering would tend to filter into a relatively high-risk category with respect to both the act-classifying and the information-reliability dimensions. Many people who are concerned about genetically modified organisms see them as an easily eliminable source of risk; they do not recognize that there would be important costs associated with forgoing genetically engineered food altogether. Because of this, there has been a tendency to gravitate rather quickly toward the elimination strategy, at least in the minds of many people, and I do not believe that this is an irrational move for people to make. When the science and business communities strive to counter that move, they are perceived as engaging in strategic discourse. This cycle of factors tends to reinforce itself, leaving science institutions caught in a self-reinforcing cycle of increasing public skepticism about genetic engineering.
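The contrast between the two notions of risk running through this chapter can be put semi-formally. The following is a minimal sketch in notation of my own choosing, not the author's, assuming the standard expected-harm reading of the event-predicting sense:

```latex
% Event-predicting risk (the scientists' sense): a number attached to an
% action a, computed from outcome probabilities and harms,
%   R(a) = \sum_i p_i(a)\, h_i ,
% where p_i(a) is the probability that action a yields outcome i and h_i
% is the harm (negative value) of outcome i.
\[
  R(a) \;=\; \sum_{i} p_i(a)\, h_i
\]

% Act-classifying risk: not a number but a predicate over actions by
% intentional agents -- the cognitive filter deciding which acts merit
% deliberation at all.
\[
  \mathrm{Risky}(a) \in \{\text{true}, \text{false}\}
\]

% Deliberative optimizing is triggered only for acts that pass the filter:
% when Risky(a) holds, estimate R(a), weigh it against the benefit, and
% apply a decision rule; otherwise proceed by habit without computing R(a).
```

On this sketch, saying "there is no risk" in ordinary discourse reports that $\mathrm{Risky}(a)$ is false, not that $R(a) = 0$, which is the tension the chapter describes.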

REFERENCES

Habermas, J. 1990. Discourse ethics: Notes on a program of philosophical justification. In The Communicative Ethics Debate, S. Benhabib and F. Dallmayr, eds. Cambridge, MA: MIT Press.
Kahneman, D., P. Slovic, and A. Tversky, eds. 1982. Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
MacLean, D. 1990. Comparing values in environmental policies: Moral issues and moral arguments. Pp. 83–106 in Valuing Health Risks, Costs and Benefits for Environmental Decision Making, P.B. Hammond and R. Coppock, eds. Washington, D.C.: National Academy Press.
Sagoff, M. 1985. Risk Benefit Analysis in Decisions Concerning Public Safety and Health. Dubuque, IA: Kendall/Hunt.
Shrader-Frechette, K. 1991. Risk and Rationality. Berkeley, CA: University of California Press.
Slovic, P. 1987. Perception of risk. Science 236:280–285.
Thompson, P.B. 1987. Agricultural biotechnology and the rhetoric of risk: Some conceptual issues. The Environmental Professional 9:316–326.
Thompson, P.B. 1991. Risk: Ethical issues and values. Pp. 204–217 in Agricultural Biotechnology, Food Safety and Nutritional Quality for the Consumer, J.F. MacDonald, ed. NABC Report 2. Ithaca, NY: National Agricultural Biotechnology Council.
Thompson, P.B. 1995. Risk and responsibilities in modern agriculture. Pp. 31–45 in Issues in Agricultural Bioethics, T.B. Mepham, G.A. Tucker, and J. Wiseman, eds. Nottingham: Nottingham University Press.
Thompson, P.B. 1997a. Food Biotechnology in Ethical Perspective. London: Chapman and Hall.
Thompson, P.B. 1997b. Science policy and moral purity: The case of animal biotechnology. Agriculture and Human Values 14:11–27.
Thompson, P.B. 1999. The ethics of truth-telling and the problem of risk. Science and Engineering Ethics 5(4):489–511.
Thompson, P.B., and W.E. Dean. 1996. Competing conceptions of risk. Risk: Health, Safety and Environment 7(4):361–384.
Wachbroit, R. 1991. Describing risk. Pp. 368–377 in Risk Assessment in Genetic Engineering, M.A. Levin and H.S. Strauss, eds. New York: McGraw-Hill.
