At various points in the workshop, participants took time to step back and examine the larger picture for functional genomics. In particular, there were three separate “big-picture” discussions that took place: on education and training, on the definition and use of “model systems,” and on the social and ethical implications of functional genomics research.
As the capacity of functional genomics grows, it is likely that more and more people will look to use its tools to answer questions of interest and develop new capabilities and applications. This in turn will require an increased ability to educate and train people in this field.
While questions around this topic came up throughout the workshop, there was also a session devoted specifically to education and training. This session was moderated by Patricia Wittkopp of the University of Michigan. To begin, participants divided themselves into smaller groups to discuss three questions related to the future of functional genomics education. The three questions were:
- What are the training needs for future genotype-to-phenotype researchers?
- What subjects and topics are important to advance students toward research in functional genomics?
- What are the best strategies for attracting students to this research?
After about 30 minutes of group discussions, the entire group reconvened in plenary, and one participant from each group reported on the main points that the group members discussed. Following these reports, there was also a panel discussion. The panel included Terry Magnuson of the University of North Carolina at Chapel Hill, Arnaud Martin of The George Washington University, Lauren O’Connell of Stanford University, Grace Anderson of Octant, and Rebecca Walker of the University of North Carolina at Chapel Hill.
Education and Training Needs
Much of the conversation in the breakout groups and in the discussion that followed the group reports was focused on what students need to learn in order to pursue careers in functional genomics. For example, Trudy Mackay of Clemson University, who reported for one group, said that the participants had talked about the need for biology majors to be exposed to computer science as well as to other cross-disciplinary topics such as statistics, chemistry, and engineering. Mackay noted that her group found themselves “going down a rabbit hole” of naming topics that would be good for students to understand. In addition to those above, they talked about the basic disciplines of genetics, genomics, and molecular biology, and then the more population-level subjects such as quantitative, population, and evolutionary genetics “because that is at the heart of the genotype-to-phenotype map.” Furthermore, students need to learn how to think in evolutionary terms and be exposed to taxonomically diverse systems, all the way up to human biology.
The group also thought it was important to train students to understand the pressing questions in the field, but they were not certain how to accomplish this. “The best we could come up with is that maybe that’s up to the advisor,” Mackay said, “but I wonder if there are more formal ways of teaching this.”
In addition to the topics and classes that students should take, the second group also discussed the importance of creating environments where close collaboration can take place among students, such as between computational biologists and experimentalists.
The reporter for group three said that much of their discussion was focused on what the end goal of training should be. “Are we trying to find specialists or generalists? Should everyone be some sort of new hybrid genomic scientist who incorporates all of these different disciplines?” Some in the group were concerned that this could lead to a lack of depth in the various individual disciplines. They agreed that researchers who specialize in fields such as experimental biology and computer science are needed, but there still needs to be a “basic level of literacy across these disciplines that all of the trainees need to be getting … early.”
Marla Sokolowski of the University of Toronto, another group reporter, said her group also grappled with the interdisciplinary issue and the “push–pull” between giving students a broad integrative understanding and providing them with the deep knowledge needed to earn a Ph.D. It is crucial, she said, for students to learn the vocabularies and concepts of the various areas, and one way to do that is to have graduate students work in interdisciplinary groups where they discuss papers on diverse multi-disciplinary topics. Another approach is to offer short courses to introduce students to various topics. There are some areas that all students should be familiar with, Sokolowski said, including strong training in evolution, and being comfortable thinking about issues from an evolutionary perspective. They should all also have some basic familiarity with ethical issues.
These points led to a further discussion of how to break down the silos around scientific disciplines that exist in universities. One way, Sokolowski mentioned, would be through funding that encouraged interdisciplinary education, such as if graduate fellowships made their funding contingent on cross-disciplinary training.
Terry Magnuson described a slightly different approach toward the same end. “We have two curriculums among many: one is called computational biology and the other one is called genetics and molecular biology.” The former is focused on dry-lab work, while the latter involves a lot of “wet” bench work. Students are paired up, one from each curriculum, to work together on a project, so that they “cross-train” each other. Magnuson spoke to the success of the program in getting students to start working on tasks, such as pipetting or writing code, in which they are not experts.
Lauren O’Connell added that students should be trained in skills other than analyzing DNA sequences. That is not all that bioinformatics is, she noted, and “often image analysis and phenotyping are more difficult than running something through an RNA-seq pipeline.” Another participant added that students should also be exposed to regulatory issues.
Researchers are not trained to navigate regulatory hurdles, such as compliance with protocols for accessing and using samples. Another useful topic for students is safety, not just of the traditional type, but also biosecurity and other issues that could be involved with functional genomics.
Although it was not explicitly contained in any of the questions posed to the breakout groups, the topic of how to teach individual initiative and creativity to students received a great deal of attention. Wittkopp began
the discussion with a question: Granted that students must be taught such things as genetics and bioinformatics, how can they learn about scientific inquiry? For those who will be staying in academia, their primary job will be coming up with the next research question.
O’Connell suggested that one issue making it difficult for students to identify scientific gaps is simply that they cannot or do not read scientific papers. Teaching graduate students how to read scientific papers and getting them to read more broadly would be helpful, she said, although she acknowledged that she did not know how to do that.
Wittkopp responded that one tactic she uses is that when a new student comes into the lab, she “asks them to read about 20 to 25 papers and keep a journal.” She asks the students to record their thoughts—what the student was wondering about when reading an article. “Then we review that together, and it helps me see where their core interests are even if they can’t articulate them.”
“One of the things that often gets lost in undergraduate science education is creativity and trying to get people to understand that science as a practice really is creative,” said Jeff Dudycha of the University of South Carolina. He reported that he has instituted a number of practices in undergraduate and graduate courses that are intended to encourage creativity. For example, in undergraduate classes he has students write poetry to encourage them to think about creative ways to express scientific ideas. “For grad students, we have free association times sometimes in lab meetings where we try to figure out two different things that are going on in the lab or with another lab or what is our connection to another lab in our department that we may not have an obvious link to.” At the undergraduate level he is team-teaching a biology and music course with a composer. In answer to a question from Arnaud Martin about what the students do in the biology and music course, Dudycha said that the class is half biology students and half students in the composing program, and they form teams to build musical simulations of genetic processes. The point is not to make the music sound good, but to make it represent the biology.
Following up on the creativity question, Scott Edwards from Harvard University commented, “At some level you can’t really teach creativity, although I think you can expose students to how to identify the limits of knowledge and what’s an exciting question.” Conversely, creativity can be stifled in cases where an advisor is too heavy handed. There are cases in which a student has been told what to do for the entirety of their Ph.D. and cannot generate a scientific question of their own during their postdoc. Advisors should give more thought to what is in the best interests of their students and less to getting out another high-profile paper, he said.
Attracting Students to the Field
Complementary to the discussion of educational needs, a great deal was also said about how to attract more students to the field, as many participants felt that, with the growing power of functional genomics, there will be many more research opportunities, provided there are enough researchers to pursue them.
A theme that emerged from the breakout group reports and the ensuing discussions was the importance of reaching students early and exposing them to genomics—either somewhat early in the undergraduate years or, when possible, while they are still in high school. For example, Sokolowski’s group reported that they thought it was important for functional genomics to be introduced as a topic in basic biology courses.
Mackay, reporting for her group, suggested that capstone projects, summer internships, and exposure to the research environment are all ways to attract students to the field. She mentioned the importance of giving students hands-on functional genomics experience. “You really want students to be in the wet lab, extracting DNA, getting their results, analyzing them statistically, and being guided through that process.”
O’Connell’s group concluded that it is important to have some genomics tools, such as rapid sequencing tools, in undergraduate classrooms so that students can use them and get a sense of what genomics researchers do. “We also talked about integrating bioinformatics into courses on genetics and evolution and each of these core classes rather than having a bioinformatics class on its own,” she said.
Another breakout panel reporter, Arnaud Martin, spoke about his own experiences in providing undergraduates with practice in genome editing. “I gather 12 to 16 students in the classroom. We have microscopes, we have little micromanipulators. I have heard of very cheap microinjectors as well…. We acquire Cas9 for very cheap. And we ask the students to run experiments across the semester and actually do loss-of-function assays.” They do the work in frogs and butterflies and can observe the effects of removing a gene. “So, the undergrads get beautiful mutant butterflies, and they’re super excited,” he said. “I think it gets them hooked.”
At one point the discussion turned to attracting students from underrepresented groups into science and, particularly, into functional genomics. “There is an entire population of people out there who are at non-research institutions and underrepresented groups at minority colleges and so forth,” said one workshop participant. This person had a National Science Foundation (NSF)-funded program to bring students from tribal colleges in New Mexico into the lab and give them research experience.
O’Connell agreed with the idea that these programs are important, saying that she runs a program that brings in community college students.
“Most underrepresented groups start off their education at community colleges.” Thus, it is important, she said, for researchers and research universities “to also take students from outside your bubble.”
Having said that, O’Connell added that it can be difficult to get funding for such endeavors. She has been paying for it with her CAREER grant but is not sure what will happen when that grant ends. “It’s hard to get long-term funding for community-based projects,” she said.
Edwards said that he had been running a program to bring diverse undergraduate students to the evolution meetings every year and that he, too, had found it can be difficult to secure long-term funding. “My guess is that a number of program officers at NSF would be excited about it, especially if you can show you’ve had a track record with it.” These sorts of activities do not require huge budgets, he added, so that should make the opportunities more attractive for funding agencies.
Another overarching theme of the workshop was the use of model organisms. Related to this, a session on the afternoon of the first day was devoted to a discussion concerning the concept of “model” organisms. What is a model system in today’s biology? Does it even make sense to talk about model systems in a world where it is cheap and easy to sequence any species and analyze its genome? To start, Paul Katz of the University of Massachusetts Amherst offered an introduction to the topic. This was followed by a facilitated discussion among Katz, Lauren O’Connell of Stanford University, Zoe Donaldson of the University of Colorado Boulder, and Dominique Bergmann of Stanford University, with further contributions from audience members.
Katz began with two questions: “What do we mean by model organisms? And is it problematic to classify some organisms as models?” To offer some context, he said that when he examined PubMed for the use of terms such as “model organism,” “model animal,” and “model species,” he found that the use of these terms has gone up exponentially. In 1990, the term was still relatively rare, with only a few uses per year, but by 2000 there were nearly 700 examples of such terms used in papers listed in PubMed, and the number had climbed to nearly 1,400 by 2016.
While the term has become increasingly popular, he wanted to know exactly how people have been using it. To answer this question Katz began by examining how various federal science agencies define “model organism.” The National Institutes of Health (NIH) website says, “The term ‘model organism’ includes mammalian models, such as the mouse and rat, and non-mammalian models, such as budding yeast, social amoeba, roundworm, Arabidopsis, fruit fly, zebrafish, and frog.” Katz noted that
the website did not really define the term and that the examples it offered differed in “level of granularity,” from a genus name to a “round worm” and the “frog.” “It’s really not clear what they mean by model organism,” he concluded.
Moving to NSF, he noted that different divisions of NSF have contradictory definitions. Some define it as a “traditional laboratory model species,” whereas others say that they want to identify new model organisms as they come into use, indicating that they view model organisms not as “traditional laboratory model species” but rather as any organisms used to model specific biological processes, even if that use is new.
Switching to a more basic question, Katz asked, what is a model? There are various definitions, he noted, but science has a specific meaning for the term. “The science definition is that a model is a systematic description of an object or phenomenon that shares important characteristics with the object or phenomenon,” he said. In other words, he explained, a model is not the object or phenomenon itself, but rather a representation that shares characteristics with the object or phenomenon being studied. Scientific models can be material, visual, mathematical, or computational, Katz said, but they all share one key feature: they explain how something works. As an example, he pointed to the physical model of a DNA molecule that James Watson and Francis Crick built. Not only did the model show what the molecule looked like, but it also made clear how a DNA molecule could be copied to create additional identical molecules.
So, he asked, are model organisms actually scientific models? Do they explain how something works? “In and of themselves,” he said, “I don’t think so.” But because the term has become so common, people tend to conflate model organisms with models, and “this has had really important repercussions.”
As an example, Katz pointed to a paper published in the Proceedings of the National Academy of Sciences in 2013 (Seok et al., 2013) showing that mouse models of inflammation had almost nothing in common with inflammation in humans. In The New York Times article describing the research (Kolata, 2013), one of the study’s lead authors, Ronald Davis of Stanford University, explained that they began studying the mouse model of inflammation when they submitted findings from a 10-year study on inflammatory responses in humans and the findings were rejected because they had not validated them in a mouse model. The underlying issue, Katz explained, was that many scientists assumed that inflammation worked the same in humans as in mice because the mouse, being a “model organism,” was thought of as a model for how inflammation worked in humans. Yet, when the researchers explored inflammation in mice, they found almost no overlap between how it worked in mice and how it worked in humans.
The idea of using animals as models, Katz said, can be traced back to some extent to August Krogh, a Danish physiologist. In a 1929 article he wrote, “For a large number of problems there will be some animal or a choice of a few such animals, on which it can be most conveniently studied” (Krogh, 1929). This came to be known as “Krogh’s principle.”
Another issue with models, Katz continued, is that there is a bias in favor of homology instead of convergence for determining general principles. This explains, he said, why NIH prefers mammalian models over invertebrates and other species for studying issues that are important in humans.
“Similarity due to homology at one level of organization,” he continued, “does not guarantee similarity of mechanism.” To illustrate, he described findings from nudibranchs, organisms that he works with. Two different nudibranch species, Dendronotus iris and Melibe leonina, both swim by flexing from side to side, movements that Katz showed in a video. This behavior is homologous, he said, because all of the species in the clade to which these two species belong swim this way. Furthermore, the two species also have homologous neurons. “We can go from species to species of nudibranchs and find the same individual neurons based on neuroanatomy and neurochemistry,” Katz said. However, closer inspection shows that the neurons are connected differently in the different species, and the neurological mechanisms underlying the swimming are fundamentally different (Sakurai and Katz, 2017).
Finally, Katz offered several concluding points for “seeding a conversation” with the panel members. First, similarity due to homology does not guarantee similarity of mechanism. Second, general principles can be found only by comparing examples of evolutionary convergence.
Finally, he said, it is important that people not allow language to corrupt their thinking. In particular, overvaluing “model organisms” can lead to an undervaluing of comparative science, and models are often considered the norms, with other species being thought of as the exceptions. There are several examples where this has happened, he said. “Certainly it happened in Drosophila regarding the period genes where they were thought to be the norm, and other insects were weirdos. It turns out Drosophila was the weirdo.”
To begin the discussion, panel member Lauren O’Connell of Stanford University commented that one of the reasons the planning committee chose to include a panel on model organisms was because “we were kind of arguing about the language regarding a model system versus non-model systems and whether that was a useful framework in the first place.” At some level, she said, all of the organisms that people study are modeling something fundamental about biology, or else they would not be the subject of study, but the use of the term “model organism” can have harmful effects on the way researchers approach scientific questions and the way that reviewers see their grants. “So,” she said, “I think language has corrupted our thought a little bit in this process.” But, she added that she is heartened by a shift that she is seeing in how people are talking about “model organisms.” For instance, the National Institute of General Medical Sciences now uses the phrase “research organism” instead of “model system” or “model organism,” which O’Connell called “a step in the right direction.”
Zoe Donaldson of the University of Colorado Boulder suggested that instead of talking about model organisms, researchers should speak of a research species as a model for studying something, such as a model for studying social behavior. Katz responded that it makes more sense to simply speak of studying social behavior rather than studying a model of social behavior. “When you say you’re studying a model of something,” he said, “you’re not studying the thing itself. You’re studying a representation of that thing, and I think it denigrates our own work to call what we’re doing models of something else.” Furthermore, he said, using the language in this way gets away from the idea that the behavior in the animal is being studied as a model of human behavior, which is the implication when one speaks of studying “models.”
Donaldson agreed. “If you’re studying an organism for something that the organism naturally does, you are studying what the organism does and not a model of that behavior.”
Dominique Bergmann of Stanford University said that she appreciated Katz’s comments about homology versus convergence because it emphasizes how studying plants can provide a powerful tool for finding general principles. For example, if one finds “a set of transcription factors that work in a certain way with certain partners that is absolutely the same” in a plant species and an animal species, that can only be explained by convergence and explains something about the developmental events that drive this convergence.
One speaker noted that discussions around work using non-traditional model organisms, such as Ciona intestinalis, end up devolving into conversations about whether the organism is a “model” or not, and are not necessarily about the science being represented. In adopting the term “research organism,” she noted that “it’s a question of how you differentiate between the level of tool development or the amount of knowledge in each of these different research organisms.”
Bergmann responded that it is valuable to talk about different organisms and the tools and investment needed for each. In particular, she said, it is important to keep in mind that there are some pairs of species in which it is useful to study their differences because they are closely related and
other pairs in which it is useful to study their differences because they are far apart.
Responding to Bergmann, the audience member said that it is natural for a researcher who has a question to want to pick the best system to address that question, but many times the research is limited by the available tools. “If you don’t have the tools to work on your question in that system, you have to go to a system where those tools exist, even if it’s suboptimal.” Thus, a lot of the discussion about models versus non-models might go away if more generalized tools can be developed.
Katz agreed, and explained that it would democratize the biological world if researchers were able to use many different tools on a wide variety of species. It is important to avoid studying things simply because they can be studied. “If we want to understand the generalities of biology,” he said, “we want to spread our net diversely.” One must acknowledge that it has been valuable to focus on a small number of species, he added. Classically science has advanced by first going deep on a small number of things and then broadening the focus—and now is the point in genomics when researchers should be looking more broadly, he said.
An audience member commented that one problem, at least in the mouse world, is that researchers have been seduced by their models. “We forget that they’re representations, and we forget that they’re intended for a specific purpose, to help us learn something about X, and that they may not generalize to everything.” Still, it is important to keep in mind that the models are seductive because they do offer many insights into humans.
“What worries me at the end of the day,” Donaldson said, “is that what I’m studying in my one species is not generalizable.” What she has found in looking at the convergent evolution of specific traits is that while a behavior or phenotype may be similar across species, the underlying processes or mechanisms are different.
Continuing that line of thought, Bergmann said that assuming research is funded to improve lives, one argument for studying diverse organisms is because it may reveal multiple solutions to a problem. That led to a brief discussion among multiple participants about balancing research on single organisms versus research across broad systems in such a way that the research across systems can be used to determine if principles discovered in a single organism represent general patterns or are specific only to that organism.
As the discussions outlined in Chapter 8 about consortia and large databases make clear, functional genomics research can be expected to
grow vastly in scope and power in coming years. This makes it particularly important that much thought be given in advance to the sorts of societal and ethical issues raised by this research. Thus, on the afternoon of the first day of the workshop, a panel discussion moderated by Zoe Donaldson of the University of Colorado Boulder was devoted to the societal and ethical implications of functional genomics research.
The three panelists—Rebecca Walker of the University of North Carolina at Chapel Hill, Scott Jackson of Bayer Crop Science, and Ronald Sandler of Northeastern University—each offered some initial comments on the topic, followed by a discussion with the audience.
Rebecca Walker spoke first and began by discussing what she referred to as the “ethical dimensions” of research organism choice (see Figure 9-1). Displaying a series of research organisms—C. elegans, zebrafish, mouse, dog, chimpanzee—as a linear array, she suggested that there are a number of dimensions along which research organisms can be organized.
One such dimension reflects how research on the different organisms is regulated. There are some for which there are few or no regulations, such as Caenorhabditis elegans, and others, such as chimpanzees, on which NIH no longer funds research.
Selecting an organism for research, she said, involves various scientific and pragmatic choices, but also ethical choices that require certain considerations. One such consideration, Walker said, is the capacity for pleasure and pain—which creatures have it, which creatures do not, and what sort of capacity they have. Cognitive capacity is another consideration, she said. How much cognitive capacity does a creature have, and how should that be taken into account? Yet another factor is sociability with its own kind and with other organisms.
How the animals are studied involves another set of ethical considerations, she said. Are they studied in their own environment with noninvasive methods? Are they kept in a laboratory? If so, what things are put into place to make sure that they are capable of flourishing in the environment provided?
Finally, genome editing raises another whole set of issues. As researchers develop an increasingly better understanding of functional genomics and better editing tools, there will be an increasing capability for carrying out genome editing. What sorts of ethical limits should there be on this editing, and what sorts of ethical requirements should there be? One consideration, Walker suggested, should be the purpose of the editing. Editing aimed at preventing disease, for instance, should perhaps be treated differently than editing for enhancement.
Scott Jackson spoke about the use of functional genomics tools in agriculture. Biotechnology is just one of many tools used to improve crops, he said, with the others including such things as breeding, mutagenesis,
polyploidy, and interspecies crossing. When the implications for the plants are taken into account, he said, functional genomics is “much less scary than what we’ve been doing for a thousand years.” For example, mutagenesis “really screws up a genome much more than any of the biotech approaches that we talk about.” Thus, he said, the ethical implications of using functional genomics tools may be less worrisome than for some types of breeding that are an accepted part of agriculture.
Furthermore, he said, the biotechnology tools used or under consideration for use in agriculture are built on things that occur in nature. The gene transfer technique used in agriculture for decades relies on a bacterium that causes tumors in plants. CRISPR/Cas comes from an adaptive immune system in bacteria, including bacteria that have been used for decades to make dairy products.
Jackson noted that gene editing is the most precise breeding method yet, making it possible to edit single genes versus changing hundreds or thousands of genes, as is done in traditional approaches.
“From an ethical perspective,” Jackson said, “over the past 30, 40 years, cost has been a major hindrance for [the] public sector and for small companies to get involved in biotechnology.” Engineering a trait and taking that trait all the way to the field, including getting through the entire regulatory process, can cost upward of $130 million. Most universities cannot afford to do that, nor can small companies or nonprofits, which means that only a few very large multi-nationals are able to create new crops using biotechnology. It may turn out to cost less with gene editing techniques. The regulatory system in the United States and Canada will also be much easier to navigate than the one in Europe.
The final short presentation was by Ronald Sandler, a professor of philosophy and the director of the Ethics Institute at Northeastern University, who discussed the ethics of biotechnology in conservation. There is already a robust discussion going on among ethicists about the ethics of using biotechnology in a conservation context, he said, and his goal was to offer a quick overview of that discourse.
He began by listing some of the prominent cases being discussed. There is, for example, talk of using synthetic biology and conservation cloning to increase the genetic diversity of populations that have been through a genetic bottleneck, such as the black-footed ferret. Some have suggested using gene drives to suppress or eliminate populations of invasive species, such as rodents on islands. There has also been discussion about using genetic modification to help certain plant species, such as modifying the American chestnut to make it resistant to chestnut blight.
Much of the discussion about the ethics of these technologies concerns how to use them responsibly in a risk–benefit sense, Sandler said, and the conversation involves such factors as risk assessment and analysis, risk management, cost–benefit analysis, and opportunity cost. In other words, he said, the ethics discussion sees these new technologies as tools, and “then the question becomes how you use these in a way to get the benefits and to avoid the risk and unintended consequences.”
However, he continued, ethics is much richer than that. There are many other ethical issues associated with biotechnology in a conservation context. The goal of the standard conservation approach, he said, is to eliminate anthropogenic impacts wherever possible, and the strategies for this typically have to do with limiting human activities in and around a space. Examples include carrying out captive breeding programs and performing ecological restorations.
However, that standard paradigm is increasingly insufficient, he said, in large part because of climate change. Macro-scale, high-rate, high-magnitude ecological change is putting a great deal of stress on the standard conservation paradigm in a couple of ways. First, he said, when the reasons that species are at risk are grounded in climatic processes, local strategies do not work. Second, in more and more cases, it is not possible to undo the human impacts. “So, one of the reasons people are attracted to biotechnologies in conservation is because they offer new strategies for these really hard problems.”
The standard conservation model, he explained, is all about limiting human activities or undoing human impact. “But when you start to talk about using genetic tools…. It’s no longer thinking about how we have to change ourselves in order to accommodate other populations; it’s more about thinking how we change these populations so they’re better suited to our world.” That is a radical change in conservation philosophy. Furthermore, the values underlying the standard conservation paradigm do not support—and, indeed, are in tension with—biotechnological interventions, and so it becomes necessary to make trade-offs.
The bottom line is that not only are the scientific issues involved with functional genomics complex, but so are the ethical issues. The risk–benefit ethical considerations are still there, and they are still extremely important in oversight and public engagement, Sandler said, but when functional genomics interacts with conservation, many other values come into play as well, and the way they intersect is complex. “The ethical issues become not just can you do this safely,” he said, “but does this preserve other sorts of values that we care about?” And there are other questions, such as what should be the goals of conservation, and how do we see our relation with the natural world if we are starting to design it and modify it in a more intensive way?
Following those short presentations, Donaldson opened the session up to discussion among the panelists and audience members. To get the conversation started, she asked a question of the panel: “What are some of the best strategies to avoid the hubris of previous generations? So many invasive species exist because they were seen as a solution to a problem.”
Sandler offered his thoughts about one aspect of that problem—trying to restore ecosystems to their previous state after they have been affected by the introduction of invasive species or other sorts of damage. For a decade or so, restorationists have realized that “historical reference conditions aren’t going to be as good a proxy for future ecological integrity as they have been in the past.” As a result, they are faced with the question, “To what extent can we continue to call something restoration as it becomes more forward-looking, when we’re not using those reference conditions?” One of the reasons that using historical reference conditions was attractive was that those conditions served as a check on hubris because people were not trying to design the systems but only return them to a previous state. In a world where the future climate will be different from that of the past, however, “the question becomes, what are you designing things for? And it becomes more of an open question that I don’t have an answer for.”
In response, Walker suggested that it is important to foster intellectual humility in such a way that people do not believe that they know more than they actually do about the effects certain actions might have. Sandler commented that people in the ethics field do not think in terms of yes or no, do or do not. Instead, when thinking about a technology, ethicists will ask how to maximize the social and ecological benefits of the technology while avoiding negative consequences. Their goal is to help inform the design and implementation of the technology in ways that maximize the good.
In response to an audience question about genetically modifying animals, Walker said that people need to think carefully about whether they are modifying the animals in ways that would undermine or increase their welfare.
Chris Peterson from the U.S. Department of Agriculture noted that the public is often leery of, if not completely opposed to, genetically modified organisms despite various efforts to help gain their acceptance. “What needs to change in our messaging so that we can make some progress in communicating what these technologies have to offer?”
Sandler answered that from the point of view of consumers, there seems to be no benefit to a genetically modified organism such as Bt corn—it still looks the same and tastes the same and has the same nutritional content—but now there is something new in it that they do not fully understand. Perhaps it is good for the people producing and selling the product, but consumers may not see the benefit to them, so Sandler thinks it is a deeper problem than just finding the right public education campaign. Furthermore, he said, there is plenty of literature showing that people’s resistance to genetically modified foods is not so much an information deficit issue as it is a trust issue.
In response to a participant’s comment that research animals seem to receive much more ethical consideration than farm animals, Walker said that it seems to be a form of “research exceptionalism.” When humans take part in research, for instance, a high level of informed consent is required even for actions that other people may just choose to do. Something similar may be taking place in regard to animals.
Steven Moss from the National Academies of Sciences, Engineering, and Medicine asked about the ethics of editing species in nature. He mentioned specifically a study on corals that brought up the possibility of using genomic editing tools in corals as well as the possibility of using genomic tools to deal with invasive species. At what point could these things be reasonably considered?
Walker commented that there is a major distinction between genetically modified organisms that are isolated from the environment and those that are put into the environment. So, in the cases where the genetically modified organisms will be released into the environment, it will be important to be much more careful in thinking about potential consequences.
Sandler agreed. “I would say the earlier the better” because there are so many considerations to take into account. “We want to run these value analyses to understand all the different ways in which proposed interventions could intersect with cultural values or ethnic values or symbolic values…. I’m thinking particularly about how the non-human world is valued by different groups of people that you might not think of initially as being relevant.” Furthermore, it is important to be aware of the danger of hubris in making these decisions. In particular, he addressed the tendency of people to overestimate their ability to predict and control complex biological and ecological systems. There is always going to be an element of uncertainty, so at what point does a willingness to accept a certain amount of uncertainty become hubris? The question of how much confidence to have versus how much precaution to take is not just for researchers to answer but also for people who might be affected by the answer.
Nathan Springer of the University of Minnesota pointed out that there is a massive policy lag in regulating many of these technologies—that the technologies offer various capabilities that the regulations were not designed to take into account. Today’s policies relevant to functional genomics, for instance, almost all date to before the development of CRISPR, which has been a game changer. Walker added that the problem has been made worse by the fact that ethical policies concerning genome editing are almost entirely focused on humans, even though that is not where most of the work in the genomics area is being done. There are about 63 different policy proposals floating around related to genome editing in humans, she said, while the applications receiving far less policy attention are where much of the research is concentrated.
Sandler responded that fast-moving technologies will always get out in front of policies and regulations. This is why ethics is an important topic for researchers to take seriously. “The research community and other folks who could potentially implement this stuff are going to be facing these hard questions before they have good guidance from regulatory bodies.” If those researchers on the technological frontier think carefully through the issues, their practices can serve as the bases for the regulations or policies that come along afterward.
Jackson commented that this is what happened with genetically modified organisms in the 1980s and 1990s—that many of the policies eventually put into effect by regulatory agencies were first developed by the companies working on these organisms. Donaldson pointed out that something similar happened with recombinant DNA, as the scientists who were developing that technology spent a great deal of time thinking about the ethics before the use of the technology became widespread. Similarly, Sandler added, researchers have been proactive in the areas of cloning and various kinds of human modification, considering the ethics before the capabilities have been fully developed.