RONALD R. KLINE
School of Electrical and Computer Engineering, and Science and Technology Studies Department, Cornell University
In this essay I draw on the history of engineering and research ethics, specifically the way priorities in these disciplines were established in the United States, to discuss how we should teach social responsibility in research ethics. Following Deborah Johnson (1992, p. 21), I use the term “social responsibility” in the sense of having a moral obligation “to protect the safety and welfare of society.”1 I focus on one obstacle in teaching this aspect of research ethics: the long-standing belief that social responsibility is not the primary concern of scientists because they produce basic knowledge rather than technology. In this view, scientific knowledge is seen as neutral, neither good nor bad, and those who apply this knowledge, mainly engineers, should bear the primary social responsibility for its use (see, e.g., the 1999 newspaper statement by physics Nobel laureate Leon Lederman cited in McGinn 2010, p. 8; Schuurbiers 2011, p. 770).2
This long-held belief in the neutrality of scientific knowledge and the ideal of pure science, which amounts to a social agnosticism of science, has been roundly criticized by historians, sociologists, and philosophers of science and technology. But my experience teaching research ethics at Cornell has shown me how persistent this belief still is. It comes up regularly in the classroom and in discussions with science faculty. Unfortunately, this impediment to teaching social responsibility is reinforced by the literature on how to teach research and engineering ethics. Engineering ethics prioritizes the public’s health, safety, and welfare, while research ethics prioritizes the ethical conduct of research. The literature in these fields sends the message that social responsibility—the duty to protect the public—is not the main concern of scientists.
Conflicting Priorities in Science and Engineering Ethics
This inversion of priorities has been evident in research and engineering ethics since the professionalization of these fields in the 1970s and 1980s. Codes of ethics, textbooks, and the National Academies booklet On Being a Scientist—all show this striking distinction.3 The fields
1 Johnson argues that engineers have an individual role responsibility, rather than a responsibility emerging from a social contract between the engineering profession and the public. The social responsibility of researchers would seem to come under the category of “responsibility-as-accountability,” identified by Michael Davis (2012, p. 15).
2 Schuurbiers (2011, p. 770) comments on the “‘neutrality view’ of social responsibility” and cites previous authors who argue that the “social responsibility of researchers should include critical reflection of the socio-ethical context of their work.”
3 Although the journal Science and Engineering Ethics, established in 1995, publishes articles that focus on the traditional priorities in the field of engineering ethics and the traditional priorities in the field of research ethics,
list similar ethical issues, but invert their priorities (see Box 1). The order of priority varies somewhat in the various codes of ethics issued by the professional engineering societies, e.g., between the lean code of the Institute of Electrical and Electronics Engineers (IEEE) and the expansive code of the National Society of Professional Engineers (NSPE), which is regularly enforced. There are also some differences in priorities between the latest (2009) editions of On Being a Scientist and The Responsible Conduct of Research (by Adil Shamoo and David Resnik). But the basic distinction on how social responsibility is valued in these fields holds and has held since I published an article on this subject in 2005 (Kline 2005). The public’s health, safety, and welfare are the first priority in engineering ethics, but the lowest priority in research ethics.
Box 1. Priorities in Research and Engineering Ethics
Main Issues in Research Ethics
Integrity of research
Credit and authorship
Conflicts of interest
Welfare of subjects, experimenters, and environment
Social implications of research
Main Issues in Engineering Ethics
Public’s health, safety, and welfare, including the environment
Being a faithful agent of the employer
Conflicts of interest
Credit (e.g., intellectual property provisions)
Integrity of reports
Sources: Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, Institute of Medicine (2009); Shamoo and Resnik (2009); Martin and Schinzinger (2005); and Herkert (2000).
How do we account for this inverted priority and why does it matter for teaching the responsible conduct of research? I maintain that the inverted priority is a cultural obstacle to teaching social responsibility in research ethics and that understanding its history—how it came about—shows why this cultural belief has such a strong hold on ethicists, students, and researchers alike. Understanding this history helps us to identify ways to improve our methods of teaching social responsibility in research ethics.
Creating Priorities in Engineering and Research Ethics
I have argued elsewhere that the reason for the inverted priorities in engineering and research ethics is best understood by considering the responses of professional societies to accidents in
thereby reinforcing these disciplinary boundaries, it also publishes a large number of articles that cover the social responsibility of research as I have defined it here, as evidenced, for example, by the article by Schuurbiers (2011) which I discuss at the end of this essay.
engineering and scandals in science.4 These responses were supported by the centuries-old belief that science values fundamental knowledge in order to understand nature, while engineering values the design of artifacts in order to improve the lives of people. This belief ignores the long history of the hybridity of science and technology—the difficulty of drawing sharp boundaries between science and engineering in the past and in the present.
The Catalysts: Catastrophic Accidents and High-Profile Fraud
I’ll focus here on the responses of professional societies to accidents and scandals. In the 1970s, charges of research misconduct and dangerous technology grew into public scandals about “fraud” in science and amoral calculation in engineering. Accounts of scientific scandals and engineering disasters filled newspapers, calling forth responses from the scientific and engineering communities and from social scientists and philosophers. This outcry helped create the fields of research ethics and engineering ethics, as well as programs in science, technology, and society (Mitcham 2003a,b).
Engineering ethics was transformed by a litany of engineering disasters in the 1970s and 1980s. A short list would include the unsafe design of the Bay Area Rapid Transit (BART) system; the amoral cost-benefit analysis in designing the gas tank of the Ford Pinto; the crash of a DC-10 airliner due to a cargo door opening after takeoff and the crash of a second DC-10 due to an engine falling off during takeoff; the partial meltdown of a nuclear power plant at Three Mile Island; the collapse of a fourth-floor walkway in the atrium of the Hyatt Regency Hotel in Kansas City; the Union Carbide disaster in Bhopal, India; and the space shuttle Challenger accident. These disasters form the corpus of the standard historical cases taught in engineering ethics. The earliest cases, the Pinto gas tank and BART in the early 1970s, spurred changes in the codes of ethics of engineering professional societies. The Engineers’ Council for Professional Development, an umbrella group, rewrote its code in 1974 to state that the engineer “shall hold paramount the safety, health, and welfare of the public.” Other engineering societies followed suit. The revision aimed to assure the public that engineers, if not their managers, were socially responsible. The IEEE also wrote a new code of ethics in 1974, based on its involvement with the BART case (Kline 2001/2002, pp. 15–16; Davis 2001).5
In science, scandalous cases of fraud helped define the field of research ethics. Perhaps the book that did the most to publicize this issue was Betrayers of the Truth, published by science journalists Bill Broad and Nicholas Wade in 1982. In the previous year, Al Gore, then a young congressman from Tennessee, held congressional hearings on Fraud in Biomedical Research; as chair of the Investigations and Oversight Subcommittee of the House Committee on Science and Technology, he drew on cases reported by Broad and Wade in the journal Science. Other congressional bodies followed suit in the 1980s. In 1988, Representative John Dingell, chair of the House Committee on Energy and Commerce, held hearings on Fraud in NIH Grant Programs (Broad and Wade 1982, chap. 1; Gold 1992–1993, vol. 2, chap. 6; Kevles 1998, pp. 101–108).
The scientific community responded to the publicity surrounding these cases by conducting investigations, issuing reports, and publishing educational materials. The first edition of On Being a Scientist appeared in 1988. In 1992, the National Research Council defined misconduct as “fabrication, falsification, and plagiarism in proposing, conducting, and reporting research.” The National Academies established a Panel on Scientific Responsibility and the Conduct of
4 This section is based on Kline (2005, pp. xxxvi–xxxviii), which gives a more detailed history.
5 Davis argues that the original codes, dating back to the late 19th century, stressed social responsibility.
Research, which issued a two-volume report, Responsible Science: Ensuring the Integrity of the Research Process, in 1992–1993 (Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, Institute of Medicine 1992–1993, vol. 1, p. 5; 2009, p. 15; Gold 1992–1993; Mitcham 2003a, p. 277; Whitbeck 1995, p. 201). The cold fusion controversy in 1989, the David Baltimore case in biomedicine in 1991, and the fabrication of research results on organic semiconductors by physicist Jan Hendrik Schön at Bell Labs in the late 1990s kept the topic of fraud in the news and before the scientific community (Lewenstein 1992; Kevles 1998; Levi 2002).
Ethical Conduct of Research versus Social Responsibility
The third and most recent edition of On Being a Scientist (2009, p. 3; emphasis in the original) carries on the tradition of prioritizing the ethical conduct of research over its social consequences. The booklet’s introduction cites the duty “to act in ways that serve the public” as one of three main obligations of the scientist—the first two are obligations to other researchers and to oneself.6 It also says that science directly affects the health and well-being of individuals and is used to make policy on social issues such as climate change and stem cell research. But the duties to society as a whole merit only one of the 12 topical sections, “The Researcher in Society” (see Box 2); the first page of this short section describes the duties of the researcher to the public, the second presents a historical case (Committee on Science, Engineering, and Public Policy 2009, pp. 29–43, 48–49). That case, however, is not about protecting the public’s health and safety, which is the first priority in codes of engineering ethics. Rather, the case is about the researcher’s duties in playing a public role as a scientist.
Box 2. Topics Covered in On Being a Scientist (2009)
Advising and Mentoring
The Treatment of Data
Mistakes and Negligence
Research Misconduct
Responses to Suspected Violations of Professional Standards
Human Participants and Animal Subjects in Research
Laboratory Safety in Research
Sharing of Research Results
Authorship and the Allocation of Credit
Intellectual Property
Competing Interests, Commitments, and Values
The Researcher in Society
Source: Committee on Science, Engineering, and Public Policy (2009), pp. xvii–xviii.
6 One could argue that the duty to protect human subjects of research is a social concern and thus a social responsibility.
In the historical case, Arthur Galston, a graduate student in the early 1940s, found that a synthetic chemical enabled crops to grow in colder climates. After the war, he learned that military researchers had turned his work into the defoliant Agent Orange, which was sprayed on forests during the Vietnam War. At a meeting of the American Society of Plant Physiologists in 1966, Galston testified about the long-term toxic effects of Agent Orange. He sent a copy of his report to President Lyndon Johnson and later met with President Richard Nixon’s science advisor, Edward E. David Jr., who recommended in 1970 that the spraying of Agent Orange be stopped. The case concludes by quoting Galston that he used to think a scientist could simply refuse to work on a project that had risky health effects, but that it wasn’t that simple. “The only recourse,” he concluded, “is for a scientist to remain involved with it [the project] to the end” (Committee on Science, Engineering, and Public Policy 2009, p. 49).
It’s a good case study, but the general discussion accompanying it gives a mixed message. On Being a Scientist states that researchers “have a professional obligation to perform research and present the results of that research as objectively and as accurately as possible” (Committee on Science, Engineering, and Public Policy 2009, p. 48).7 Yet when they become public advocates of science, their colleagues and the public may view them as “biased.” Nevertheless, they have the “right to express their convictions and work for social change, and these activities need not undercut a rigorous commitment to objectivity in research.” This section tends to reinforce the sharp boundary drawn between science and politics, which historians and sociologists of science have questioned for some time. The implication is that objective research will always lead to good results, whereas advocacy is always suspect and has to be justified.
The section ends by saying that the
values on which science is based—including honesty, fairness, collegiality, and openness— serve as guides to action in everyday life as well as in research. These values have helped produce a scientific enterprise of unparalleled usefulness, productivity, and creativity. So long as these values are honored, science—and the society it serves—will prosper. (Committee on Science, Engineering, and Public Policy 2009, p. 48)
The sentiment is an honorable one, but it does not consider whether the researcher has an obligation to avoid harming the public. It assumes deterministically that these good values of science lead to good technology which leads to good social results. It ignores the historical evidence that many research projects have not led to good social results, including the case of Agent Orange that illustrates this section of the booklet. Because this is a historical case, rather than a hypothetical one, there are no questions asking readers about the ethics of Galston’s actions throughout this episode, alternative paths he might have taken, or the responsibility of other researchers in this case. Readers miss the opportunity to consider what the social responsibilities of scientists are while they are conducting research. Monitoring is a worthy value, but so is reflection on possible consequences while conducting research—and the latter is more expected of engineers than of scientists.8
7 On the difficulty of separating politics from science, see Hackett et al. (2008), Part III.
8 The view that the main social responsibility of scientists pertains to their public role in giving expert testimony, making statements to the public, and advocacy has most likely been shaped by post–World War II cases of the scientist in the public eye, beginning with the movement of the atomic scientists.
Suggestions for Practical Guidance
This brief analysis of the history and present state of priorities in research and engineering ethics, and my experience teaching these subjects at Cornell, prompt me to suggest some practical guidance about how to improve our teaching of social responsibility in research.
First, I propose addressing head-on the inversion of priority toward social responsibility in research and engineering ethics. When I taught an NSF-required summer session on research ethics for undergraduates doing research in nanoscience and technology at Cornell, I observed that students respond well to this approach. Their puzzlement that engineers are expected to privilege public health and safety while scientists are not leads them to reflect on their own expectations of research and their status in the university laboratories. They see the hybridity of science and engineering in the labs because nano research is conducted by both scientists and engineers, often in the same building. Being young, they are also idealistic and very interested in the social implications of their research. I try to balance the attention I give to the conduct of research and its possible consequences. The main way I’ve done this is to introduce the inversion of priorities about social responsibility, discuss a case or two along the lines of those on research conduct in On Being a Scientist—though I tend to use more detailed cases—and discuss in depth a current social concern about nanoscience and technology, such as the toxicity of nano products or the creation of surveillance bots.
My second suggestion is to bring literature from science and technology studies—the history, sociology, and philosophy of science and technology—to bear on research ethics. One value of that approach is to question the sharp boundary drawn between science and engineering in the cases we use (see Kline 2001/2002).
Third, I strongly recommend expanding the material on social responsibility in the booklet On Being a Scientist and giving it a higher priority in the next revision. Including a statement on the obligation to protect the public’s safety and welfare should be considered, as well as explicit statements about the duties of social responsibility during the conduct of research. I suggest adding a hypothetical case study on this aspect of social responsibility to balance the current historical case on the public role of the scientist. One might go further and devote two sections to social responsibility: one on the conduct of research and one on interactions with the public. That would send the message that social responsibility involves attending to large-scale social consequences as much as it does the obligation to care for the welfare of both experimenters and human subjects of research.
Resources and Examples
While writing this paper, I looked for an existing case that would exemplify what I had in mind about teaching the broad social responsibility of conducting research. In the six volumes of cases on graduate research ethics, edited by Brian Schrag at the Association for Practical and Professional Ethics and published from 1997 to 2002, I found one case on obligations to the public (Schrag 2001). In this case, Tom, a postdoc, is conducting research on the pH levels of a region’s lakes, and thinks that the high acidity levels, which are killing off the fish, are probably caused by emissions from nearby electrical power plants. A second, five-year research project is planned to determine the cause of the acid rain. Tom meets with Susan, a member of a local environmental group, who asks him to downplay the uncertainties in his research and state that he believes the power plants are causing the acid rain. Tom consults Richard, a senior research
scientist on the project, who also believes the power plants are probably the cause of the acid rain. But Richard cautions Tom against getting involved with advocacy of this sort because he could tarnish his scientific reputation by seeming to be nonobjective and biased since the second five-year study of the cause of the acid rain has not been conducted. The case asks, What should Tom do in this situation, in which his moral obligations to the norms of science and to saving the lakes are in conflict?
This hypothetical case provides a good alternative to the historical case in the Researcher in Society section in On Being a Scientist. But it still addresses only the researcher’s public role. What we need are new cases that address the social implications of ongoing research, such as that in nanoscience.
In fact, I think current research in nanoethics may be an important avenue in which to explore how to teach social responsibility in the conduct of research. This well-funded area is a vibrant one. The NSF has established a National Nanotechnology Infrastructure Network, which includes social and ethical implications in its research agenda. And in 2005 the NSF funded two Centers for Nanotechnology in Society (at Arizona State University and the University of California at Santa Barbara), whose charge also includes ethical issues. These centers have explored many ways of educating researchers and the public about ethical issues of nanoscience and technology. Nanoethics, a new scholarly journal, was launched in 2007.
I’ll comment on two recent projects that integrate social responsibility in research ethics.9 In 2010, Robert McGinn proposed some brief ethical guidelines for nano researchers (McGinn 2010, pp. 1–2). These covered the existing microethics issues of laboratory safety, intellectual property rights, and integrity of data; the existing mesoethics issue of dealing with the public; and the new macroethics issues of accepting social responsibility for protecting the safety and welfare of the public. One of his ethical responsibilities, for example, states that “if a NT [nanotechnology] researcher has reason to believe that her or his work will be applied to society so as to create a risk of significant harm to humans, he or she has an ethical responsibility to alert appropriate authorities about the potential danger.” This duty is similar to a provision that has long been part of engineering codes of ethics. Essentially what McGinn has done is to merge research and engineering ethics for researchers working in nanoscience and technology. Although he does not ground his guidelines in the extensive scholarship in research and engineering ethics, his attempt to merge these fields moves in the right direction, in my view (McGinn 2010, p. 9).10
A more radical project is to integrate concerns about social responsibility into the early stages of research and development (R&D) through a method called “midstream modulation.” One variant of this approach, “laboratory engagement studies,” embeds an ethicist in the laboratory to help researchers reflect on the “social responsibilities of their research practices.” Proponents of this approach refer to it as a form of learning, as researchers learn about the socioethical context of their work upon being prompted to reflect on it by the “embedded ethicist.” Daan Schuurbiers recently described his experiences with this type of intensive engagement with a small number of
9 Most scholarly research on nanoethics, however, does not address research practices. See McGinn (2010, p. 2) and Lewenstein (2006).
10 McGinn cites the NSPE code of ethics regarding the requirement for the engineer to be a faithful agent of the employer and to hold paramount the public’s health, safety, and welfare (notes 18 and 19, p. 4), but does not do so for his other ethical responsibilities. On the issue of macro-micro ethics in engineering, see Herkert (2005) and Kline (2010).
researchers at two biotechnology laboratories, one at Delft University of Technology in the Netherlands and one at Arizona State University. Both labs researched the production of alternative resources and fuels. Schuurbiers concluded that this engagement helped to make “broader socio-ethical issues more visible in the lab” and “encouraged research participants to critically reflect on these broader issues. Contrary to their initial claims, participants came to acknowledge that broader socio-ethical dimensions permeated their research” (Schuurbiers 2011, pp. 769, 786). One could not ask for much more than that when teaching social responsibility in research ethics.
These are just a few of the ways that the obstacle of the social agnosticism of science can be overcome to teach social responsibility in research ethics. Although this agnosticism is reinforced by the standard literature in engineering and research ethics, which was shaped by responses to scandals and accidents of the 1970s and 1980s, we live in a new era of nanoscience, biotechnology, and other emerging fields in which the hybridity of science and engineering is evident. I’m not advocating that social responsibility should be the number one priority in research ethics. I’m suggesting that we rebalance the priorities in that field to recognize that the traditional view of the relationship between science and engineering is untenable. If that is true for researchers in science and engineering, it should also be true for those of us who teach them ethics.
Broad W, Wade N. 1982. Betrayers of the Truth. New York: Simon and Schuster.
Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. 1992–1993. Responsible Science: Ensuring the Integrity of the Research Process, 2 vols. Washington: National Academy Press.
Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. 2009. On Being a Scientist: A Guide to Responsible Conduct in Research, 3rd ed. Washington: National Academies Press.
Davis M. 2001. Three myths about codes of engineering ethics. IEEE Technology and Society Magazine 20(3):8–14.
Davis M. 2012. ‘Ain’t No One Here but Us Social Forces’: Constructing the Professional Responsibility of Engineers. Science and Engineering Ethics 18:13–34.
Gold BD. 1992–1993. Congressional activities regarding misconduct and integrity in science. In: Responsible Science: Ensuring the Integrity of the Research Process, Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Washington: National Academy Press.
Hackett EJ, Amsterdamska O, Lynch M, Wajcman J, eds. 2008. Handbook of Science and Technology Studies, 3rd ed. Cambridge, MA: MIT Press.
Herkert J. 2000. Engineering ethics education in the USA: Content, pedagogy, and curriculum. European Journal of Engineering Education 25:303–313.
Herkert J. 2005. Ways of thinking about and teaching ethical problem solving: Microethics and macroethics in engineering. Science and Engineering Ethics 11:373–385.
Johnson D. 1992. Do engineers have social responsibility? Journal of Applied Philosophy 9:21–34.
Kevles DJ. 1998. The Baltimore Case: A Trial of Politics, Science, and Character. New York: W.W. Norton.
Kline RR. 2001/2002. Using history and sociology to teach engineering ethics. IEEE Technology and Society Magazine 20(4):13–20.
Kline R. 2005. Research ethics, engineering ethics, and science and technology studies. In: Encyclopedia of Science, Technology, and Ethics, ed. Carl Mitcham, vol. 1, pp. xxxv–xli. New York: Macmillan.
Kline R. 2010. Engineering case studies: Bridging micro and macro ethics. IEEE Technology and Society Magazine 29(4):16–19.
Levi BG. 2002. Investigation finds that one Lucent physicist engaged in scientific misconduct. Physics Today, November, p. 15.
Lewenstein B. 1992. Cold fusion and hot history. Osiris, 2nd series, 7:135–163.
Lewenstein B. 2006. What counts as a “social and ethical issue” in nanotechnology? In: Nanotechnology: Implications for Philosophy, Ethics, and Society, eds. Joachim Schummer and David Baird. Singapore: World Scientific.
Martin MW, Schinzinger R. 2005. Ethics in Engineering, 4th ed. New York: McGraw-Hill.
McGinn R. 2010. Ethical responsibilities of nanotechnology researchers: A short guide. Nanoethics 4:1–12.
Mitcham C. 2003a. Co-responsibility for research integrity. Science and Engineering Ethics 9:273–290.
Mitcham C. 2003b. Professional idealism among scientists and engineers: A neglected tradition in STS studies. Technology in Society 25:249–262.
Schrag B, ed. 2001. A pHish tale. In: Graduate Research Ethics: Cases and Commentaries, vol. 5. Bloomington: Association for Practical and Professional Ethics. Available online at www.onlineethics.org/Resources/Cases/pHish.aspx.
Schuurbiers D. 2011. What happens in the lab: Applying midstream modulation to enhance critical reflection in the laboratory. Science and Engineering Ethics 17:769–788.
Shamoo A, Resnik D. 2009. The Responsible Conduct of Research. Oxford: Oxford University Press.
Whitbeck C. 1995. Ethics in Engineering Practice and Research. Cambridge, UK: Cambridge University Press.