In the end, a commitment to the ethical standard of truthfulness, through an understanding of its meaning to science, is essential to enhance objectivity and diminish bias. Unfortunately, the ethos of concern for scientific misconduct continues to dominate the research-ethics movement. This focus is damaging because it turns the attention to seeking and finding wrong-doers and determining punishment rather than discussing generic issues of doing the right thing, preventing harms, seeking benefits, and understanding the right-making and wrong-making characteristics of actions. The focus on scientific misconduct makes ethical issues appear synonymous with legal issues and the search for ethical understanding synonymous with carrying out an investigation.
Synopsis: Integrity is essential to the functioning of the research enterprise and personally important to the vast majority of those who dedicate their lives to science. Yet research misconduct and detrimental research practices are facts of life. They must be understood and addressed. This chapter begins with a brief historical overview of misconduct in science, followed by a discussion of definitions and categories that the committee recommends for use by the research enterprise going forward. This framework retains many key aspects of the 1992 committee’s work but suggests several changes.
Prominent cases of research misconduct have been uncovered regularly for as long as science has existed as an organized activity. The Piltdown Man hoax of the early 20th century is perhaps the most famous of numerous archaeological hoaxes and frauds, which have continued into recent times. In 2000, amateur archaeologist Shinichi Fujimura was found to have planted artifacts in strata older than those in which they had actually been found, then "discovered" them. Other fields, such as evolutionary biology, are also represented: fraudulent work in the first half of the 20th century by Paul Kammerer and Trofim Lysenko purported to prove environmentally acquired inheritance. Questions have even been raised about the integrity of work by revered scientists of the past (Broad and Wade, 1983; Goodstein, 2010).
According to the report Responsible Science (NAS-NAE-IOM, 1992), “until [recently] scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process.” As discussed in Chapter 2, science and research have not had defined mechanisms for certification, licensure, and the imposition of penalties for unethical behavior of the sort that have developed in professions such as medicine, law, and some areas of professional engineering. Behaviors such as fabrication of research results and plagiarism might be punished by employers but were generally not subject to legal action, at least in the United States.1
Unethical behavior in research first emerged as a policy issue in connection with the treatment of human research subjects and laboratory animals. While ethical concerns about human subjects had been raised earlier, it was the Nazi and Japanese military experiments on prisoners during World War II that led to the development of formal international codes. The Tuskegee syphilis study by the U.S. Public Health Service (PHS), launched in the 1930s but subjected to publicity and critical examination only in 1972, provided further impetus for policy changes. Policies to protect human subjects and laboratory animals were adopted in the United States during the 1960s and 1970s.
A series of cases in which researchers fabricated data or plagiarized the work of others garnered considerable publicity and prompted congressional hearings in 1981 (Medawar, 1996; Rennie and Gunsalus, 2001; Steneck, 1994). Conflict-of-interest questions also began to arise in this period, as researchers stood to benefit from the studies they conducted through awards of stock and other rewards. Due in part to the growth of the research enterprise and the steady increase in federal funding for research, these high-profile cases of fabrication or plagiarism in publicly funded studies were seen as examples of defrauding taxpayers and drew congressional attention. Federal agencies began to develop policies on research misconduct during the 1980s. During the late 1980s and early 1990s, allegations of immunology data falsification and fabrication against pathologist Thereza Imanishi-Kari of Tufts University (a collaborator of Nobel Prize winner David Baltimore) and data falsification allegations against Mikulas Popovic and Robert Gallo at the National Institutes of Health attracted significant attention from Congress and the news media (Gold, 1993; Kaiser, 1997; Kevles, 1998). After lengthy, complicated, and controversial investigations and adjudication processes, none of the accused in these cases was found to have committed research misconduct. However, these cases provided an important impetus for federal agencies—the Department of Health and Human Services and the National Science Foundation (NSF) in particular—to regularize how allegations of research misconduct would be investigated and adjudicated by specifying the responsibilities of research institutions, the practices that constitute misconduct and are subject to corrective action, and the oversight roles of the agencies themselves.
These cases had a significant impact on the development of federal and institutional approaches to addressing misconduct. The evolution of these approaches is summarized in Table 4-1. Current approaches to addressing research misconduct and detrimental research practices are described in detail in Chapter 7.
Chapter 2 explored the values underlying research and the behaviors that express those values. As behaviors that violate those values, such as data fabrication, emerged as serious problems, researchers and policy makers sought to develop a framework of concepts and definitions to use in preventing, investigating, taking corrective action on, and otherwise addressing those behaviors. The remainder of this chapter reviews concepts and definitions of behaviors that violate the values of research, the evolution of definitions underlying U.S. federal policies, and alternatives that are used by some U.S. institutions as well as by governments and research institutions outside the United States. Rationales for different approaches are explored, and this committee’s recommended framework is presented and explained.

TABLE 4-1 Research Integrity Policy Time Line

| Year | U.S. Policy Changes | Important Contemporary Events |
| --- | --- | --- |
| Post–World War II | | Experiments on prisoners by the Nazis and Japanese military during WWII uncovered. |
| 1966 | Animal Welfare Act (P.L. 89-544) signed into law, providing for USDA oversight and regulation of facilities performing research on laboratory animals. | |
| 1972 | | Tuskegee syphilis experiment becomes public. |
| 1974 | Department of Health, Education, and Welfare (DHEW) raises the National Institutes of Health’s (NIH’s) Policies for the Protection of Human Subjects (issued in 1966) to regulatory status. The regulations established the institutional review board as one mechanism through which human subjects would be protected. National Research Act (P.L. 93-348) signed into law, creating the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. | |
| Mid-1970s–Early 1980s | | Several cases of research misconduct are uncovered and widely publicized, including the Summerlin, Soman, and Darsee cases. |
| 1979 | | Belmont Report released. |
| 1981 | The Department of Health and Human Services (HHS, formerly DHEW) and the Food and Drug Administration revise human subjects protection regulations based on work by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (the Belmont Report). HHS regulations are contained in Title 45, Part 46 of the Code of Federal Regulations. These regulations were revised in 1983 and 1991. | Investigations and Oversight Subcommittee of the House Science and Technology Committee, chaired by Rep. Albert Gore, Jr., holds hearings on fraud in biomedical research. |
| 1985 | Health Research Extension Act (P.L. 99-158) signed into law. Under one provision, HHS requires Public Health Service (PHS) funding applicant or awardee institutions to establish “an administrative process to review reports of scientific fraud” and “report to the Secretary any investigation of alleged scientific fraud which appears substantial.” NIH also established “a process for receiving and responding to reports from institutions.” This legislation complemented existing authority under which the PHS pursued research misconduct in the 1970s and early 1980s. Guidelines were published in the NIH Guide for Grants and Contracts in July 1986; the Final Rule, “Responsibilities of Awardee and Applicant Institutions for Dealing with and Reporting Possible Misconduct in Science,” was published in the Federal Register on August 8, 1989, and codified as 42 CFR Part 50, Subpart A. | |
| Mid- to late 1980s | | High-profile investigations of research misconduct allegations made against Robert C. Gallo and Thereza Imanishi-Kari receive significant media and congressional attention. |
| 1987 | National Science Foundation (NSF) establishes procedures for investigating scientific misconduct (Federal Register, Vol. 52, pp. 24486 ff, July 1, 1987). | |
| 1988 | | First edition of the NAS-NAE-IOM educational guide On Being a Scientist is published. |
| 1989 | PHS creates the Office of Scientific Integrity (OSI) in the Office of the Director, NIH, and the Office of Scientific Integrity Review (OSIR) in the Office of the Assistant Secretary for Health (OASH). NSF creates the Office of Inspector General, which assumes responsibility for investigating scientific misconduct. | |
| 1990 | NIH mandates responsible conduct of research training under certain training grants. | |
| 1991 | Adoption of the Federal Policy for the Protection of Human Subjects (“Common Rule”) by 16 federal agencies that conduct, support, or otherwise regulate human subjects research; the FDA also adopted certain of its provisions. | |
| 1992 | OSI and OSIR are consolidated into the Office of Research Integrity (ORI). HHS also establishes a mechanism for scientists formally charged with research misconduct to receive a hearing before the Research Integrity Adjudications Panel of the Departmental Appeals Board, HHS. | Responsible Science is published. |
| 1994 | | Ryan Commission report is released. |
| 1990s and 2000s | ORI extramural program supporting research and education efforts in responsible conduct of research (RCR) develops and grows. | |
| 1999 | Data Access Act requires that data from federally funded research be made available to requesting parties under Freedom of Information Act procedures. | |
| 2000 | Federal Policy on Research Misconduct becomes effective, establishing a common definition of research misconduct across the federal government. | |
| Early and mid-2000s | | Schön case, Hwang case, growing international interest, series of international reports. |
| 2007 | America COMPETES Act signed into law. Includes provision that applicants for NSF funding provide responsible conduct of research training to students and postdoctoral fellows participating in research. | |
| 2009 | OSTP launches federal scientific integrity activity. | |
Some issues affecting the advantages and disadvantages of alternative approaches only become clear when considering how concepts and definitions related to violations of research integrity are actually understood and utilized in specific contexts, such as institutional investigations of alleged misconduct that are overseen by federal agencies. Issues arising from implementation of these concepts and definitions are covered in Chapter 7.
In order to develop policies and implementing mechanisms that define how and under what circumstances research institutions are answerable to the federal government for the research-related behaviors of their employees, those behaviors must first be identified. It is in this context that the definitions of research misconduct and other terms have policy implications. These concepts and definitions also have a broader significance to the research enterprise and its stakeholders, since fostering high-quality research that advances knowledge requires identifying and preventing behaviors that violate the values of research (IAC-IAP, 2012).
The 1992 report Responsible Science put forward a framework of terms to describe and categorize behaviors that depart from scientific integrity (NAS-NAE-IOM, 1992). This framework was developed around the terms misconduct in science, questionable research practices, and other misconduct. One of the tasks of this committee was to examine this framework and make recommendations about whether and how it should be updated. The goal is to describe a framework of terms and definitions that is appropriate for today’s environment and that advances efforts to foster research integrity.
The sources or causes of actions that violate the values of research suggest different potential responses or approaches to preventing and addressing them. If the action arises from ignorance, education and mentoring may be the most appropriate responses. If the action arises from perverse incentives in the research enterprise, the removal or mitigation of those incentives may be warranted. If the action is criminal or violates the requirements of employment contracts or research grants, then appropriate penalties or other corrective actions would apply.
However, human actions often cannot be neatly ascribed to a single one of these causes. Rather, a given action can be multiply determined and therefore call for a multifaceted response. Furthermore, the causes of research misconduct and other actions that violate the values of research generally do not all lie within the individual. The social and institutional context of research, ranging from the atmosphere within a given research group to the national governance of research systems, creates incentives and disincentives for particular actions. These issues are explored in more detail in Chapter 6.
Developing a workable definition of research misconduct requires grappling with several issues. First, actions covered by the definition should represent significant departures from research values and related norms, whether these are field-specific or more global, and also be committed with the intent to mislead or deceive.
In addition, the definition of research misconduct should have clear and logically supportable boundaries. The actions included should be distinguished from transgressions that may occur on the part of researchers, and perhaps in the context of doing research, but which are better addressed by other frameworks. This will partly depend on what those other frameworks are, meaning that a definition of research misconduct appropriate in a given country might not be appropriate elsewhere. For example, while the United States has separate policies and regulations for dealing with accusations of fabrication of data, protecting human research subjects, and ensuring humane treatment of laboratory animals, in some countries these issues are covered by a unified regulatory framework.
Also, as will be discussed further below, research institutions themselves may choose to adopt definitions of research misconduct for the purposes of their own internal management and employment policies that are broader than the definition adopted by the federal government. In the discussion below, the appropriateness or suitability of research misconduct definitions is considered primarily from the standpoint of U.S. federal policy.
The 1992 Responsible Science report defined misconduct in science as “fabrication, falsification, or plagiarism in proposing, performing, or reporting research” (NAS-NAE-IOM, 1992). It added that misconduct in science does not include errors of judgment; errors in the recording, selection, or analysis of data; differences in opinions involving the interpretation of data; or misconduct unrelated to the research process. Further, failure in scientific research is to be expected, since exploration entails risks. Projects or studies that fall short of hopes and expectations are not a sufficient basis for identifying misconduct.
Since 1992 the definition of misconduct in science as fabrication, falsification, or plagiarism (FFP) has become a central feature of U.S. institutional and governmental approaches to addressing breaches of scientific integrity. In 2000 the term research misconduct was adopted by the Office of Science and Technology Policy (OSTP) in the Executive Office of the President as part of its Federal Policy on Research Misconduct and was defined as FFP:
I. Research Misconduct Defined
Research misconduct is defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.
Fabrication is making up data or results and recording or reporting them.
Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.
Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.
Research misconduct does not include honest error or differences of opinion. (OSTP, 2000)
Alternative Definitions and Non-FFP Elements
The adoption of FFP as the definition of research misconduct by OSTP came about through a lengthy, contentious process. Alternative definitions were developed, considered, and debated over a period of years. At the same time, and continuing to the present day, other countries have confronted similar issues and reached a variety of conclusions. Exploring these approaches is useful in understanding the relative advantages of the FFP-only definition of research misconduct and possible alternatives.
It is noteworthy that all of the alternative definitions of research misconduct that the committee is aware of—past or present, recommended or implemented—include fabrication, falsification, and plagiarism. The differences all emerge from the question of whether other behaviors should be included as well.
Other Serious Deviations
Prior to the adoption of the unified federal definition of research misconduct in 2000, the U.S. Public Health Service (which oversees research supported and performed by the National Institutes of Health) defined misconduct in science as “falsification, fabrication, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the research enterprise for proposing, conducting, or reporting research” (Rennie and Gunsalus, 1993). The definition specified that misconduct “does not include honest error or honest difference in interpretations or judgments of data” (Price, 2013). The National Science Foundation’s definition included FFP and “other serious deviations from accepted practices in proposing, carrying out, or reporting research results from activities funded by NSF” (Price, 2013). NSF’s definition also included “retaliation of any kind against a person who reported or provided information about suspected or alleged misconduct and who has not acted in bad faith.”
Both the PHS and NSF definitions allowed room to consider offenses other than FFP as research misconduct. Much of the research enterprise, including research universities and the associations representing them, opposed the inclusion of elements other than FFP in federal definitions, particularly the “other serious deviations” clause. For example, Responsible Science states that “the vagueness of this category has led to confusion about which actions constitute misconduct in science” (NAS-NAE-IOM, 1992). Concerns have also been raised that the clause would open the door to penalizing innovative approaches to research that could potentially yield significant advances.
A concrete illustration of the disagreement over “other serious deviations” arose when the Office of the Inspector General at the National Science Foundation (NSF-OIG) used the clause to launch a misconduct investigation against an investigator who “was accused of a range of coercive sexual offenses against various female undergraduate students and teaching assistants, up to and including rape” while on research trips to foreign countries led by the investigator (Buzzelli, 1993). While Office of Inspector General officials asserted that the case supported the need for the “other serious deviations” clause, one prominent scientist argued that the case represented “a preposterous and appalling application of the definition of scientific misconduct” (Schachman, 1993).
The “other serious deviations” clause remained in the two primary federal research misconduct definitions for a number of years following this case. During that time, there do not appear to have been additional cases in which its application was controversial, nor is there evidence that innovative research approaches were discouraged as a result, which suggests cause for skepticism about some of the arguments made against the clause. At the same time, it is not clear that the “other serious deviations” clause has been particularly missed in the years since. In the discussion below and in Chapter 7, the specific elements that might be covered by the “other serious deviations” clause are explored in order to determine whether there are research behaviors that might not be adequately investigated or subject to corrective action under current policies, and if so, whether changing the federal research misconduct policy is the best way to address them. Denmark’s experience with the Lomborg case and its aftermath, in which a controversial finding of “scientific dishonesty” was later overturned (discussed later in this chapter), serves as an additional cautionary example of what can occur when governments and institutions employ a broad, nonspecific definition of research misconduct.
On the basis of current knowledge, it appears that the “other serious deviations” clause and similar formulations may not have the adverse impacts on research that some have feared, but they may introduce the risk that a controversial or mishandled case could lead to turmoil and a loss of credibility on the part of the institutions and agencies charged with addressing research misconduct.
The Ryan Commission
In 1995, the Commission on Research Integrity was organized by Congress to “advise the Secretary of Health and Human Services and Congress about ways to improve the Public Health Service (PHS) response to misconduct in biomedical and behavioral research receiving PHS funding.” Known as the Ryan Commission after its chairman, Harvard professor Kenneth Ryan, it released a report on misconduct in research and treatment of good-faith whistleblowers (Commission on Research Integrity, 1995).
The report articulated the interest of the federal government in the integrity of research it funded and concluded that the definition of misconduct should be based on the “fundamental principle that scientists be truthful and fair in the conduct of research and the dissemination of research results.” The commission defined its driving concern as “What is in the best interest of the public and science?” Its work aimed to provide “vital guidance for personal and ethical judgments and decisions concerning the professional behavior of scientists.”
The commission recommended broadening the definition of misconduct beyond FFP to encompass misappropriation, interference, and misrepresentation:
1. Research Misconduct
Research misconduct is significant misbehavior that improperly appropriates the intellectual property or contributions of others, that intentionally impedes the progress of research, or that risks corrupting the scientific record or compromising the integrity of scientific practices. Such behaviors are unethical and unacceptable in proposing, conducting, or reporting research, or in reviewing the proposals or research reports of others.
Examples of research misconduct include but are not limited to the following:
Misappropriation: An investigator or reviewer shall not intentionally or recklessly
- plagiarize, which shall be understood to mean the presentation of the documented words or ideas of another as his or her own, without attribution appropriate for the medium of presentation; or
- make use of any information in breach of any duty of confidentiality associated with the review of any manuscript or grant application.
Interference: An investigator or reviewer shall not intentionally and without authorization take or sequester or materially damage any research-related property of another, including without limitation the apparatus, reagents, biological materials, writings, data, hardware, software, or any other substance or device used or produced in the conduct of research.
Misrepresentation: An investigator or reviewer shall not with intent to deceive, or in reckless disregard for the truth,
- state or present a material or significant falsehood; or
- omit a fact so that what is stated or presented as a whole states or presents a material or significant falsehood. (Commission on Research Integrity, 1995)
The commission based its recommendation to include “interference” as an element of misconduct on testimony it received about cases in which researchers sabotaged the experiments of others or absconded with vital data, arguing that existing laws against vandalism were often not adequate to address these situations. It also recommended defining as other forms of “professional misconduct” the obstruction of investigations of research misconduct and repeated noncompliance with research regulations after notice. Finally, the commission made several recommendations concerning the conduct and oversight of investigations, including a “Whistleblower’s Bill of Rights.”
The Ryan Commission’s proposed misappropriation, interference, and misrepresentation definition of research misconduct was opposed by some members of the research enterprise, including the leadership of the Federation of American Societies for Experimental Biology and the National Research Council of the National Academy of Sciences. The criticisms of the definition focused on two issues.2 First, the definition took the form of “leading principles with examples,” which was characterized as “vague and open-ended” (Alberts et al., 1996). The commission’s report had itself argued that fabrication, falsification, and plagiarism as understood in the agency policies in effect at that time were “neither narrow nor precise” (Commission on Research Integrity, 1995). Second, regarding the examples themselves, the concern was raised that the inclusion of omitting facts as an example of misrepresentation could open the door to regarding omissions or mistakes in citation as misconduct (Glazer, 1997). While many of the commission’s recommendations were later incorporated into governmental regulatory approaches, its approach to the definition of research misconduct was abandoned.
Alternative Research Misconduct Definitions Used by U.S. Research Institutions and Private Sponsors
While U.S. research institutions must apply the federal research misconduct definition to federally supported work, they are free to adopt definitions of research misconduct that include behaviors other than FFP. A recent analysis found that more than half of 189 universities studied “had research misconduct policies that went beyond the federal standard” (Resnik et al., 2015). The most common non-FFP element was “other serious deviations,” with more than 45 percent of institutions including it. Other misconduct elements adopted by at least 10 percent of institutions were “significant or material violations of regulations,” “misuse of confidential information,” “misconduct related to misconduct,” “unethical authorship other than plagiarism,” “other deception involving data manipulation,” and “misappropriation of property/theft” (Resnik et al., 2015). Institutional investigations of non-FFP misconduct are not reported to federal agencies or reviewed by them. Most of the policies that went beyond FFP were adopted after 2001, and a higher proportion of institutions in the lowest quartile of research funding adopted such policies than those in the upper quartiles.
Nonfederal research sponsors may also adopt research misconduct definitions different from those of the federal government. For example, the Howard Hughes Medical Institute’s policy, adopted in 2007, defines research misconduct as FFP and “any other serious deviations or significant departures from accepted and professional research practices, such as the abuse or mistreatment of human or animal research subjects” (HHMI, 2007).
Policy approaches to fostering research integrity vary widely around the world, and the same variety can be seen in how research misconduct is defined (or not defined). A recent survey of research misconduct policies around the world found that 22 of the top 40 R&D-performing countries have national policies, and several more are in the process of developing one (Resnik et al., 2015). All of the countries that have policies include FFP in their definitions, with many including additional elements such as unethical authorship and publication practices, other serious deviations, and violation of regulations protecting human research subjects or laboratory animals (Resnik et al., 2015). The following examples illustrate the choices other countries have made, which are relevant to the question of how U.S. definitions and policies operate in a global context.
Research Councils UK, the organization of the United Kingdom’s government-funding agencies, has a lengthy and detailed definition of “unacceptable conduct”:
Unacceptable conduct includes each of the following:
Fabrication: This comprises the creation of false data or other aspects of research, including documentation and participant consent.
Falsification: This comprises the inappropriate manipulation and/or selection of data, imagery and/or consents.
Plagiarism: This comprises the misappropriation or use of others’ ideas, intellectual property or work (written or otherwise), without acknowledgement or permission.
Misrepresentation, including:
- misrepresentation of data, for example suppression of relevant findings and/or data, or knowingly, recklessly or by gross negligence, presenting a flawed interpretation of data;
- undisclosed duplication of publication, including undisclosed duplicate submission of manuscripts for publication;
- misrepresentation of interests, including failure to declare material interests either of the researcher or of the funders of the research;
- misrepresentation of qualifications and/or experience, including claiming or implying qualifications or experience which are not held;
- misrepresentation of involvement, such as inappropriate claims to authorship and/or attribution of work where there has been no significant contribution, or the denial of authorship where an author has made a significant contribution.
Breach of duty of care, whether deliberately, recklessly or by gross negligence:
- disclosing improperly the identity of individuals or groups involved in research without their consent, or other breach of confidentiality;
- placing any of those involved in research in danger, whether as subjects, participants or associated individuals, without their prior consent, and without appropriate safeguards even with consent; this includes reputational danger where that can be anticipated;
- not taking all reasonable care to ensure that the risks and dangers, the broad objectives and the sponsors of the research are known to participants or their legal representatives, to ensure appropriate informed consent is obtained properly, explicitly and transparently;
- not observing legal and reasonable ethical requirements or obligations of care for animal subjects, human organs or tissue used in research, or for the protection of the environment;
- improper conduct in peer review of research proposals or results (including manuscripts submitted for publication); this includes failure to disclose conflicts of interest; inadequate disclosure of clearly limited competence; misappropriation of the content of material; and breach of confidentiality or abuse of material provided in confidence for peer review purposes.
Improper dealing with allegations of misconduct:
- failing to address possible infringements, including attempts to cover up misconduct or reprisals against whistle-blowers;
- failing to deal appropriately with malicious allegations, which should be handled formally as breaches of good conduct. (RCUK, 2013)
Another example is Denmark, whose approach has evolved over time. The Danish Committees on Scientific Dishonesty (DCSD) originated with a committee established by the Danish Medical Research Council in 1992, with additional committees added in 1998 to cover all of science (Resnik and Master, 2013). At first, the DCSD employed a broad definition of scientific dishonesty based on “actions or omissions in research which give rise to falsification or distortion of the scientific message or gross misrepresentation of a person’s involvement in the research” (DCSD, 2015), with nine specific elements, including FFP, as well as “consciously distorted reproduction of others’ results” and “inappropriate credit as the author or authors” (DCSD, 2002).
However, in 2003, the DCSD investigated allegations of scientific dishonesty made against Bjørn Lomborg, whose book The Skeptical Environmentalist challenged the view that global environmental problems are worsening. The DCSD’s finding that Lomborg had committed scientific dishonesty was controversial and was ultimately overturned by Denmark’s Ministry of Science, Technology and Innovation, which cited insufficient evidence and arguments and an overly broad definition of scientific dishonesty (Resnik and Master, 2013). Several years later, Denmark’s definition of scientific dishonesty was narrowed to the following (DCSD, 2014):
The term “scientific dishonesty” (research misconduct) is defined as: falsification, fabrication, plagiarism and other serious violations of good scientific practice committed intentionally or due to gross negligence during the planning, implementation or reporting of research results.
There have been several international efforts to foster research integrity at the regional or global levels. For example, the European Code of Conduct for Research Integrity puts forward a definition that includes FFP as well as:
failure to meet clear ethical and legal requirements such as misrepresentation of interests, breach of confidentiality, lack of informed consent and abuse of research subjects or materials. Misconduct also includes improper dealing with infringements, such as attempts to cover up misconduct and reprisals on whistleblowers. (ESF-ALLEA, 2011)
Intention plays a critical role in finding that a researcher has committed misconduct. Fabrication and falsification are generally associated with an intention to deceive. If a researcher produces incorrect results out of negligence or carelessness, the behavior is typically criticized but would not be considered misconduct, since there was no conscious deception. Plagiarism is often intentional but can also result from sloppy work practices that could be characterized as “reckless.” In addition to stipulating that research misconduct does not include “honest error,” the federal research misconduct policy provides that the behavior must be “committed intentionally, or knowingly, or recklessly” for a finding of misconduct to be warranted (OSTP, 2000).
Dresser (1993) has pointed out that terms such as “intentional” and “fraudulent” are too broad and poorly defined to be useful in determining the culpability of researchers and in establishing penalties and other corrective steps for a given action. She pointed instead to the 1962 publication of the Model Penal Code, which sought to replace the “eighty or so” culpability terms previously found in state and federal criminal codes with four culpable mental states (American Law Institute, 1985). Individuals act “purposely” if their “conscious object” is to engage in proscribed conduct. They act “knowingly” if they are aware of a high probability that they are engaging in such conduct. They act “recklessly” if they are aware of and “consciously disregard” a substantial risk that they are engaging in prohibited conduct. And they act “negligently” if they should be aware of a substantial risk that they are engaging in prohibited conduct. The first three terms describe “subjective” culpability, in which an individual has some level of personal awareness of engaging in prohibited behavior.
Distinguishing “honest error” from deception can be very difficult, yet it is important for those charged with investigating an allegation to try to do so. A classic example is the “cold fusion” episode of 1989 involving Martin Fleischmann and B. Stanley Pons of the University of Utah (Goodstein, 2010). While that case involved research behavior that fell far short of good research practices, many observers and experts believe that it did not rise to the level of misconduct. The Fleischmann-Pons case also featured choices by the researchers and their institution to announce results by press conference and to protect intellectual property through secrecy rather than publication, choices that remain controversial to this day. Even in the most egregious cases, a researcher may claim extenuating circumstances, negligence, or error rather than admitting culpability. Furthermore, a researcher engaging in such behavior may choose not to examine the motivations behind those acts so as to reduce personal accountability. In such cases, it can be difficult to establish culpability for a given behavior.
The intent to deceive is often difficult to prove. Proof almost always relies on circumstantial evidence, which can include an analysis of the behavior of the person accused of misconduct. One commonly accepted principle, adopted by the Ryan Commission, is that the intent to deceive may be inferred from a person’s acting in reckless disregard for the truth (Commission on Research Integrity, 1995). Given the difficulty of establishing intent, providing guidance of this sort to committees investigating misconduct allegations would likely be valuable going forward.
Implications of Retaining FFP as the Federal Misconduct Definition and Possible Changes
The above review of the debate over the U.S. research misconduct definition, and of past and present alternatives, reveals examples of non-FFP behaviors that could be included in an amended federal research misconduct definition. Whether they should be depends on whether the behavior is adequately addressed under current policies related to research misconduct and other areas and, if not, whether the behavior would be addressed most effectively by including it in the federal research misconduct definition rather than through other options. For example, some behaviors that are included in non-U.S. definitions of research misconduct (such as violating the rights of human research subjects) are already addressed by a well-developed set of regulations and institutions in the United States (see the discussion of “other misconduct” below). Therefore, they will not be considered further in this context. Other behaviors, such as sabotaging the experiments of others or retaliating against good-faith whistleblowers, are worth examining in light of how the federal policy on research misconduct is actually operating within institutions and with regard to agency oversight. These issues will be discussed in Chapter 7.
In the meantime, it is worth considering an issue that the committee spent considerable time discussing, that of authorship misrepresentation that might not be clearly included in OSTP’s definition of plagiarism. A footnote in the 1992 report Responsible Science states that “it is possible that some extreme cases of noncontributing authorship may be regarded as misconduct because they constitute a form of falsification” (NAS-NAE-IOM, 1992). Responsible Science also noted that in 1989 a Public Health Service annual report of its activities to address research misconduct included several abuses of authorship in examples of misconduct, such as “preparation and publication of a book chapter listing
co-authors who were unaware of being named as co-authors,” and “engaging in inappropriate authorship practices on a publication and failure to acknowledge that data used in a grant application were developed by another scientist.” It should be noted that this formulation predated the 2000 federal policy on research misconduct and could have included cases considered under the “other serious deviations” provision.
As in the cases of whistleblower retaliation and sabotage, evaluating whether changes in federal policy should be made to better address authorship abuses involves considering the scale of the problem and weighing the advantages and disadvantages of policy changes against other alternatives. This will be covered in Chapter 7.
The 1992 Responsible Science report identified an additional set of actions “that violate traditional values of the research enterprise and that may be detrimental to the research process,” but for which “there is at present neither broad agreement as to the seriousness of these actions nor any consensus on standards for behavior in such matters.” As examples of these actions, it cited
failing to retain significant research data for a reasonable period, maintaining inadequate research records, conferring or requesting authorship on the basis of a specialized service or contribution that is not significantly related to the research reported in the paper, refusing to give peers reasonable access to unique research materials or data that support published papers, using inappropriate statistical or other methods of measurement to enhance the significance of research findings, and misrepresenting speculations as fact or releasing preliminary research results, especially in the public media, without providing sufficient data to allow peers to judge the validity of the results or to reproduce the experiments.
Many of the actions the 1992 panel identified as questionable research practices (often labeled QRPs) have attracted less institutional consensus than research misconduct, and consequently there is less agreement on policies and incentives to address them. However, this committee has concluded that some of these practices are not questionable at all but are clear violations of the fundamental tenets of research. As will be covered in detail in Chapter 5, the past several decades of experience have clarified the damage that these practices wreak on the research enterprise, damage that may exceed that caused by research misconduct. Codes of responsible conduct of research in other countries include some of these practices in definitions of research misconduct that are broader than the U.S. definition.
Also, it is important to remember that Responsible Science and other analyses of its time focused on the actions of individual researchers, and that their concepts and definitions were framed accordingly. In light of several decades
of subsequent experience and the massive changes in the scientific landscape detailed in Chapter 3, it is clear that the organizations that make up the research enterprise, such as research institutions, research sponsors, and journals, may also engage in behaviors that damage research integrity. It is just as necessary to identify and actively discourage these organizational actions and incentives as it is to better address individual behaviors.
This committee believes that many of the practices that up to now have been considered questionable research practices, as well as damaging behaviors by research institutions, sponsors, or journals, should be considered detrimental research practices (DRPs). Researchers, research institutions, research sponsors, journals, and societies should discourage and in some cases take corrective actions in response to DRPs.
Rather than develop a definitive list and specific corrective actions, the committee seeks to catalyze discussion within the research enterprise on what can be done to discourage DRPs more actively than has been done up to now. Indeed, the committee’s primary recommended response to DRPs is for all participants in the research enterprise to seek to significantly improve practices. How this may be done is covered in detail in Chapter 9.
The following are examples of DRPs that the committee has considered and agreed on:
- Detrimental authorship practices that may not be considered misconduct, such as honorary authorship, demanding authorship in return for access to previously collected data or materials, or denying authorship to those who deserve to be designated as authors;
- Not retaining or making data, code, or other information/materials underlying research results available as specified in institutional or sponsor policies, or standard practices in the field;
- Neglectful or exploitative supervision in research;
- Misleading statistical analysis that falls short of falsification;
- Inadequate institutional policies, procedures, or capacity to foster research integrity and address research misconduct allegations, and deficient implementation of policies and procedures; and
- Abusive or irresponsible publication practices by journal editors and peer reviewers.
DRPs, including how and why they are harmful and how they should be discouraged, are explored further in Chapter 5.
In addition to research misconduct and questionable research practices, Responsible Science identified a category of unacceptable behaviors that the panel termed other misconduct. These behaviors are not unique to the conduct of
research even when they occur in a research environment. Such behaviors include “sexual and other forms of harassment of individuals; misuse of funds; gross negligence by persons in their professional activities; vandalism, including tampering with research experiments or instrumentation; and violations of government research regulations, such as those dealing with radioactive materials, recombinant DNA research, and the use of human or animal subjects.”
Because such actions are not unique to the research process, they do not constitute research misconduct, the panel said. They should, therefore, be addressed in other ways, such as the legal system, employment actions, or other mechanisms that address violations of professional standards. However, the panel added that some forms of other misconduct are directly associated with research misconduct, including “cover-ups of misconduct in science, reprisals against whistle-blowers, malicious allegations of misconduct in science, and violations of due process protections in handling complaints of misconduct in science.” As a result, these forms of other misconduct “may require action and special administrative procedures” (NAS-NAE-IOM, 1992).
As discussed above, whistleblower retaliation and tampering/sabotage will be explored further in Chapter 7. Otherwise, this committee agrees that the category of other misconduct should remain as it was recommended in Responsible Science.