By working collaboratively, researchers can hope to answer questions never addressed before, including those with substantial influence on society. At the same time, today’s international, interdisciplinary, team-oriented, and technology-intensive research has created an environment more fraught with the potential for error and distortion.
Synopsis: A number of the elements in the research environment that were identified in the early 1990s as perhaps problematic for ensuring research integrity and maintaining good scientific practices have generally continued along their long-term trend lines, including the size and scope of the research enterprise, the complexity of collaboration, the growth of regulatory requirements, and the importance of industry sponsorship and entrepreneurial research. Several important new trends that were not examined in the 1992 Responsible Science report have also emerged, including the pervasive and growing importance of information technology in research, the globalization of research, and the increasing relevance of knowledge generated in certain fields to policy issues and political debates. These changes—the growing importance of information technology in particular—have led to important shifts in the institutions that support and underlie the research enterprise, such as scholarly publishing. They also have important implications for the ability of researchers, research institutions, journals, and sponsors to foster integrity and prevent research misconduct and detrimental research practices.
The 1992 report Responsible Science: Ensuring the Integrity of the Research Process devoted a chapter to describing the contemporary research environment and outlining the most important changes that had occurred over the previous decades (NAS-NAE-IOM, 1992). Responsible Science also described several additional features of the U.S. research scene of the early 1990s that had become the subject of discussion and concern due to possible negative impacts on the research environment, including research integrity (NAS-NAE-IOM, 1992). This chapter will first explore the research environment issues identified in 1992—except for the reward system in science, which is covered in Chapter 6—and describe trends over the past two decades. The second part of the chapter will
explore several important shifts in the research environment that have appeared since 1992 and were not considered in Responsible Science. These shifts carry several important implications for research integrity.
HOW RESEARCH ENVIRONMENT ISSUES IDENTIFIED IN RESPONSIBLE SCIENCE HAVE EVOLVED SINCE THE EARLY 1990s
Size and Scope of the Research Enterprise
The 1992 report’s overview described growth in the size and scope of the research enterprise. The report observed that research in the pre–World War II United States—academic research in particular—was a mostly small-scale avocation of individual scientists, supported by limited funding from industry, government, and foundations. Following the significant wartime contributions of research efforts such as MIT’s Radiation Laboratory, federal support for science and engineering research increased rapidly. By 1991, research and development (R&D) was a $160 billion (current dollars) enterprise in the United States, employing about 744,000 people in industrial, academic, and governmental laboratories and producing more than 140,000 research articles annually (NSB, 1996, 2014b; OECD, 2015).
Over the following two decades, the enterprise continued to grow: U.S. R&D totaled $456 billion in 2013, R&D employment rose to about 1,252,000, and the number of published research articles reached more than 412,000 (NSB, 2014b, 2016; OECD, 2015). The 1992 report paid particular attention to the growth in academic research and federal support, and this growth has continued. Between 1991 and 2014, academic R&D grew from around $17.5 billion to $67.1 billion, with federal support constituting 60–75 percent of the total (NSB, 2016).1 The number of science, engineering, and health doctorate holders employed in academia rose from 211,000 in 1991 to almost 309,000 in 2013 (NSB, 2016). The number of PhDs awarded in science and engineering more than doubled, from approximately 19,000 in 1988 to almost 37,000 in 2013, with an increasing percentage of these doctorate recipients going to work outside academia (NSB, 2016).
The 1992 Responsible Science report raised the concern that the increased size of the research enterprise might put stresses on key capabilities, such as the “overall workload associated with critical evaluation” (NAS-NAE-IOM, 1992). The number and capacity of effective peer reviewers might not be keeping pace with the relentless growth in manuscripts and proposals. Concerns also have been raised about the increasing use of bibliometric-based metrics in evaluating
1 From 2010, the total includes academic R&D outside of science and engineering, which adds several billion dollars.
research as a substitute for expert judgment (P. B. Lowry et al., 2012).
Complexity of Collaboration
Responsible Science described the growth of collaborative research after World War II, which has continued since the early 1990s. In contrast to earlier times, when articles with more than four co-authors and work involving more than one laboratory or research institution were rare, collaborative research of various types is now very common. The number of authors listed on articles is only one measure of collaboration, but it clearly reveals the overall trend. An analysis of approximately 20 million research articles published since 1955 and 2 million patents registered since 1975 found that the average number of authors on scientific papers grew from 1.9 in 1955 to 3.5 in 2000 (Wuchty et al., 2007). At the same time, single-author articles are becoming less common, constituting only about 11 percent of the total in 2012 (King, 2013).
Several factors are driving the trend toward larger-scale research in general and in specific fields (Stephan, 2012a). These include the need for more elaborate and expensive equipment and the often related requirement for a variety of specialized skills and knowledge. These characteristics of “big science” have long been a given in fields such as high-energy physics and astronomy, in the form of particle accelerators such as the Large Hadron Collider and modern telescopes. They have become more prominent recently in many areas of the life sciences as well. In describing the results of large life sciences research projects such as the Human Genome Project and ENCODE (Encyclopedia of DNA Elements), former Science editor-in-chief Bruce Alberts (2012) noted that “the increased efficiency of data production by such projects is impressive.” In addition, as will be discussed in more detail below, the information technology revolution has radically lowered the costs of communication and collaboration of all types, including research collaboration.
Another factor contributing to the growth of team research has been an increase in the amount of interdisciplinary research. Interdisciplinary research efforts have continued to grow in importance and are extremely diverse (Derrick et al., 2012). Interdisciplinary teams can range from local and informal to transnational and highly structured. They can be composed largely or entirely of researchers accustomed to working within a disciplinary framework, or they can consist partly or wholly of researchers who have been educated and have worked in interdisciplinary fields. Integration of knowledge from multiple disciplines can occur within the mind of a single person or through the collaborative efforts of a large team. For example, with the advent of “big data” and computational science, statisticians are increasingly included on projects where researchers have collected domain-specific data that they do not have the expertise to analyze. Interdisciplinary research is often focused on problems that have important societal implications. One current example of a growing interdisciplinary field is synthetic biology, which seeks a fundamental understanding of the workings of living systems along with the capability of re-creating living systems for a variety of applications in areas such as medicine and the environment. Synthetic biology research involves “biologists of many specialties, engineers, physicists, computer scientists, and others” (NRC, 2010).
According to one analysis of trends in interdisciplinary research in six research fields, the growth of interdisciplinarity has been modest—about 5 percent—even as the number of authors per article has grown by 75 percent (Porter and Rafols, 2009). This study found that the number of disciplines cited by papers in these six fields—mathematics, neurosciences, electrical and electronic engineering, biotechnology and applied microbiology, research and experimental medicine, and atomic, molecular, and chemical physics—has increased, but the distribution of citations is within neighboring research areas and has only slightly broadened. According to the authors, “These findings suggest that science is indeed becoming more interdisciplinary, but in small steps—drawing mainly from neighboring fields and only modestly increasing the connections to distant cognitive areas.”
Collaborative science requires that researchers focus at least some attention on coordination and interaction, which in theory might detract from the time and effort devoted to research. Yet Wuchty et al. (2007) found that multiauthor teams produced more highly cited work in each broad area of research and at each point in time. In addition, though solo authors in 1955 were more likely to produce papers that were highly cited, suggesting that these papers reported on the most influential concepts, results, or technologies, teams are more likely to produce highly cited papers today. As the authors wrote, “solo authors did produce the papers of singular distinction in science and engineering and social science in the 1950s, but the mantle of extraordinarily cited work has passed to teams by 2000.”
As more researchers work collaboratively and as the size of teams grows, the relationships among team members can become more complex. Team members can be at different research institutions and have different disciplinary backgrounds. Teams can contain researchers at all stages of their careers, from undergraduate and graduate students involved in research to senior researchers. The diversity and geographic spread of people involved in teams can create opportunities for miscommunication, misunderstandings, unrealistic expectations, and unresolved disputes. Whether these opportunities account for part of the increase in reports of undesirable research practices is unclear, but they can make the research environment more complicated and difficult than when teams were smaller, colocated more regularly, and more homogeneous in terms of discipline or nationality.
As research projects are undertaken by larger groups that bring together a greater diversity of expertise, encompass a broader range of disciplines, and strive for a greater degree of synthesis, the potential for misunderstandings can grow. Coordination of research inevitably becomes more complex, and the members
of a team may have less familiarity with the discipline-specific practices of other team members, making it more difficult for each collaborator to check and verify the work done by others. As the number of collaborators increases, there is more scope for disagreements over the allocation of credit. It becomes much more challenging to reward and recognize individual contributions, a difficulty that falls especially heavily on junior researchers. In addition, the mentoring of students in responsible research practices can become more impersonal and generic. The mental model of graduate education and training in which mentors work closely with graduate students and are able to take the time and effort to ensure that mentees understand the rules and can follow them may describe a smaller and smaller part of the research enterprise. Interdisciplinary work increases the possibility that the standards and expectations of different fields may come into conflict.
Regulation and Accountability
The 1992 report also noted that research activities were “increasingly subject to government regulations and guidelines that impose financial and administrative requirements” in areas such as laboratory safety, human subjects protection, drug-free workplace assurance, laboratory animal care, and the research use of recombinant DNA and toxic and radioactive materials. Along with the relatively new requirements and regulations related to research misconduct, the development of which is covered in Chapter 4 of this report, ensuring compliance with these expanding regulatory requirements had resulted in an expansion of administrative and oversight functions and staff at universities and required increasing time and attention from investigators. As an increasing percentage of faculty time goes toward fulfilling the requirements of various regulations and reporting requirements, research-related tasks such as mentoring and checking the work of subordinates may be shortchanged.
The administrative and regulatory compliance burden on research institutions and researchers remains significant. For example, respondents to a 2012 survey of 13,453 principal investigators undertaken by the Federal Demonstration Partnership estimated that, on average, 42 percent of the time they spent working on federally funded research projects was devoted to meeting regulatory and administrative requirements (Schneider et al., 2012). According to the survey results, areas of regulation where compliance is particularly time consuming include those related to finances, personnel, and effort reporting. In 2014 the National Science Board issued a report that analyzes the regulatory compliance burden on faculty and makes recommendations for how it might be reduced (NSB, 2014c). A 2016 National Academies report evaluated current approaches to regulating academic research and made recommendations for achieving the goals of regulation while reducing financial and time burdens on institutions and faculty (National Academies of Sciences, Engineering, and Medicine, 2016).
Industry-Sponsored Research and Other Research Aimed at Commercialization
Increasingly, the scientific enterprise has been recognized not only as a place to expand knowledge but also as an engine for the creation of new products, novel therapies for disease, improved technologies, and new industries and jobs. To quote President Obama (2009b), “scientific innovation offers us a chance to achieve prosperity.” The economic potential of science, however, also poses distinctive challenges to the responsible conduct of research, which were described in Responsible Science. These challenges arise in scientific research conducted in industrial settings, in research conducted at universities and research institutions in collaboration with industry, and in university research that leads to entrepreneurial efforts by the researchers. In each setting, researchers must integrate, both within themselves and in their professional behavior, often divergent cultural understandings about the nature, purposes, and outcomes of research. The challenges include the potential of economic incentives to introduce scientific bias, the perception of conflict of interest due to economic incentives, and the potential effect of intellectual property protection on the timely dissemination of knowledge.
Industry funds and conducts a substantial amount of research in the United States. For both basic and applied research, as defined by the National Science Foundation, industry conducts 40 percent of the U.S. total (NSB, 2016). Even considering just basic research, industry conducts approximately 24 percent, almost 90 percent of which it funds itself. Unlike academic research, corporate research is often driven by a company’s need to remain financially solvent and accountable to shareholders. Corporate researchers often work within hierarchical chains of supervision in which management maintains greater control over the research process.
Only a fraction of the results of industry-funded research is published in the scientific and engineering literature and thereby submitted to formal peer review. Of the articles published in 2013, authors from industry accounted for only 6 percent of the total, and that percentage has been declining over the past two decades (NSB, 2014). This is partly a product of the need to protect intellectual property interests through trade secrets and patents. One consequence is that the knowledge gained in such research may not be widely disseminated or evaluated through the peer review process. This is not to say that such industry research is not of high quality or is not carefully reviewed. Companies can have strict protocols regarding the collection, documentation, and storage of data, particularly when there are strong regulatory or economic reasons to do so. Checking mechanisms may be built into industrial research to verify especially critical results (Williams, 2012). And, as with all research, the use of research results in subsequent activities—including the production of commercial products—provides further checks on the validity of results.
However, both industrial research and industry-sponsored research in academic settings have been found to occasionally show signs of both unintentional and intentional bias.2 For example, one might observe bias in the lack of publication of results with negative consequences for the profitability of a product or in the restriction of published findings to those that reflect positively on a product. An extreme case is the tobacco industry, which undertook a systematic effort over the course of decades to obscure the harmful effects of smoking (Proctor, 2011). Other examples include episodes of alleged ghostwriting in some medical research, including the Paxil case described in Appendix D and also discussed in Chapter 7. Such research tarnishes all other research by demonstrating that research agendas and techniques can be manipulated so severely as to subvert truth to other interests. Many journals have moved to reporting the financial interests of authors, whether the work has an industry sponsor or not, so that readers are made aware of the potential for bias.
In addition to collaborations with established industries, academic institutions have increasingly encouraged entrepreneurship and innovation for commercialization, particularly since the passage of the Bayh-Dole Act in 1980, which allowed institutions to hold patents on innovations produced with federal funding. Having seen the success of academic research products such as Gatorade and the Google search algorithm patent in generating revenue, institutions may hope that their researchers can achieve similar results. For fiscal 2011 the Association of University Technology Managers reported that the 186 institutions responding to its annual survey earned a total of $1.5 billion in running royalty income, executed 4,899 licenses, created 591 commercial products, and formed 671 start-up companies from their research (AUTM, 2012).
One result of the commercialization of university-generated technology is that the need to manage possible conflicts of interest has become an important issue in academic settings. A 2009 Institute of Medicine report explores the issue of institutional conflict of interest in more detail (IOM, 2009). Individual conflicts of interest exist if the investigator is also the founder of a company conducting research or has a significant monetary stake in the research. A conflict can also apply to an institution if it owns part of a company or has a financial stake in a faculty member’s research findings. Under the U.S. Financial Conflict of Interest (FCOI) policy, institutions receiving research funding from the Public Health Service are required to maintain and enforce an FCOI policy; manage, reduce, or eliminate identified conflicts; report identified conflicts, the value of the conflicts, and a management plan to the Public Health Service Awarding Component; and publish significant financial interests of any personnel involved in the research on a publicly accessible website (HHS, 2011b). Currently, the Department of Health and Human Services does not regulate institutional conflicts of interest in the same manner as investigator FCOIs, which must be disclosed. Strengthened institutional FCOI regulations have been considered, but they require further and separate consideration.
2 This is not meant to imply that research that is not sponsored by industry is necessarily unbiased.
The National Science Foundation policy is consistent with that of the Department of Health and Human Services. Regulations of individual financial conflicts of interest are further discussed in Chapter 7 and are also addressed in the context of best practices in Chapter 9.
Additional individual conflicts of interest, or secondary interests, can also affect a research study, including political biases, white hat bias, commitment conflicts, career considerations, and favors to others (IOM, 2009; Lesser et al., 2007). A political opinion, bias, or long-standing scientific viewpoint favoring one position or another may influence the interpretation of findings, despite contradictory evidence (Lesser et al., 2007). Similarly, white hat bias, or “bias leading to distortion of information in the service of what may be perceived to be righteous ends,” also has the potential to influence conclusions (Cope and Allison, 2010). An example of a conflict of commitment would be a principal investigator who does not have the time to perform all the duties for which he or she is responsible, such as securing funding, setting the overall direction for research in a lab, handling administrative responsibilities, and adequately supervising graduate students and postdocs. Secondary interests are rarely regulated, as they are considered a lesser incentive than financial interests.
Closer linkages between research and commercialization have introduced the possibility of financial gain from research more widely across the enterprise. This can pose challenges in terms of defining appropriate behavior and establishing guidelines for dealing with conflicts of interest, and it can complicate collaborations among individual researchers and among organizations.
Information Technologies in Research
The continued exponential rise in the power of information and computing technologies has had a dramatic impact on research across many disciplines. These technologies have not only increased the speed and scope of research but have made it possible to conduct investigations that were not possible before. Information technology advances have enabled new forms of inquiry such as those based on numerical simulation of physical and biological systems and the analysis of massive datasets to detect and assess the nature of relationships that otherwise would go unseen.
The contrast in computing capabilities since the publication of Responsible Science is especially stark. In 1992, use of e-mail was less than a decade old, and the World Wide Web had just been invented and was not widely known. Three-and-a-half-inch floppy disks for data storage had replaced 5-1/4-inch disks just a few years before. People made telephone calls on landlines, used letters to communicate in writing, and circulated preprints via the postal system. For
young researchers, the circumstances in which research was conducted in 1992 are almost entirely foreign.
One effect of information technologies in many areas of research has been to introduce intermediate analyses of considerable complexity between the results of research and the “raw” data, whether gathered by sensors and observations or produced by data-creating devices such as DNA sequencers. Re-creating the steps from data to results can be impossible without a detailed knowledge of the data production and analysis software, which sometimes depends on the particular computer on which the software runs. This intermediate analysis complicates the replication of scientific results, can create opportunities to manipulate analyses so as to achieve desired results, and can undermine the ability of others to validate findings.
Digital technologies can pose other temptations for researchers to violate the standards of scientific practice. For example, the manipulation of images using image-processing software has caused many journals to implement spot checks and other procedures to guard against falsification. The inappropriate application of statistical packages can lead to greater confidence in the results than is warranted. Data-mining techniques can generate false positives and spurious correlations. In many fields, the development of standards governing the application of technology in the derivation of research results remains incomplete even as continuing technological advances raise new issues. In a recent paper, two prominent biologists wrote, “Although scientists have always comforted themselves with the thought that science is self-correcting, the immediacy and rapidity with which knowledge disseminates today means that incorrect information can have a profound impact before any corrective process can take place” (Casadevall and Fang, 2012).
The widespread utilization of information technologies in research may also introduce new sources of unintentional error and irreproducibility of results. A survey of researchers who utilize species distribution modeling software found that only 8 percent had validated the software they had chosen against other methods, with higher percentages relying on recommendations from colleagues or the reputation of the developer (Joppa et al., 2013). The latter approaches pose risks of incorrect implementation and error for the research being pursued, particularly if software is not shared or subjected to critical review. Issues surrounding irreproducibility and information technologies are discussed further in Chapter 5.
Besides affecting the conduct of research, information and communication technologies have transformed the communication of scientific results and interactions among researchers. In theory, if not always in practice, all the data contributing to a research result can now be stored electronically and communicated to interested researchers. This capability has contributed to a growing movement for much more open forms of research in which researchers work collectively on problems, often through electronic media (Nielsen, 2012). However, this trend toward greater transparency has created tasks and responsibilities for researchers and the research enterprise that did not previously exist, such as creating, documenting, storing, and sharing scientific software and immense databases and providing guidance in the use of these new digital objects. For example, the software that scientists produce in the course of analyzing data is often developed through a collaborative online process. This digitization makes it easier than ever to perform very complex analyses that not only lead to new discoveries but also create new problems of opacity for the peer review process. And while technology is making many aspects of research more efficient, it may also create new tasks and responsibilities that are burdensome for researchers and that they may find difficult or impossible to fulfill.
The movement toward open science has encouraged the efforts of citizen scientists who are eager to monitor, contribute to, and in some cases criticize scientific advances (Stodden, 2010). Review of scientific results from outside a research discipline can provide another check on the accuracy of results, but it also can introduce questions about the validity of findings that are not adequately grounded in knowledge of the research. Moreover, it can alter the relationship between researchers and the public in ways that require new levels of effort and sophistication among researchers involved in public outreach.
Advances in information technology are transforming the research enterprise, discipline by discipline, by changing the sorts of questions that can be addressed and the methods used to address them. There may be more opportunities to fabricate, falsify, or plagiarize, but there are also more tools to uncover such behavior. Issues related to research reproducibility and related practices are covered in Chapter 5.
The Globalization of Research
Because knowledge passes freely across national borders, scientific research has always been an international endeavor. But this internationalization has intensified over the past two decades. Nations have realized that they cannot expect to benefit from the global research enterprise without national research systems that can absorb and build on that knowledge. As a result, they have incorporated science and technology into national plans and have established goals for increased R&D investments. They also have encouraged their own students and researchers to travel to other countries to study and work and have welcomed researchers from other countries. At the same time, private-sector companies have increased their R&D investments in other countries to take advantage of local talent, gain access to local markets, and in some cases lower their costs for labor and facilities. These and other trends, including cheaper transportation, better communications, and the spread of English as the worldwide language of science, are producing a new golden age of global science.
Once again, the trend is apparent in the author lists of scientific and engineering articles. Between 1988 and 2013, the percentage of science and engineering articles published worldwide with coauthors from more than one country increased from 8 percent to 19 percent (NSB, 2016). Also, some countries have dramatically increased their representation in the science and engineering literature. Between 1999 and 2013, the number of science and engineering articles published by Chinese authors rose by an average of 18.9 percent annually, so that by 2013 China, with 18 percent of the total, was the world’s second-largest national producer of science and engineering articles. Authors from China also increased their share of internationally coauthored articles from 5 percent to 13 percent between 2000 and 2010. Other countries that dramatically expanded their number of articles published included South Korea, India, Taiwan, Brazil, Turkey, Iran, Greece, Singapore, Portugal, Ireland, Thailand, Malaysia, Pakistan, and Tunisia, though some of these countries started from very low bases.
Another measure of the increasing internationalization of research is the number of foreign-born researchers studying and working in the United States. More than 193,000 foreign students were enrolled in U.S. graduate programs in science and engineering in 2013, and foreign-born U.S. science and engineering doctorate holders held 48 percent of postdoctoral positions in 2013 (NSB, 2016). Science and engineering doctorate holders employed in U.S. colleges and universities who were born outside the United States increased from 12 percent in 1973 to nearly 27 percent in 2013. The United States remains the destination for the largest number of foreign students at the graduate and undergraduate levels, though its share of foreign students worldwide declined from 25 percent in 2000 to 19 percent in 2013.
Internationalization offers many benefits to the research enterprise. It can speed the advance of knowledge and permit projects that could not be done by any one country working alone. It increases cooperation across borders and can contribute to a reduction in tensions between nations. It enhances the use of resources by reducing duplication of effort and by combining disparate skills and viewpoints. The experiences students and researchers gain by working in other countries are irreplaceable.
But globalization also can complicate efforts to ensure that researchers adhere to responsible research practices (Heitman and Petty, 2010). Education in the responsible conduct of research, while far from universal among U.S. science and engineering students, is nevertheless more extensive in the United States than in many other countries (Heitman et al., 2007). Codes of responsible conduct differ from country to country, despite efforts to forge greater international consensus on basic principles (ESF-ALLEA, 2011; IAC-IAP, 2012). In some countries with rapidly developing research systems, research misconduct and detrimental research practices appear to be more common than in countries with more established research systems (Altman and Broad, 2005). Students from different countries may have quite different expectations regarding such issues as conflicts of interest, the deference to be accorded instructors and mentors, the treatment of research subjects, the handling of data, and the standards for authorship. For
example, one issue often noticed with foreign students in the United States is the different standards they apply to the use of ideas and phrases from others, which can lead to problems with plagiarism (Heitman and Litewka, 2011).
As individual national research enterprises grow in size and become more competitive, institutions and sponsors can experience more problems with research misconduct. Differences in national policy frameworks may constitute barriers to cross-border collaboration, but efforts are being made to harmonize these frameworks or at least make them interoperable. Collaboration among researchers from different countries and cultures may expose differences in training, expectations, and values that affect behavior.
Relevance of Research Results to Policy and Political Debates
The rapid expansion of government support for scientific research in the decades after World War II was spurred by recognition of the importance of new knowledge in meeting human needs and solving problems. Over the past few decades, the link between scientific knowledge and issues in the broader society has become ever more apparent. Science is a critical factor in public discussions of and policy decisions concerning stem cells, food safety, climate change, nuclear proliferation, education, energy production, environmental influences on health, national competitiveness, and many other issues. Although not all of these topics can be covered here, this section will describe several of the key issues affecting science, policy, and the public and how they affect (and are affected by) research integrity.
To begin with, the federal government itself performs a significant amount of research through government laboratories, some of which is published. Federal agencies that perform research generally have policies and procedures in place to investigate allegations of research misconduct in their intramural programs (see NIH, 2012a, for an example of such policies and procedures, and see Chapter 7 for a more detailed discussion).
In addition, the Obama administration led an initiative on scientific integrity in the federal government starting in 2010 (Holdren, 2010). Executive departments and agencies were instructed by the Office of Science and Technology Policy (OSTP) to develop policies that address a range of issues, including promoting a culture of scientific integrity, ensuring the credibility of government research, fostering open communication, and preventing bias from affecting how science is used in decision making or in communications with the public. The exercise is largely complete, as agencies have developed and implemented policies in response to the OSTP guidance (Grifo, 2013; OSTP, 2013).
Research also comes into play in debates and decisions over numerous contentious policy issues. Science is not the only factor in these discussions. Many considerations outside of science influence policy choices, such as personal and
political beliefs, lessons from experience, trial-and-error learning, and reasoning by analogy (NRC, 2012b). To contribute to public policy decisions, researchers must be able to separate their expertise as scientists from their views as advocates for particular public policy positions. Furthermore, they often contribute to these discussions outside the peer-reviewed literature, whether in public forums, blogs, or opinion articles in newspapers. According to the document Responsible Conduct in the Global Research Enterprise: A Policy Report (IAC-IAP, 2012), “Researchers should resist speaking or writing with the authority of science or scholarship on complex, unresolved topics outside their areas of expertise. Researchers can risk their credibility by becoming advocates for public policy issues that can be resolved only with inputs from outside the research community.”
One example of an area where science, public debate, and policy making have been closely tied and contentious in recent years is climate science. This has raised challenges for researchers and the institutions through which scientists provide policy advice. According to a recent National Research Council report, “Climate change is occurring, is very likely caused by human activities, and poses significant risks for a broad range of human and natural systems. The environmental, economic, and humanitarian risks of climate change indicate a pressing need for substantial action to limit the magnitude of climate change and to prepare to adapt to its impacts” (NRC, 2011). The global climate is a highly complex system, and there is considerable uncertainty about the timing and magnitude of climate change, the effect of measures to reduce greenhouse gas emissions from human activities, regional impacts, and many other issues. Effectively limiting greenhouse gas emissions presents economic and technological challenges and affects countries and industries differently, making policy changes by individual countries difficult. The development of the United Nations Framework Convention on Climate Change and its evolution over time illustrate the barriers to collective action on a global level.3
In this environment of significant uncertainty on key scientific questions, difficult policy choices, the possibility of large impacts on powerful economic interests, and highly mobilized advocacy operations on all sides of the climate change issue, the climate science community has faced challenges in maintaining its credibility and public trust as it contributes its expertise. This experience might provide lessons on what researchers and scientific institutions need to do and what they need to avoid as highly charged issues arise with important scientific components. For example, the Intergovernmental Panel on Climate Change (IPCC), which was awarded the Nobel Peace Prize in 2007, is an international body that undertakes periodic scientific assessments of climate science and constitutes the primary mechanism for scientists to inform policy makers at the global level. In November 2009 the unauthorized leak of e-mail conversations among climate researchers, a number of whom were heavily involved with the IPCC process,
appeared to reveal a number of questionable actions, including efforts to limit or deny access to data, failure to preserve raw data, and efforts to influence the peer review practices of journals. While subsequent investigations cleared the researchers of misconduct, the “Climategate” scandal and subsequent discovery of errors in IPCC’s most recent assessment raised questions about the quality and impartiality of the organization’s work. A 2010 study by the InterAcademy Council recommended a number of reforms in IPCC governance and management, review processes, methods for communicating uncertainty, and transparency (IAC, 2010). One possible lesson from the recent climate change experience is that researchers, institutions, and fields whose work becomes relevant to controversial policy debates will need to consciously examine and upgrade their practices in areas such as data access and transparency (NAS-NAE-IOM, 2009a).
Recent high-profile international cases in which scientists have been criticized and even prosecuted based on their advisory activities include the statements of scientists in the aftermath of the Fukushima earthquake and tsunami in 2011, and the manslaughter convictions of seismologists whose statements were misconstrued by a government official, Bernardo De Bernardinis, to mean that there was no risk of danger immediately prior to an earthquake in L’Aquila, Italy, that killed more than 300 people (Cartlidge, 2012; Jordan, 2013; Normile, 2012). An appeals court overturned the convictions 2 years later for the six seismologists involved, but not for De Bernardinis (Cartlidge, 2014).
Other issues involving science and policy that raise questions about integrity seemingly appear in the media on a weekly basis. During 2012, controversy erupted over a University of Texas sociologist’s research findings that adult children of parents who had same-sex relationships fared worse than those raised by parents who had not had same-sex relationships; his research methodologies have been severely criticized, but an institutional inquiry cleared him of research misconduct (Peterson, 2012). A federal appeals court upheld a South Dakota statute requiring doctors to tell women seeking abortions that they face an increased risk of suicide; despite extremely weak research evidence to support the statute, the court decided not to strike it down as an undue burden on abortion rights or on First Amendment grounds (Planned Parenthood Minnesota, N.D., S.D. v. Rounds, 2012). A French paper found that rats consuming genetically modified corn developed more tumors and died earlier than a control group, although food safety agencies have stated that the sample sizes were too small to reach a conclusion (Butler, 2012). And a criminal investigation of a Texas state agency established to fund research on cancer prevention and treatment revealed that some awards were made without scientific review, which led to a wave of resignations among staff and oversight board members (Berger and Ackerman, 2012). These cases underscore the salient role of scientific research in policy discussions.
For researchers, exercising responsibility in relations with society encompasses an increasing array of issues. For example, health and social science research in some communities, such as Native American tribes, requires adherence to community rules for gaining approval. Research on people’s behavior on social networking websites raises questions about how human subject protections apply. Some emerging areas of research, such as crisis mapping and monitoring, raise human rights issues (AAAS, 2012). Finally, researchers in the life sciences are being asked to exercise responsibility in the area of preventing the misuse of research and technology (IAP, 2005).
Research findings are increasingly relevant to a broader range of policy questions, raising the magnitude of the possible negative consequences of research misconduct and detrimental research practices. Researchers in a variety of fields face more complicated choices with ethical dimensions. In this environment, maintaining rigorous peer review processes in scientific journals is a critical task. Decisions based on science suffer when non-peer-reviewed science, or science that was not well reviewed, is used.
Decisions about the authorship of research publications are an important aspect of the responsible conduct of research. Although many individuals other than those who conceive of and implement a research project typically contribute to the production of successful research, authors are considered to be the person or persons who made a significant and substantial contribution to the production and presentation of the new knowledge being published. A number of the conventions and practices that constitute scientific authorship have been influenced by the trends discussed previously in this chapter. Tracing how trends in research such as globalization and technology are affecting authorship provides a useful window into how research is changing more broadly.
Authorship practices have evolved to support the development and distribution of new knowledge, engaging the powerful human motivation to discover and receive credit for discovery. Researchers are often evaluated, rightly or wrongly, by the quantity and quality of their work, as measured by the number of their publications, the prestige of the journals in which their publications appear, and how widely cited their publications are. Authorship also serves to establish accountability for published work. For example, authors are responsible for the veracity and reliability of the reported results, for ensuring that the research was performed according to relevant laws and regulations, for interacting with journal editors and staff during publication, and for defending the work following publication (Smith and Williams-Jones, 2012).
Authorship practices vary between disciplines. Professional and journal standards and policies on authorship also vary. For example, in some disciplines the names of authors are listed alphabetically, while in other disciplines names are listed in descending order of contribution. In some disciplines, senior researchers are listed last and in others they are listed first.
At least three significant factors have changed authorship practices in recent
decades. First, the degree to which researchers make use of technology and the ways in which they use technology have changed dramatically. Researchers now frequently rely on computer software and hardware for many of the processes and analyses they undertake. They rely more on sophisticated software and computer models both in the analysis and in the presentation of results. The extent to which researchers understand how these tools affect data and results is a topic of concern in 21st-century research. Second, as a result of new information and communication technologies, especially the Internet, researchers engage in much more collaboration at a distance. This facilitates national and global collaboration and can lead to larger, more broadly scoped projects. Data gathering and analysis can be parsed out to different locations, with information potentially easily accessed and shared regardless of location. Researchers are able to electronically maintain frequent contact, have group meetings, and coauthor documents. Third, as a result of software and hardware developments, huge databases of information can be gathered and used, and researchers have access to and must deal with much more information than ever before. Consequently, researchers have to manage data in new ways and may be held to higher standards of knowing and understanding other research that has been done in their area.
These changes raise a variety of challenges to researchers and the research enterprise. For example, in part because of the increased scale of research, the number of authors listed on papers in some disciplines has grown considerably. Extreme examples include the 1993 Global Utilization of Streptokinase and Tissue Plasminogen Activator, or GUSTO, paper in the New England Journal of Medicine, which involved 976 authors (GUSTO Investigators, 1993), and a 1997 Nature article on genome sequencing that had 151 authors (Kunst et al., 1997, from Smith and Williams-Jones, 2012). The recent joint paper from the two teams collaborating on the mass estimate of the Higgs boson particle lists more than 5,000 authors (Castelvecchi, 2015). The original papers reporting the discovery of the Higgs boson had approximately 3,000 authors each (Hornyak, 2012). How can the primary author or authors be responsible for the work of hundreds of individual researchers who are geographically dispersed and come from a wide range of disciplines? When an error is found or an accusation of wrongdoing is made, the problem has to be traced back to the component of the research that is called into question. In the process of tracing back the possible wrongdoing, the primary author or authors, while accountable, may not understand the area or have had much control over the researchers involved. The primary author or authors may be accountable but not blameworthy. These challenges are complicated by disciplinary differences in authorship conventions.