Measuring Convergence in Science and Engineering: Proceedings of a Workshop (2021)

Suggested Citation:"6 Outcomes and Impacts of Convergence." National Academies of Sciences, Engineering, and Medicine. 2021. Measuring Convergence in Science and Engineering: Proceedings of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/26040.

6

Outcomes and Impacts of Convergence

Erin Leahey (University of Arizona) shared findings from her investigation of the outcomes and impacts of interdisciplinary research, which she refers to as IDR. She described the impacts at three levels: the person level, the team or publication level, and the organizational level. Her own work focuses primarily on the organizational level, emphasizing not only the outcomes and impacts of IDR but also the distinctive measurement challenges that arise at that level.

Based on interviews with fellows in the Mellon Foundation New Directions program, her earlier book chapter focused on IDR at the person level. Study participants were all scholars who had received doctoral degrees and tenure in a particular field and applied for the fellowship to pursue a different field. Leahey and McBee1 (2016) focused specifically on the challenges faced by participants. Common themes included production and evaluation hurdles. Production hurdles described by the participants included the extra time and greater commitment that IDR requires, which reduces one’s productivity. Leahey highlighted one participant’s response: “If one looks at my research output … there’s actually a big gap. It looks like I went into a coma or something.”

An evaluation hurdle shared by study participants was the need to placate disciplinary colleagues. Participants felt compelled to capture the interest of scholars from multiple disciplines and thus faced different types of criticism from each discipline. Participants perceived they were being

___________________

1 Leahey, E., and D. McBee. (2016). New directions in interdisciplinary training: Trials and tribulations. In Investigating Interdisciplinary Research: Theory and Practice across Disciplines. New Brunswick, NJ: Rutgers University Press.

judged by the standards of others. Leahey noted that these participants all held tenure in humanities disciplines, such as philosophy and English, in which sole-authored work is common. She wondered whether scholars in disciplines where team publication is more common would face the same challenges.

To address the team or publication level, Leahey described her 2017 article2 in Administrative Science Quarterly, coauthored with Christine Beckman and Taran Stanko. This publication investigated 30,000 papers written by more than 900 authors across a variety of fields, collected from scholars at Industry-University Cooperative Research Centers (IUCRC). The measure of interdisciplinary research was based on the Web of Science subject categories and an approach devised in 2007 by Alan Porter and his colleagues3 to measure the level of integration. To measure how interdisciplinary a focal paper is, Leahey and her colleagues examined the subject categories of the references it cited and assigned a score between 0 and 1, with higher scores indicating greater integration across categories. The results suggested that interdisciplinary research imposed a productivity penalty but had a positive effect on visibility (as measured by number of citations). In terms of effect size (based on the standardized coefficients), the productivity penalty was greater than the boost in visibility.
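
The flavor of such a 0-to-1 integration score can be sketched roughly. The example below is a simplification, not the published method: it scores a paper's reference list by the diversity of subject categories it cites, whereas Porter and colleagues' integration score additionally weights pairs of categories by their similarity.

```python
from collections import Counter

def integration_score(ref_categories):
    """Gini-Simpson diversity of the subject categories a paper cites.

    Returns 0 when every reference falls in one category and approaches 1
    as citations spread evenly across many categories. Porter et al.'s
    integration score also weights category pairs by their similarity
    (Rao-Stirling diversity); that weighting is omitted here.
    """
    counts = Counter(ref_categories)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A paper citing only physics scores 0.0; an evenly mixed list scores higher.
print(integration_score(["Physics", "Physics", "Physics"]))   # 0.0
print(integration_score(["Physics", "Biology", "Sociology"])) # about 0.667
```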

According to Leahey, interdisciplinary research and collaboration pose cognitive challenges that contribute to the productivity penalty. It becomes more difficult to generate ideas, and research moves at a slower pace. Survey data also document collaborative challenges, including initial difficulty communicating within interdisciplinary teams. With repeated collaboration, though, coauthors build a comfort level that results in more productive collaboration. Leahey and her colleagues did not find that interdisciplinary research posed additional challenges during the review process. Based on working papers from arXiv.org, interdisciplinary papers were more likely than others to eventually be published.

Visibility for IDR also depends on the fields represented. The life sciences, for example, which are considered highly interdisciplinary, show a larger positive impact on author visibility. Leahey said that the effects of IDR are also greater when the cognitive distance between the fields involved is larger. She distinguished cognitive distance from variety: when variety rather than distance drives a paper’s interdisciplinarity, the visibility benefits weaken, though the production penalty disappears. IDR is also risky in the sense that it increases the variability of visibility (as measured by the standard deviation of citations across all of a scholar’s work).

___________________

2 Leahey, E., C.M. Beckman, and T.L. Stanko. (2017). Prominent but less productive: The impact of interdisciplinarity on scientists’ research. Administrative Science Quarterly 62(1):105–139. doi:10.1177/0001839216665364.

3 Porter, A., A. Cohen, J. Roessner, and M. Perreault. (2007). Measuring researcher interdisciplinarity. Scientometrics 72:117–147. doi:10.1007/s11192-007-1700-5.

With funding from the National Science Foundation (NSF) Science of Science and Innovation Policy (SciSIP) program, Leahey partnered with Sondra Barringer to investigate IDR at the organizational level (University Commitment to Interdisciplinary Research: Scope, Causes and Consequences). This research focuses on structural commitments to interdisciplinary research, such as the funding and organization of units within the university, and on whether these structural commitments bolster IDR. Units might be reorganized, for example, to add a new department or program or to create a research center that connects multiple fields. Leahey and her colleagues collected data on roughly 160 Research I universities and analyzed each university’s departments, department names, and research centers. They used semi-supervised machine learning to identify which were interdisciplinary.

Their findings highlight four distinct indicators of a university’s structural commitment to IDR: (1) the ratio of research centers to departments, (2) the ratio of interdisciplinary research centers to research centers, (3) the ratio of interdisciplinary departments to all departments, and (4) the number of departments. Using traditional confirmatory factor analysis, Leahey and Barringer devised a score that represents the university’s commitment to IDR. They then used that score as a predictor to model research outcomes. Their conceptual model assumes a lag between the university’s structural commitment to IDR (measured in 2012–2013) and the effect on subsequent publications and grants (2015). They found that a structural commitment to IDR has a large effect on research activity, particularly for IDR. A moderate effect was found for interdisciplinary publications and interdisciplinary National Institutes of Health (NIH) grants, but they found no effect for interdisciplinary NSF grants.
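
The idea of collapsing the four indicators into one commitment score can be illustrated with a simple stand-in for the confirmatory factor analysis Leahey and Barringer used: z-standardize each indicator across universities and average the results. The indicator field names below are illustrative, not taken from the study.

```python
import statistics

# Illustrative names for the four structural indicators described above.
INDICATORS = ["centers_per_dept", "idr_centers_share",
              "idr_depts_share", "n_depts"]

def commitment_scores(universities):
    """Rough stand-in for a factor score: z-standardize each indicator
    across universities, then average the four z-scores per university."""
    z = {}
    for k in INDICATORS:
        vals = [u[k] for u in universities]
        mu, sd = statistics.mean(vals), statistics.stdev(vals)
        z[k] = [(v - mu) / sd for v in vals]
    return [sum(z[k][i] for k in INDICATORS) / len(INDICATORS)
            for i in range(len(universities))]
```

Because each indicator is standardized, the composite is centered near zero across the sample, and a university that is above average on all four indicators receives a higher score than one below average on all four.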

Leahey speculated about the source of the discrepancy between the effects on NIH and NSF awards, noting that NSF grants are typically awarded to a single principal investigator or to multi-university teams. This may explain why the infrastructure within a particular university plays a lesser role than it does for NIH, which is more likely to award institutional grants that involve multiple departments within the same university.

They found that structural commitment to IDR had no measurable effect on research quality (number of highly cited researchers, number of articles published in Nature and Science). She concluded that universities seeking to promote high-impact research may need to move beyond structural commitments to interdisciplinary research and express their commitment to IDR in other ways.

In another study, Leahey and colleagues investigated the precursors of a university’s commitment to IDR. They used a wide variety of sources including National Center for Science and Engineering Statistics (NCSES) datasets, the Survey of Earned Doctorates (SED), the Higher Education

Research and Development (HERD) survey, the Integrated Postsecondary Education Data System (IPEDS), and NSF/NIH award data. She described both top-down and bottom-up influences that determine structural commitment to IDR. To quantify top-down support for IDR, they analyzed textual data from speeches delivered by the university president. They then compared this measure of presidential rhetoric supporting IDR with SED data to test whether it predicts the number of interdisciplinary doctorates the university graduated. As the number of interdisciplinary doctoral graduates suggests, top-down administrative efforts and bottom-up efforts from faculty both promote a structural commitment to IDR.

In summary, Leahey concluded that individuals and teams benefit from interdisciplinary research through increased visibility as measured by citations, although they may experience decreased productivity. At the organizational level, commitments to IDR increased interdisciplinary research activity in terms of publications and NIH grants.

Ben Jones (Northwestern University) described the importance and difficulty of convergence and the policy design principles that follow from these characteristics. Jones noted that while some of his economics colleagues believe research is suffering a dearth of ideas, he believes instead that ideas, such as those about the human lifespan, climate change, nuclear fusion, and artificial intelligence (AI), are becoming more difficult to explore. The problem may lie not in the ideas needing research, but in the tools available to the researcher.

Jones attributed this difficulty in part to the burden of knowledge. Past research has amassed a large volume of insights, facts, and conceptual theories about the universe, but a single person has a finite knowledge capacity. Investing the time to become an expert in an increasingly complex area forces a researcher’s focus to narrow. Jones argued that as individual expertise narrows, teams become essential vehicles for driving scientific advancement on large-scale problems; an assembly of diverse and complementary expertise is essential to achieve breakthroughs. Although teams are not synonymous with convergence, they are related, constituting aggregations of people with a variety of expertise.

The importance of teams relative to sole authors is illustrated by teams’ advantage in producing the highest-impact work as measured by citations received, a reversal from the 1950s and 1960s, when sole authors held that advantage. Jones investigated this advantage by considering the relationship among teams, citation practices, and the probability of a groundbreaking publication. He began by using the set of references cited by a given publication as a measure of novelty. That set could be scored for how commonly its particular combination of references appeared together across published works, and he labeled uncommon combinations as novel. Publications

that balanced conventional and novel references had a higher probability of becoming highly cited and influential. Publications with only common combinations have a low probability of success, as do those with too many novel combinations. Additionally, teams have an increasing advantage in achieving this balance and therefore in developing high-impact ideas compared to sole-authored publications.
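
The pairwise scoring Jones described can be sketched in miniature. This simplification counts how often pairs of cited sources co-occur in reference lists across a corpus and flags rare pairs as novel; the published approach instead compares observed pair frequencies against randomized citation networks, and the threshold below is an illustrative choice.

```python
from collections import Counter
from itertools import combinations

def novelty_profile(focal_refs, corpus):
    """Count the focal paper's reference pairs that rarely co-occur.

    `corpus` is a list of reference lists from other papers. Pairs seen
    together at most once across the corpus are labeled novel here; the
    threshold is illustrative, not the published criterion.
    """
    pair_counts = Counter()
    for refs in corpus:
        pair_counts.update(combinations(sorted(set(refs)), 2))
    pairs = list(combinations(sorted(set(focal_refs)), 2))
    novel = sum(1 for p in pairs if pair_counts[p] <= 1)
    return novel, len(pairs)
```

A paper whose reference pairs all appear frequently elsewhere would score zero novel pairs; one that bridges sources never cited together would score many.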

This dynamic also holds true relative to the “age” of citations (the length of time between the release of a cited publication and its citation in another publication). Jones found that the most influential papers balance extensive citation of the most recent literature (within the past 5 years) with variable citation of older papers (10 to 15 years old or more). Achieving this balance requires appreciating long-standing knowledge while simultaneously being grounded in a constant influx of new information. Although an individual specialist can achieve this balance within one field, that individual will likely not be able to achieve such a command of the literature across many fields. Indeed, teams again have an advantage in achieving this kind of chronological balance.
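
The recency balance above can be summarized as a simple reference-age profile. The function below is an illustrative sketch; the 5-year and 10-year cutoffs follow the ranges mentioned in the discussion rather than any published specification.

```python
def age_profile(pub_year, ref_years):
    """Fractions of a paper's references that are recent (5 years old
    or less) and older (10 years old or more) at publication time."""
    ages = [pub_year - y for y in ref_years]
    recent = sum(a <= 5 for a in ages) / len(ages)
    older = sum(a >= 10 for a in ages) / len(ages)
    return recent, older
```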

Although Jones’ research suggests that teams are critical to producing knowledge, that observation raises the question of how to facilitate good combinations that allow groups to work together effectively. He suggested that research and development (R&D) outcomes are difficult to anticipate, as exemplified by the poor predictive record of venture capitalists. The challenge is compounded by the narrow expertise of individuals, reviewers, and funders of research: each individual is constrained by his or her own blind spots and areas of expertise. Jones added that the potential for research combinations is effectively infinite, although most combinations are likely to be useless.

In closing, Jones presented tentative design principles for high-impact convergence work. He highlighted a focus on demand as critical and relatively easy: researchers have no shortage of problems they would like to solve. Defining which problems to solve is a starting point, and agencies such as NSF can play a major role in guiding this selection. For basic research, a source of funding that does not require a monetary return on investment is ideal; this approach ensures that researchers can focus on knowledge production rather than product creation. Although others might select the problems, Jones suggested allowing teams to self-assemble and letting their work converge organically, because unforced collaboration is likely to support better communication and compatibility. Finally, Jones suggested that a funder should expect and plan for failure by assembling a diverse portfolio of problem-solving avenues and coordinating efforts rather than pretending to control outcomes. Failure is a regular occurrence in R&D, although there will be political and economic pressure to avoid investing in endeavors that may fail.

DISCUSSION

Following the individual presentations, Holbrook asked Leahey about the extreme polarity between the “hits and misses” experienced in attempting to publish interdisciplinary research findings. Holbrook noted, for example, that when he has submitted grant applications, reviewers frequently gravitate toward extremes, with half rating the proposal as excellent and the other half rating it as poor. He wondered aloud whether Leahey sees such bipolarity as a potential indicator of interdisciplinarity, convergence, or potentially transformative research. Leahey acknowledged that anecdotal evidence suggests Holbrook’s experience is not unique, but she did not have data that could address the question. She noted that her data suggest that the peer review process for interdisciplinary publications is no more onerous than for monodisciplinary work.

Bethany Laursen (Michigan State University) added a related comment about the use of standard deviation to capture some features of bibliometrics. She suggested that quantitative researchers could use some derived measures rather than direct measures to understand aspects of diversity and convergent versus divergent thinking. Owen-Smith responded that using a portfolio framework to carry out research would allow those derived measures to be calculated over a broader distribution of collected data.

Khargonekar asked Jones whether collaborative success is based more on random luck than expert judgment. Jones suggested that expert judgment matters a great deal but is difficult to measure. He also emphasized that while some researchers may celebrate team science as inherently valuable, he sees it simply as a necessary solution to the brutal problem of researchers’ increasingly narrow focus within their disciplines. Moreover, he noted that teams have their own problems, stemming from frictions between diverse perspectives and from the implicit, intuitive knowledge held by each team member that can never be fully accessed by others. The need for the team approach in academia, though, requires that these challenges be worked out organically.

Ismael Rafols (University of Leiden) suggested that evidence may not support the use of citations as a proxy for societal impact in applied fields and asked the entire panel whether other measures can better capture the quality of work. Jones responded that citation data are useful because they can be tracked across many documents and are readily available. He clarified that although this proxy may not measure quality, it can measure impact: work that is cited is likely to be more influential, even if its quality is lacking. However, Jones agreed with Rafols that bibliometrics alone is imperfect and that other measures are necessary; having multiple ways of considering impact will ultimately be more useful.
