
7

Measurement Efforts

The workshop included several presentations focused on existing efforts to measure convergence. Julia Lane (New York University) reminded workshop participants of the National Science Foundation’s (NSF’s) description of convergence as the use of many approaches to address complex problems focused on a societal need. She argued that many societal needs are empirical questions. To address the needs of the National Center for Science and Engineering Statistics (NCSES), she asked: (1) how does one obtain the data required to answer a given research question, and (2) how will those data be valued after analysis? Lane argued that several technologies could be used to construct datasets, particularly federally funded datasets. As a labor economist, she previously worked with confidential microdata, and in her view, one major challenge has been that the current data infrastructure and traditional measures (unemployment, gross domestic product) have become less salient since their initial development. Data infrastructure will need to adapt as needs change. In the past two decades, for example, privacy concerns and the willingness to provide personal data have changed drastically.

Convergence has led to the availability of new types of data, which can be leveraged to solve complex societal problems, but making data openly available does not ensure that their quality is high enough to be usable. This problem is exemplified by data concerning COVID-19. Massive amounts of data have been collected and are now available, yet there is little organizational structure to characterize the types of data available, and there is no method to identify who is using the data and whether particular results have been replicated or reproduced. She cited Google Scholar as an example of an infrastructure developed to help researchers find publications. Unfortunately, there is no such infrastructure to facilitate a search for datasets.

Lane said that it is important to consider different ways of collaborating with other researchers while using confidential datasets. New approaches such as FedRAMP-authorized cloud services allow data to be shared securely: a compliant environment hosts, secures, monitors, and controls access to the data, thereby reducing much of the burden associated with using a confidential dataset.

Data science is becoming increasingly important across the United States, and more universities are offering degrees in the discipline. Lane suggested that a shift in how data scientists are trained could be beneficial: Instead of learning to code “Hello, world” exercises on perfectly curated datasets, students could be trained to work with real-world government data. This approach would allow them to improve their skills while serving the public good. To facilitate this service-learning idea, it would be important to build an infrastructure that can support various new ideas and approaches.

A greater focus on solving large-scale social problems could begin with large government datasets and later be extended to other data sources. Government datasets account for nearly one-quarter of economic activity and can be used to answer questions on a variety of topics. Lane’s research group has been partnering with a subset of agencies, including the U.S. Department of Agriculture (USDA), the National Oceanic and Atmospheric Administration (NOAA), the National Institute of Standards and Technology (NIST), and the National Institutes of Health (NIH), to determine which datasets should be curated and highlighted. NOAA, for example, needed a means of identifying forests with high conservation value, so it developed metrics based on the biological diversity of ecosystems and how a forest serves community needs. Tyler Christenson (NOAA) has proposed that those measures could be used by the U.S. Forest Service to identify forests with high conservation values.

As examples of such metrics, Lane explained that one could simply count the number of times a dataset is used, much as publication citations are tracked. Such tracking could help identify high-impact datasets by, for example, documenting how datasets are applied to community needs. Through two open-source competitions, Lane’s team challenged the computer science community to develop machine learning and natural language processing algorithms that could identify the datasets used in publications (in other words, to create what Lane’s team calls a “dataset publication dyad”).
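
A minimal sketch of this dyad-counting idea follows. The alias list, publication texts, and exact-match rule are invented for illustration; the competition entries used machine learning and natural language processing rather than hard-coded string matching.

```python
import re
from collections import Counter

# Hypothetical aliases for known datasets (the real corpus is far larger,
# and the competition models learned mentions rather than hard-coding them).
DATASET_ALIASES = {
    "ARMS": ["Agricultural Resource Management Survey", "ARMS"],
    "SED": ["Survey of Earned Doctorates", "SED"],
}

def find_datasets(text):
    """Return the set of dataset IDs mentioned in a publication's text."""
    found = set()
    for dataset_id, aliases in DATASET_ALIASES.items():
        for alias in aliases:
            # Word-boundary match to avoid matching substrings of other words.
            if re.search(r"\b" + re.escape(alias) + r"\b", text):
                found.add(dataset_id)
                break
    return found

# Each (publication, dataset) pair is one "dataset publication dyad";
# counting dyads per dataset mimics tracking publication citations.
publications = {
    "pub1": "We analyze farm income using the Agricultural Resource Management Survey.",
    "pub2": "Doctorate records from the SED are linked to career outcomes.",
    "pub3": "ARMS microdata underpin our cost estimates.",
}

usage = Counter()
for pub_id, text in publications.items():
    for dataset_id in find_datasets(text):
        usage[dataset_id] += 1  # one dyad

print(usage)  # Counter({'ARMS': 2, 'SED': 1})
```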

Lane said a group of computer scientists built a training corpus and artificial intelligence (AI) model that correctly identified a dataset with 78 percent accuracy. Their next goal is to improve the model’s capabilities by sharing the critical core of dataset publication links with the public research community, which can then help identify mistakes and suggest improvements. Now that proof of concept has been completed, the next stage is community engagement. Lane described the work she has done with the Agricultural Resource Management Survey (ARMS), which began with a set of metrics that includes publication topics, publications that cite ARMS, the researchers involved, institutional representation, and the ecosystem where operations occurred. When applied, these metrics begin to answer questions about which datasets have high value for agricultural research.

Work has also expanded the corpus for CHORUS, a publication and compliance data service. Lane noted that it took six graduate students a few months to build this hand-curated corpus. A machine learning model was then applied to it, which increased the number of publications identified by 30 to 50 percent (depending on the measure) within 2 hours. A validation tool was included to verify the results, that is, the datasets identified in each publication.

Lane described her work with Mike Stebbins using Kaggle, a data modeling and analysis platform designed for competitions. Kaggle can be leveraged to increase engagement and allows community-based improvements through what is known as the Kaggle process. The process is designed to encourage thousands of computer scientists to improve machine learning models that generate the corpus of publications, with the hope of reaching 95 percent accuracy. Lane and Stebbins hope that other scientists and researchers will engage, improve, and automate the corpus continuously to build an effective infrastructure around database searches.

Lane concluded that federal statistics, federal measures, and the larger research community can overcome the challenges they currently face. She believes that this work provides a classic example of convergence: data from multiple sources can be combined to address complex social challenges.

Bethany Laursen (Laursen Evaluation & Design and Michigan State University) shared findings from a study conducted by her private evaluation consultancy and coauthors Nicole Motzer (National Socio-Environmental Synthesis Center or SESYNC) and Kelly Anderson (University of Maryland).

Laursen presented a systematic review of pathways for assessing interdisciplinarity. She began by defining each of the core terms in that effort. First, the review accepts the definition of interdisciplinarity adopted by each author under review: A publication’s use of the term “interdisciplinarity” qualified that publication for inclusion. Publications that relied on certain elements of transdisciplinarity but traced the overlap between these elements and interdisciplinarity were also included. The study considers convergence to be a type of interdisciplinarity or transdisciplinarity. Second, the study defines assessment as an empirical summary of important characteristics, with “importance” determined by authors themselves. Finally, the study uses “measure” to include qualitative and quantitative observations.

Laursen suggested that every assessment unfolds along an “assessment pathway,” a series of choices that guide assessment efforts. The pathway may include all of the following phases: examining a particular context and the goals within it; developing the intended output of the empirical study; developing criteria and (in a full evaluation) standards for those criteria; choosing units of observation (such as temporal or spatial units), measures or indicators of those units, and analytic methods to observe those indicators; developing an aggregation method for multiple indicators; and (in an evaluation of quality or merit) developing evaluative judgment methods that can generate a declaration of the object’s potential, current, or total quality or merit. Such a declaration could inform a decision to retain or discontinue the object of the assessment. Laursen demonstrated this structure’s application by showing two examples of assessment pathways used in an evaluation of the diversity of research teams funded by the National Institute on Aging.

The assessment pathway shows that NCSES researchers must make many choices when assessing interdisciplinarity or convergence and that failure to consider or act on various choices along the pathway could mean an incomplete evaluation. When Laursen and colleagues began their own assessment of interdisciplinarity, they confronted an unmanageable range of methods and pathways to conduct an evaluation of “interdisciplinary synthesis” at SESYNC. They thus shifted their work to a systematic review of all published pathways in the interdisciplinary assessment literature between 2000 and 2019. Using a systematic review design, search criteria, and specific databases, Laursen and her colleagues found 1,006 unique assessment pathways.

Laursen shared four key lessons. First, authors conducting an assessment should either monitor or evaluate and should explicitly declare their choice. Monitoring describes measures or observations without making a value judgment about the results. By contrast, evaluation arrives at a value judgment or diagnosis. Authors should choose whether to monitor or evaluate based on whether they want to arrive at an evaluative judgment that can warrant further action.

Second, researchers should use rigorous evaluative reasoning to avoid logical “dead ends.” Laursen explained that to move logically from a description of a measured criterion to a diagnostic value judgment of that criterion, a researcher needs an explicit standard or scale of value. Each of these components must be explicit, yet at least 83 percent of the pathways surveyed by Laursen and her team that aimed to form evaluative judgments did not include these minimum elements of rigorous evaluative reasoning. Most thus failed to reach an evaluative judgment and, in the process, did not offer readers a clear picture of how to adapt the pathway to their own assessment efforts.

Third, researchers should use mixed methodologies to observe multiple facets of interdisciplinarity. Of the studies identified by Laursen and her coauthors, 81 percent used exclusively qualitative or exclusively quantitative methods, and only 10 percent used mixed methods. Among the studies that employed mixed methods, many did not describe how multiple measures were aggregated to support their final judgments. Of all the pathways identified, moreover, 6 percent used only one measure to form evaluative judgments, which seems inadequate to measure the highly multifaceted phenomenon of interdisciplinarity. Laursen highlighted rubrics as an exemplary method for evaluating multidimensional, multifaceted, and complex phenomena such as interdisciplinarity and convergence. A rubric allows a researcher to specify standard performance levels for each criterion. As Laursen explained, the rubric must specify criteria that are accurately measured and define appropriate, detectable performance levels.
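
The sketch below illustrates, under invented assumptions, how a rubric can make such an evaluation explicit: hypothetical criteria are scored against ordered performance levels and aggregated by a “weakest link” rule. The criteria, levels, and aggregation rule are illustrative choices, not the specific rubrics Laursen discussed.

```python
# Hypothetical rubric: each criterion is observed at one of several ordered
# performance levels. The criteria and the "weakest link" aggregation rule
# below are illustrative assumptions only.
LEVELS = ["absent", "emerging", "proficient", "exemplary"]  # worst -> best

OBSERVED = {
    "integration of disciplinary concepts": "proficient",
    "shared problem framing": "emerging",
    "methodological mixing": "exemplary",
}  # observed level per criterion

def level_score(level):
    """Map a performance level to an ordinal score."""
    return LEVELS.index(level)

def overall_judgment(observed):
    # A "weakest link" rule: the evaluand is only as strong as its weakest
    # criterion. Other rubrics might average or weight criteria instead.
    return min(observed.values(), key=level_score)

print(overall_judgment(OBSERVED))  # 'emerging'
```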

Fourth, Laursen encouraged researchers to use the dataset of pathways developed by her team to identify criteria, standards, measures, methods, and research approaches that may be appropriate for assessing convergence. Laursen and colleagues have built an interactive dashboard that allows the user to filter and show dominant modes and approaches that have previously been used to assess interdisciplinarity.1

Laursen closed by arguing that the published literature on assessing interdisciplinarity generally lacks evaluation basics. Well-established basics will help to increase the rigor of assessments and provide a shared vocabulary among practitioners. Practitioners should choose, explain, and follow their assessment pathway clearly and carefully.

Dan McFarland (Stanford University) discussed his work deriving convergence measures using network analysis and natural language processing. He focused on social convergence (how stakeholders relate to one another in intellectual creation) and intellectual convergence (how conceptual relations and understandings arise from scientific discovery). McFarland mapped these dimensions of convergence onto two axes that converge in various forms of interdisciplinary engagement, from macrolevel “trading zones” across disciplines within universities to shared projects at interdisciplinary centers, the development of “boundary objects” shared by multiple fields (such as a patent), and conceptual linkages studied by the philosophy of science.

McFarland has focused on investigating the impact of interdisciplinary centers on social convergence by answering three key questions: Do they generate productivity and new collaborations? Do they hinder high-quality disciplinary work? Do they induce unique forms of social convergence? McFarland explored these questions by examining the impact of several Stanford University institutes on departmental connections and the composition of graduate dissertation committees. The first institute is Bio-X, an interdisciplinary bioengineering center whose founding strengthened rather than disrupted preexisting collaborations and created a kind of “thoroughfare” that facilitates activity without disadvantaging individual departments. In another case, the Woods Institute for the Environment generated a new intellectual community on campus that has led the university to consider creating an entirely new degree-granting school.

___________________

1 See https://shiny.sesync.org/apps/evaluation-sankey.

Intellectual convergence, the exploration of the relationships between ideas, has also been studied, with bibliometric analyses of citation practices serving as the typical proxy for this type of convergence. McFarland has found that citing work outside one’s discipline can be a high-risk practice (although there is also a corresponding potential for high reward). McFarland also considered the impact of annual review papers as a form of synthesizing convergence. He found that authors mentioned in annual reviews actually experienced a decrease in citations of their own work, as the review was increasingly cited in place of the papers it discussed. McFarland described this phenomenon as a type of creative destruction that suggests convergence may be symbolically violent: It requires eliminating linkages in the interest of simplification and creating new methods to bridge and integrate.

McFarland has explored potential links between intellectual and social convergence, including the directionality of this relationship (whether social connections promote intellectual convergence or intellectual interests foster social convergence) as well as factors that facilitate both forms of convergence. McFarland finds that social convergence frequently crosses ranks and ages (senior and junior faculty are more likely to collaborate with each other) and that cultures of collaboration differ widely between STEM fields (which tend to follow a more rapid pace of team-based scientific discovery) and non-STEM fields. Changes also emerge over time: Since the development of the Internet, intellectual connections have tended to precede and lead to social collaborations, while the opposite order tended to unfold before the Internet existed.

This directional shift led McFarland to ask whether the drivers of convergence are primarily intellectual or social, but he eventually concluded that these factors are often simply distinct in various research products: Collaborating researchers from different fields, for example, may adopt a division of intellectual labor that preserves their disciplinary differences, such as writing discrete sections of a publication. Truly mixed-method research may treat a particular discipline as the primary one rather than stage an interdisciplinary encounter of equals. McFarland acknowledged that the aspirational model of a true merger of disciplines does exist, typically in the form of a graduate student or postdoc who synthesizes expertise in multiple disciplines. Such researchers, though, typically do not consider the implications of convergence for their disciplines but instead focus more narrowly on specific publications or products.

McFarland concluded by discussing examples of intellectual convergence. Using the field of natural language processing as an example, McFarland showed that certain fields have “phenomena anchors,” or core problems that remain remarkably stable over time while the methodologies to study them are much more volatile and continually changing. Other fields, however—such as biology—may see continual change in the phenomena they study and not just the methods they use to do so. McFarland suggested that intellectual convergence across such disciplines would involve finding a way to operate across the levels of phenomena and method.

In his talk, Ismael Rafols (Leiden University) argued that gathering different types of knowledge to foster innovation and address social challenges requires consideration of two issues: epistemic integration (how knowledge is combined) and the direction in which knowledge combination moves. Using COVID-19 as an example, Rafols said that knowledge from various disciplines could be combined toward risk communication, vaccine development, health care management, and more. Describing the directions of research requires new kinds of multidimensional metrics that characterize portfolios of research and new ways of communicating those portfolios (potentially including tables, networks, and maps).

Previous efforts to develop universal or general indicators of interdisciplinarity for policy use have been unsuccessful. Rafols, for example, served as an advisor to efforts by the British Council, performed first by Elsevier and then by Digital Science, to develop national measures of interdisciplinarity that generated inconsistent, conflicting, and confounding results. Indicators of interdisciplinarity developed in previous studies had been successful and valuable for particular research areas but did not apply across the variety of contexts at the national level. Wang and Schneider have recently reported similar findings regarding the inconsistency of universal indicators.2 Rafols concluded that using a single indicator, or even a few atomistic indicators, to measure complex research activities capable of addressing societal problems is misguided.

Rafols proposed shifting from an atomistic to a portfolio approach modeled on ecology. An ecology portfolio would not map degrees of interdisciplinarity: It would instead investigate the entire landscape that makes convergence possible. As with environmental ecology, it would connect structure to function by relating the production of science to an organization’s mission or societal need. Descriptions of the portfolio landscape would include the variety of knowledge in the landscape rather than within individual papers, diversity in the social background of researchers (which Rafols likened to soil conditions), linkages between bodies of knowledge in the landscape, and the balance and directions of knowledge. Shifting to a portfolio approach avoids unidimensional indicators in favor of a multidimensional map in which different directions are possible. The new framework contains both the distribution of knowledge (“spaces”) and efforts to integrate that knowledge (“bridges”). Bridges may exist at the organizational, cognitive, or societal levels.

___________________

2 Wang, Q., and J.W. Schneider. (2020). Consistency and validity of interdisciplinarity measures. Quantitative Science Studies 1(1):239–263.

Rafols offered an example of landscape mapping at the organizational level.3 Using a global map of different scientific fields as a starting point, he mapped the diversity of scholarship in departments at the London Business School and the Institute for the Study of Science, Technology, and Innovation (ISSTI) in Edinburgh. Linkages across fields can be investigated using bibliometric data to determine whether expectations of crossover are met or exceeded. Rafols’ approach allows one to see not only the fact of interdisciplinarity but also the directions in which it is proceeding, which cannot be determined from unidimensional measures of interdisciplinarity.
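
One indicator family underlying such diversity maps, used in Rafols’ earlier work with Stirling, is the Rao-Stirling diversity index, Δ = Σ over i ≠ j of p_i p_j d_ij, where p_i is the proportion of a portfolio’s output in field i and d_ij is the cognitive distance between fields i and j. The sketch below computes it for an invented portfolio; in practice, both the proportions and the distance matrix are derived from bibliometric data.

```python
import numpy as np

# Hypothetical portfolio: proportions of a department's output in three fields.
p = np.array([0.5, 0.3, 0.2])  # p_i, sums to 1

# Hypothetical cognitive distances between fields (0 = identical,
# 1 = maximally distant); in practice derived from citation flows.
d = np.array([
    [0.0, 0.4, 0.9],
    [0.4, 0.0, 0.7],
    [0.9, 0.7, 0.0],
])

# Rao-Stirling diversity: sum over i != j of p_i * p_j * d_ij.
# The diagonal of d is zero, so summing over all pairs is equivalent.
delta = float(p @ d @ p)
print(round(delta, 3))  # 0.384
```

The index rises with variety (more fields), balance (more even proportions), and disparity (greater distances), which is why it captures more than a simple count of fields would.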

He moved on to show convergence as a combination of topic distribution and integrative efforts. This combination supports a variety of meaningful perspectives that can be adapted to the question asked. Rafols explained further with a mapping of rice research4 that represented six distinct topic areas and tracked efforts in those areas over time to understand how the research has evolved. This framework also allows one to visualize how research priorities may differ across countries that have distinct needs and concerns.

The kinds of analyses Rafols presented illustrate the characteristics of mapping a convergence landscape. This approach measures the types and directions of knowledge combinations, not just the quantity of combinations captured by interdisciplinarity indices. It also relies on maps that can visualize the distribution of research and the linkages between topic areas, rather than traditional statistical reports. As previously mentioned, Rafols suggests a combination of top-down and bottom-up approaches.

Finally, it is vital to consider what type of convergence is most relevant. Measurements could focus on changes in organizations such as university departments or in relation to specific societal problems.

___________________

3 Rafols, I., L. Leydesdorff, A. O’Hare, P. Nightingale, and A. Stirling. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business and management. Research Policy 41(7):1262–1282.

4 Ciarli, T., and I. Rafols. (2019). The relation between research priorities and societal demands: The case of rice. Research Policy 48(4):949–967.

Josh Schnell (Clarivate) followed with a presentation on the value of bibliometric approaches. He acknowledged that the multidimensional value systems of science are not always well reflected in science metrics. New measurement efforts ultimately will depend on what NCSES and its stakeholders want to know about convergence within a national context. Schnell described one attempt to map the trajectory of research approaches employed by Japanese researchers. The study5 found that approaches became less diverse and more conservative over time. This mapping analysis was then used to show policy makers, university leaders, and researchers how science was actually being practiced.

Convergence can be viewed either from a theoretical and conceptual perspective (the science of how convergence should be studied) or from a practical perspective (the question of how researchers are actually collaborating). Any study of convergence involves multiple dimensions, including disciplinary interactions and topics; the problem-orientedness of the research; the people and organizations involved; institutional context; the tools, equipment, and methods involved; the learning and educational dimension; and infrastructure. Schnell added that convergence efforts do not all need to be novel: A convergence lens can be applied to existing NCSES data, and linked datasets can be particularly important for measurement. Interdisciplinary data from the Survey of Earned Doctorates (SED), for example, could be linked to bibliometric data to offer a view of both process and product. Although challenging and time-consuming, linking datasets could enable researchers to ask increasingly exciting questions and reflect more on multidimensionality.
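
A sketch of what such a linkage might look like appears below. All record values and field names are invented, and the clean join on a shared researcher identifier (such as an ORCID) stands in for the messier disambiguation that real linkages require.

```python
import pandas as pd

# Hypothetical SED-style records: degree field reported at doctorate.
sed = pd.DataFrame({
    "orcid": ["0000-0001", "0000-0002"],
    "phd_field": ["chemistry", "sociology"],
})

# Hypothetical bibliometric records: fields of later publications.
pubs = pd.DataFrame({
    "orcid": ["0000-0001", "0000-0001", "0000-0002"],
    "pub_field": ["chemistry", "materials science", "public health"],
})

# Link process (training) to product (publications) on a shared identifier.
linked = sed.merge(pubs, on="orcid")

# A crude convergence signal: publishing outside one's doctoral field.
linked["outside_phd_field"] = linked["phd_field"] != linked["pub_field"]
print(linked)
```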

Schnell discussed several examples that demonstrate the value and limitations of bibliometrics. He shared a map of emerging highly cited research, or “research fronts,” that visualized disciplinary and topical interactions. This kind of mapping can be based on bibliometric data; however, bibliometric analysis works best when it is refined and linked to other data. Measurements of the role of researchers and organizations in convergence can, for example, be refined by considering data on the impacts of COVID-19 on women in the research community. The field similarly still lacks a proxy measure of problem orientation based on bibliometric data, and bibliometric analysis itself has limitations for national-level interdisciplinary research (IDR) or convergence (as Rafols’ presentation also noted). In addition to addressing these technical challenges, Schnell suggested that bibliometric analysis should be informed by and confirmed with stakeholders.

Potential for the further use of bibliometrics hinges on a more robust understanding of the theory and practice of convergence, which Schnell’s team at Clarivate is working to produce. Surveys and field research should be used to understand several elements of convergence, including the tools and equipment needed, the methods involved, and the learning and educational dimensions. Schnell sees potential in bibliometric analysis for considering citation networks, examining the process of convergence through text-mining techniques, and analyzing the full text of scientific articles to understand citation context. These mining techniques hold the potential to visualize the integration of knowledge as it pertains to convergence.

___________________

5 Igami, M., and A. Saka. (2016). Decreasing diversity in Japanese science, evidence from in-depth analyses of science maps. Scientometrics 106:383–403. doi:10.1007/s11192-015-1648-9.

Michèle Lamont (Harvard University) spoke about a socioemotional-cognitive platform for interdisciplinary collaboration, based on research conducted in 2010 and published in a 2016 paper with Veronica Boix Mansilla and Kyoko Sato. Their study is based on in-depth interviews with individuals from nine different research groups—supported by grants from the Canadian Institute for Advanced Research (CIFAR), the MacArthur Foundation, and the Santa Fe Institute—to understand how they define successful interdisciplinary collaboration. The interviews focused on how members of the group define markers and facilitators of success; how they experience the construction of group membership and collective work; and the research infrastructure created by funding organizations that enables success.

Interviewed participants were life and social scientists who had collaborated for 1 to 8 years. In addition to conducting interviews, Lamont’s team also observed meetings, administered questionnaires, and reviewed the research teams’ publications. To capture the respondents’ multidimensional experiences, Lamont’s team proposed a shared socioemotional-cognitive (SSEC) model that registers not only the cognitive dimensions of interdisciplinary collaborations—which are generally addressed in the extant literature—but also their emotional and interactional dimensions.

The SSEC model highlights the importance of intellectual, interactional, and institutional factors. Interactional factors include emotion, which plays a constitutive role in creating and sustaining successful interdisciplinary collaboration. Collective excitement was mentioned by 51 percent of respondents; 28 percent described the joy of collaboration; 53 percent mentioned the importance of group deliberation and learning how to learn from each other; and 32 percent noted the value of developing meaningful relationships. Within intellectual or cognitive factors, 67 percent mentioned the importance of cross-disciplinary exchange, and 40 percent mentioned sharing intellectual tools. Various factors facilitate these valued activities, such as socializing outside of meetings. Each of the nine groups experienced cognitive, emotional, and interactional factors in different proportions, with cognitive factors the most salient.

The intellectual dimension of interdisciplinary collaboration is marked by several features. First, group members participate in productive problem framing: The definition of their research problem evolves iteratively over time as group members engage in codefinition. An optimal level of ambiguity is essential to this process. Having an overly strict definition of a problem may impede interdisciplinary collaboration, while the appropriate amount of ambiguity will allow the group flexibility and provide a means to develop the interdisciplinary exchange. One participant suggested that ambiguity allows every creator to offer their expertise and analogized the process to building scaffolding as work progresses.

Lamont stated that a full understanding of interdisciplinary collaboration is impossible until attention is paid to emotional aspects. One participant highlighted this importance, comparing interdisciplinary research collaborations to marriage. Emotional success was enabled by feelings of group belonging, respect and admiration of peers, a climate of conviviality, and effective leadership by individuals who understood the cognitive, emotional, and social demands of successful interdisciplinary collaboration. Although many remarks emphasized the positives of interdisciplinary research, Lamont and colleagues found that participants did mention markers of unsuccessful collaboration, including the failure to frame a problem in ways that are shared by network participants and the failure to establish relatively shared expectations. They also pointed to several barriers to successful collaboration, such as the self-interestedness of individual members, disciplinary closed-mindedness, disciplinary language, and conflicting ideologies, epistemologies, and communication styles.

DISCUSSION

Nora Cate Schaeffer (University of Wisconsin–Madison) asked Laursen whether studies using multiple measures linked each measure to a conceptual definition. Laursen confirmed that links were nearly always made to a conceptual definition: Publication authors often named a construct or subconstruct and explicitly explained how they proposed a new measure of that construct.

Klein and Kara Hall (National Cancer Institute) both asked Lamont about the potential implications of NSF’s effort to measure socioemotional and cognitive aspects of convergence. Lamont noted that of the three organizations she studied, CIFAR does not have a short-term deliverable. She also noted that open-endedness appeared to allow researchers to be more adventurous in their collaborations. She thus suggested that this fact might lead to the creation of funding models that are less explicitly tied to proposals with well-defined, expected results. She also acknowledged that this type of research funding is expensive and likely to meet resistance. She suggested, nonetheless, that a more inductive and multidimensional focus on emotions and relationship creation in interdisciplinary work might increase alternative forms of collaboration and funding outside of NSF.

McFarland was asked how network analysis could help assess progress made on reducing barriers to convergence. He explained that concept networks are used to identify the locations of problems and bottlenecks. His research investigates where connections are made or lost and identifies pervasive problems or gaps, and based on that research, he concluded that network analysis cannot determine whether filling a gap is truly a sustainable solution.

Hall followed by asking McFarland how he might conceptualize measurements that could examine interdisciplinary and emergent research on a national scale. McFarland responded that network analysis frequently requires smaller networks. Some methodologies can be used for more extensive networks, but they are computationally taxing. Additionally, it is possible to “community detect”—discover cohesive groups or clusters—within large networks and then perform more computational procedures on the resulting data. McFarland is currently drafting a textbook on social network analysis that addresses analysis at varying scales.
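
The scale-management strategy McFarland describes can be sketched as follows: detect cohesive communities cheaply at full scale, then run costlier analyses within each small community. The toy graph below is invented, and greedy modularity maximization is only one of many community detection methods.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy collaboration network (invented): two tight clusters plus one bridge.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),  # cluster 1
    ("x", "y"), ("y", "z"), ("x", "z"),  # cluster 2
    ("c", "x"),                          # a single bridging tie
])

# Step 1: find cohesive groups at full scale.
communities = greedy_modularity_communities(G)

# Step 2: run a more expensive analysis within each (now small) community,
# e.g., exact betweenness centrality, which scales poorly on large graphs.
for community in communities:
    sub = G.subgraph(community)
    print(sorted(sub.nodes()), nx.betweenness_centrality(sub))
```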

Hall asked Laursen whether she could share any promising measures of interdisciplinarity that NCSES could use to explore convergence. Laursen noted that her team extracted 890 unique measures from the 142 publications investigated. She said that the most promising measures were combinations or aggregations of quantitative and qualitative measures because they unite a countable output (such as bibliometric data) that measures the products of convergence with a qualitative measure that helps unpack the process of convergence. Qualitative measures are generally not as clearly defined as quantitative measures, but dozens of incisive critical questions are available to investigate the process aspects of convergence. These questions often have several weaknesses: They almost always take a yes/no format that is difficult to translate into indicators for action; they often implicitly embed standards in a way that obscures what counts as a “good” answer; and publications often do not explain how to synthesize the answers to a list of questions into a coherent overall judgment. Laursen added that the method used to aggregate measures can be even more important than the measures themselves because it is closer to the final judgment of quality. Researchers who use a rubric to aggregate measures can synthesize multiple qualitative and quantitative measures in a clear way. Coauthor Nicole Motzer (SESYNC) added that the group’s upcoming work will include a cluster analysis investigating which published works move furthest along their assessment pathways; that analysis may provide guidance for convergence measures. Laursen noted that the group also plans to publish a subanalysis of scaling differences.

Khargonekar asked Rafols to describe the distinction between direction and combinations of knowledge. Rafols said that combinations bring different bodies of knowledge from various disciplines together, while direction captures how these types of knowledge lead toward specific research agendas and influence the kinds of innovations that are possible.

Schnell was asked how his company, Clarivate, approaches coverage and disambiguation of individuals in bibliometric analysis. He responded that both are areas of concern for Clarivate. Disambiguation has been aided by advancements in machine learning technologies and by ORCID, which creates a unique identifier for authors, but more work remains to be done. Schnell noted that complete coverage is impossible and thus reiterated the importance of aligning decisions about research on convergence with a specific purpose. Researchers must determine what kinds of literature and topic areas should be measured and which aspects of convergence they seek to understand. Schnell also addressed concerns about bias within bibliometric data, noting that work has been done to understand potential biases and reiterating that bibliometric analysis may leave gaps that require different dimensions of measurement to fill.

Both Rafols and Schnell were asked about the prospects for providing interactive maps and data visualizations that would be useful to various stakeholders. Schnell stated that interactive tools could help policy makers and the public better understand various policy questions. He cautioned that these tools are not always comprehensible to lay audiences, so it might be better to use other methods to convey information about convergence or a portfolio of research. Rafols agreed and pointed to other limitations of interactive mapping. He requested input from those with expertise in qualitative research to assist in developing qualitative processes that enhance the productive use of interactive quantitative tools.

The two presenters were asked about the use of bibliometric analyses of convergence in forecasting emerging trends or fields. Schnell warned that the term “prediction” can be problematic, but agreed that bibliometrics can provide some indication of emerging trends and fields. On the other hand, he noted that the datasets of bibliometric analyses are produced on a delayed time frame (there is a lag between completion of a work and its eventual publication). Rafols emphasized the need for caution in using the term “prediction.” He suggested that because convergence always moves in the direction of some research agenda, researchers can use interactive tools to consider “what futures we want to build” rather than to predict what future will inevitably come to pass.

The moderator asked Rafols to elaborate on the question he raised on the first day of the workshop about the use of networks and scientific mapping to measure convergence. Rafols explained that he is interested in new kinds of indicators that consider cognitive distances between the categories mapped in studies of convergence. Distance is important because when subject areas are more distant from each other (sociology and biochemistry, for instance), it becomes increasingly difficult for the involved researchers to understand one another.

Khargonekar asked the two presenters whether it would be useful to distinguish between predefined intentional trajectories or vectors of convergence and serendipitous or emergent trajectories or vectors. Schnell answered that more research is needed to understand the degree of directedness of convergence in practice, both in NSF grants and in the scientific literature. He also emphasized the importance of ensuring that these practices are captured in bibliometric analyses. Rafols suggested that intentional and serendipitous trajectories do not always exist in dichotomous form. He presented the example of the Zika virus vaccine, which was serendipitously developed based on research from related areas such as dengue fever. Mapping, however, would allow a user to anticipate the areas from which such serendipitous connections can most easily be—or are more likely to be—made.

The final questioner noted that a problem orientation is frequently the motivation for convergence research and asked whether it is also a necessary condition. From the perspective of Clarivate, Schnell stated that convergence does require a specific or compelling problem to be its driver. Although nonconvergent research can be problem-oriented, convergence is distinguished by combining deep integration across disciplines with a necessary orientation toward solving a specific or compelling problem. Rafols addressed the question from the other direction: He noted that convergence is a highly desirable if not necessary feature of robust, problem-oriented research. Excessively narrow approaches to problem-oriented research, he suggested, are often the flaw responsible for poor-quality policy advice, since they may not consider all relevant angles.
