Entwisle invited each of the steering committee members to share their thoughts on key themes and on ways of moving convergence measurement efforts forward. The steering committee formulated six discussion questions, and each member was invited to focus on the areas they considered most salient:
- What would be the value in constructing national-level measures of convergence?
- What ideas and strategies would be most worth pursuing to push thinking on this initiative forward?
- What framework could the National Center for Science and Engineering Statistics (NCSES) use for measuring convergence?
- What definitions and criteria would we consider appropriate and what are their limitations?
- What types of NCSES products would be most valuable and for what audiences?
- What should NCSES prioritize as part of its work in this area?
Hall began by reading the National Science Foundation (NSF) mission statement: to “promote the progress of science; to advance the national health, prosperity, and welfare; and to secure the national defense and for other purposes.”1 This expansive mission covers complex societal and scientific challenges, necessitating multilevel, multidimensional, multifactorial, and multidisciplinary approaches that are similar to the convergence approach. Hall described the evolving context under which research occurs, in which advances in technology make it possible to analyze problems of greater complexity, with more numerous and more varied variables, and to enhance the speed, scale, and convenience of interactions and collaborations. While advances in technology over the past century have enabled the development of tools and platforms to support asynchronous and synchronous communication, the COVID-19 pandemic created an imperative to establish new collaborative norms, including increased acceptance of virtual meetings for conducting business.

1 From the National Science Foundation Act of 1950 (P.L. 81-507).
Hall mentioned a number of elements that facilitate intentional and evidence-informed convergence research and highlighted considerations for evaluating convergence. She pointed to conceptual models that researchers can draw on to help discern these elements and emphasized the need to move beyond publication metrics. Beyond such models, it is important to establish structures that enable people to work together in a team or system and to design processes and platforms that help teams create products aligned with the team or system’s vision and goals. A typical large research initiative at the National Institutes of Health (NIH), for example, might involve a governing body, research teams, working groups, and data centers, all of which must work together to create successful outcomes. She also distinguished the need for high-quality convergence from the requisite resources to support the capacity to conduct, manage, and support convergence research: researchers to engage in the research, teams to manage the processes, and institutions with proper incentives and expectations.
Hall also considered what a portfolio of research might look like at the national level. How much convergence is desired? What mix of approaches beyond convergence might be considered? What dimensions create different variations or types of convergence? What are the implications of these variations? Nested, interrelated levels of integration (across departments and institutions, across projects and initiatives, and within projects) are needed to support readiness to undertake interdisciplinary and convergent projects, and this readiness can prove self-reinforcing. Supportive university structures such as interdisciplinary centers can increase the likelihood of driving more interdisciplinary and convergent projects when they are designed to address barriers (e.g., the distribution of indirect funds) and to encourage facilitators (e.g., interaction across disciplines around common problems). She pointed to convergence profiles that differ by number of members (2, 10, or 100, the latter possibly comprising teams of teams), number of disciplines, distance between disciplines, type of contributors (scholars, professionals, the public, and patients), and degree of integration (multidisciplinary, interdisciplinary, or transdisciplinary), all of which highlight the heterogeneity of convergence collaborations. Likewise, she identified the need to consider variations in types of convergence approaches, such as research that brings existing datasets together, which one could label post facto convergence. She offered other examples, including in vivo convergence (dynamic, interactive idea generation), fundamental convergence, translational convergence, and social and intellectual convergence. These variations point to the need for research and evaluation to capture the depth and breadth of convergence, and they highlight that there is no one-size-fits-all approach to supporting convergence.
Funding agencies can intervene to facilitate the team research needed to pursue convergent science projects that is not happening organically because of existing barriers to collaboration. One such strategy is issuing Funding Opportunity Announcements that incentivize institutions to participate. By adding criteria that require grantees to collaborate across departments and institutions, funding agencies can create an imperative for institutional processes to follow. Paradoxically, and with few exceptions, current tenure and promotion policies at U.S. institutions emphasize autonomy and independence and do not necessarily reward or recognize the increasing interdependency needed for the complex topic of convergence. Initial descriptive and diagnostic methods can be used to understand what will enable convergent research. Given discussions throughout the workshop suggesting that the assessment tools needed to immediately launch a brief national survey do not yet exist, Hall concluded that at this juncture, what the scientific enterprise may most need is to recognize whether it has a context that enables convergence research, so that such research can happen when it is needed.
Holbrook addressed the value of constructing a national measure of convergence. Even if such a measure is possible and could be done well, constructing it is not automatically a wise decision if there are unintended consequences: it might end up harming people, fields of research, members of the research workforce, and communities affected by the research. Holbrook urged NCSES to be very clear and upfront in stating its values: what it is trying to do, why, whom it is intended to help, and whom it is trying to avoid harming. He then turned to his second point, which emphasized making the initial effort: “to try something.” He supported an iterative, experimental approach that incorporates feedback and ultimately reaches the intended goals without hurting fields of research or people who are not directly involved in research but are part of communities affected by it. Though considerations about potential harm are valid, Holbrook favored action over fear of unintended consequences.
Khargonekar started to think about convergence soon after he joined the NSF as the assistant director for its Directorate of Engineering in
March 2013, a position he held until June 2016. He observed that the workshop led to a deeper and more nuanced discussion of convergence. Khargonekar reminded the audience about the 2014 National Research Council report on the deep integration of knowledge, tools, techniques, and ways of thinking from multiple disciplines as a novel method for producing new knowledge that is particularly suited to solving complex problems of societal interest.2 He agreed with Ben Jones (Northwestern University) that an immense demand exists for such knowledge but that the demand encounters a bottleneck on the supply side, because the number of possible combinations is exponentially large and a priori knowledge of what would be truly successful is not available. Consequently, extremely large investments of resources are required. Considering convergence as a multidimensional and multifaceted concept and phenomenon, Khargonekar found the ecological framework proposed by Dan Stokols (University of California, Irvine) very compelling. Given this ecological view, all stakeholders have important roles in the convergence model of knowledge creation, and their actions could help facilitate progress on measurements of convergence. The presenters’ ideas ranged from the level of granularity at which such measurements might be made to various types of tagging, labeling, and analysis across the full life cycle of research, from idea sharing to proposals to projects to products to impacts.
Khargonekar contended that a single measure of convergence at the national level is neither likely nor useful. He suggested looking for multiple indicators, maps, and other depictions of convergence. Quantitative and qualitative aspects of convergence must remain top of mind. There are already tools in various stages of development and maturity that hold great potential for mapping convergence. These tools should continue to be developed and made accessible to the research community and other stakeholders. As the entire research community becomes more familiar with these tools, it will be able to use them to improve all aspects of research processes and enhance the usefulness of the survey-based tools that NCSES uses.
If a problem-solving framework were accepted for convergence, then Khargonekar believed it would be logical for mission-driven agencies, such as the Environmental Protection Agency, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, NIH, and the U.S. Departments of Defense and Energy, among others, to take on the responsibility and allocate resources to address key problems of societal interest. He proposed that NSF take a leadership role in advancing the use of convergence research frameworks and measurement tools at mission agencies. As a pilot program, perhaps some key problem areas could be identified where these tools could be put to use to see if they shed new light on trajectories of progress.

2 National Research Council. (2014). Convergence: Facilitating Transdisciplinary Integration of Life Sciences, Physical Sciences, Engineering, and Beyond. Washington, DC: The National Academies Press.
Khargonekar considered it critical to have better tools for measuring convergence among doctoral graduates. A key challenge is assessing convergence within a doctoral dissertation, which is meant to evaluate the work of one individual. Would the extent of convergence be determined by the publications that emerge from the dissertation, or potentially by the diversity of the dissertation committee? How should the intellectual, emotional, and social components critical to convergence research be assessed in relation to the preparation and training of PhD graduates? Would it be useful for NCSES to work with the Graduate Education Division at NSF on a national dialogue on doctoral student preparation, training, and dissertation production to gain insight on indicators? Could NCSES partner with other directorates and divisions at NSF that might be interested in measuring convergence among their research programs and funded researchers? How might NSF and NCSES work with industry on measures and maps of convergence? Could convergence measurements and maps be useful for public-private partnerships? Khargonekar urged NCSES to make its tools more visible in NCSES reports and publications, both in print and online. He considered the portfolio-level maps to be the most compelling and interesting.
Klein focused her comments on two topics: the value of constructing national-level measures and an overarching question of definition. Regarding the first point, she observed that presenters repeatedly raised the lack of clarity and precision in how institutions and different researchers use the multiple connotations of convergence, as well as the concepts of multidisciplinarity, transdisciplinarity, and interdisciplinarity. She emphasized that each term’s definition is already grounded in authoritative literature, and consequently, efforts can now be made to create consistent language across institutions. Such efforts would bring greater rigor to the way integrative and collaborative research is approached. Klein also highlighted a few caveats raised during the workshop. Nora Cate Schaeffer (University of Wisconsin–Madison) questioned, for example, whether a single definition is necessary, and asked how it might overlap with relevant concepts such as transdisciplinary synthesis. Graham Kalton (Westat) cautioned that we also need to ask who responds to surveys, as different people fill out a survey at different points.
To address her second point about definition, Klein stated that five keywords came to mind: purpose, focus, definition, unit, and methods. She observed that presenters throughout the workshop had challenged participants to clearly articulate the purpose of their efforts: is it monitoring or
evaluation, progress or prediction, indicators or impacts? Is the goal to create a culture of convergence or to create a new field? Regarding focus: If a problem orientation is used, the audience and stakeholders need to be identified. As for definitions: Participants heard repeatedly that there are different types, discussed whether a more nuanced measure is also needed, and asked whether emerging disciplines could be factored in. For unit of analysis: The level of granularity reflects the relationship between simplicity and complexity. Levels might require different disciplines and criteria, or distinguish between project, goals, and products, or research versus policy, or even researchers versus commissions. Finally, what choices will be made about methods? Options might reflect a multidimensional approach, over-reliance on bibliometric data, the directionality of measures, or a portfolio approach.
Klein also addressed the steering committee’s final question, which asked what NCSES might prioritize as part of its work in convergence. She recognized that the committee had not been tasked with providing formal recommendations, but she highlighted possible priorities that emerged from presenters. She referenced Kalton’s pessimism about operating on a national scale at this time, Schaeffer’s suggestion that convergence could expand over time, Khargonekar’s caveat about not adopting one criterion, Owen-Smith’s caveat regarding survey research limitations given ambiguities inherent in surveys with human subjects, Holbrook’s concerns regarding the impact that standardization could have on convergence approaches, and Joseph DeSimone’s (Stanford University) direct suggestion that trends must be considered. She then addressed the second question about ideas and strategies most worth pursuing, citing authoritative reports, white papers, and exemplar pilot studies consistent with an experimental mode. In considering possible products, Klein also noted a diversity of possible products ranging from surveys to topic modeling, interviews, maps, and data visualizations.
Owen-Smith began his remarks by positing that convergence is not a noun: It is not something that can exist or not exist or exist to some degree. It is a verb, a process that plays out across multiple levels of analysis that he as a sociologist believes are interrelated and causally affect each other. Convergence occurs over different timescales: a very micro timescale involving one-on-one in-person interactions among different people who may or may not feel particular affinities or frustrations, a medium term that is about the life span of a project, and a longer term that relates to fields of science, institutions, and careers. Owen-Smith encouraged the group to consider how the process could be unpacked over time. To the extent that researchers imagine a single cross-section of time, it makes sense to refer to convergence as a noun.
For those interested in being able to intervene, timing and temporal order are of significant interest to the convergence process. Treating convergence
as a process highlights the need to consider who needs to be protected and empowered as efforts move forward. Acknowledgment of, identification of, and communication with stakeholders are essential. Owen-Smith also highlighted convergence’s inherent dilemmas, particularly the potential trade-off between researcher visibility and productivity, which suggests there might be consequences especially for scholars at earlier stages of their careers and those less well connected to the core of their field. Convergence also creates a tension between the collective and the individual: having a diverse team is advantageous for convergence, but being the person who makes the team diverse and interesting can be individually disadvantageous. He observed that most of the socioemotional and organizational work required of interdisciplinary collaborations tends to fall to younger, particularly female, scholars. Another dilemma relates to creative destruction, a framing that emphasizes the creation but minimizes the destruction. This tension is also reflected in the continuing need for strong disciplinary research and in how particular types of convergent research are supported.
In considering the convergence process, Owen-Smith favored a portfolio approach. Capturing convergence characteristics as they evolve will be important for understanding whether activities can accurately be considered convergent. He saw no reason, for example, why a convergent research process could not produce a disciplinary contribution. He added that to think about the characteristics of a particular entity, we need both point estimates and distributions. In the absence of compelling survey-based measures, he recommended an iterative approach involving multiple coordinated, concrete, community-engaged processes to capture particular aspects or measures in coordination with stakeholder groups. Owen-Smith contended that this approach might be particularly effective in arriving at a well-defined and well-articulated measure, particularly with input from an expert group to help avoid potential drawbacks.
Smyth concurred with Owen-Smith’s characterization of convergence as a process that unfolds over time. This conceptualization has important implications for how we think about measuring convergence at a point in time, in particular for the parts of the process we are not measuring and for where errors tend to be introduced.
Conversations about interdisciplinarity and convergence research centers and their funding levels signify that a level of progress has already occurred. Some process must have brought the researchers together, and initializing factors must have led to convergence happening on a minute scale. Smyth raised a few related questions. First, at what point do participating individuals come to understand something as convergent and become able to report it as such, rather than underreporting convergent research that may be happening because they do not recognize it? Second, do errors related to the process and measures of convergence vary across organizations? And are some universities more likely to overlook convergence than others, based on the metrics chosen and the point at which those metrics become important in the process?
As a survey methodologist, Smyth tends to break down components into error sources whose biases and variances affect the measure. She asked how we might choose a metric that accurately captures the process of convergence. Smyth agreed with Andrew Zukerberg’s (National Center for Education Statistics) points on the use of definitions and their accompanying difficulties. Placing a short definition in a survey while providing a longer, more involved definition behind a hyperlink is unlikely to be the best method for standardizing this type of measurement. She noted that not all NCSES surveys are web-based: paper questionnaires still exist. More thought needs to go into what the measurement instruments are and how strategies will differ across survey modes.
Smyth returned to her starting point regarding opportunity costs. There are many ways to conduct this research, and NCSES need not cover every aspect on its own: other agencies, organizations, and partnerships could join NCSES in a team science approach to robust measurement. Such an approach could maximize time-limited resources, as well as limited surveys and respondents, if researchers are thoughtful about measurement and all of their priorities.
Entwisle began her closing remarks by identifying three components in the definition of convergence: (1) deep integration of multiple fields; (2) drawing on diverse viewpoints and experiences; and (3) solving “wicked” problems. Two of these components are part of the NSF definition, and one is not. She then addressed each of them, in turn.
With respect to the first component, Entwisle distinguished between deep integration and multiple fields and discussed them separately, beginning with the latter. In her view, an interest in the integration of multiple fields derives from a theory of progress based on crossing boundaries, where boundaries can be disciplines, departments, universities, and other entities, such as nations. She particularly wanted to draw attention to nations because the workshop did not include extensive discussion of international research. She noted that different ideas come from people with different experiences, from different places, and with different points of view.
A challenge for measurement is that boundaries themselves may change. When convergence was introduced as a goal for research in 2010, the relevant fields were engineering, physical sciences, and biology. Since then, the list of relevant fields has expanded, the definition of fields has changed, and the boundaries themselves are changing. Entwisle said that typically, we do not want to change the measure if what we want to do is measure change. Posing the question of whether this was a problem, she responded, “maybe, or maybe that is the point.” She emphasized that it is important
to understand that the context of research is itself changing. Trying to standardize measurement without acknowledging that change is occurring can be counterproductive.
Entwisle then turned to some specific measures of boundary crossing. In her previous role as vice chancellor, she developed measures of convergence based on collaborations on funded research involving faculty from different departments and from different schools at the University of North Carolina at Chapel Hill. These metrics, based on numbers of and funds involved in collaborative projects, were relatively simple to construct and track. However, the metrics she developed were specific to the organization of her university and might look quite different at other academic institutions, such as Arizona State University. The measures Entwisle used did not match the information currently requested in the Higher Education Research and Development (HERD) survey either, although she saw potential for appropriate questions to be designed.
Entwisle contended that crossing boundaries is a necessary condition for convergence and is relatively straightforward to measure. Deep integration is much more difficult to define, and also more difficult to achieve. It may involve the social and emotional aspects of collaboration as well as the scientific ones. She said that deep integration is the difference between scientists each writing chapters for a book that will be stapled together and scientists deeply engaged in creating an entirely new product. Given these complexities, Entwisle expressed uncertainty about how well deep integration can be measured.
In Entwisle’s view, the second component of convergence is diversity. This is not currently part of NSF’s definition of convergence. She argued that truly convergent work requires more than linking disciplines and organizational units. Diverse viewpoints and experiences are also important. The Survey of Doctorate Recipients (SDR) is a potential source of this kind of information. The SDR collects personal information that can be linked with dissertation committees, dissertation abstracts, bibliometric databases, and the like. Similarly, the NCSES surveys could be linked with other administrative data collected by other federal government agencies or units.
She then addressed the third component of convergence: problem orientation. She agreed with others that convergence is needed to solve major societal problems, but she was not convinced that solving a large-scale social problem should be a requirement for work to be considered convergent. She wondered how one would evaluate whether solving a major social problem is the motivation for a particular research project, given that scientists who are good at their craft are also good at explaining their research in this way. She then reflected on time frame: How long does it take to show progress on solving a challenging social problem, and how is success determined?
Entwisle said that the COVID-19 pandemic is a great example of a social problem, though it is a unique case because much of the world is trying to solve it. Many social problems will not receive that level of attention and widespread efforts. As an example of convergence research that receives less attention, Entwisle mentioned a project focused on storm inundation on the eastern North Carolina coast, which brings together scientists from several fields, including engineering, ecology, geography, and various social sciences to solve a problem.
Entwisle argued that some convergent research is not immediately applicable to a problem. She added that if application is to be a required component of convergence research, then additional reflection about businesses and other nonacademic units and their collaborations as well as translational research is needed. These were not discussed very much in the workshop.
Entwisle concluded by saying that while we do not have all of the answers, this should not stand in the way of progress. NSF needs information about convergence research to manage its own research portfolio and also to advise on the national portfolio.
In her closing remarks, Emilda Rivers expressed NCSES’ interest in remaining connected with the community on the best approaches to follow moving forward. She pointed out that NCSES is a federal statistical agency with a mission to report on the science and engineering enterprise, frequently in a global context, and that it receives many requests for new areas and topics to measure. This workshop allowed participants to see the breadth of knowledge that NCSES covers from both the national and international science and engineering communities.
Rivers said that she found ideas for leveraging administrative records to assist NCSES efforts especially interesting because of concerns about the respondent burden posed by survey data collections. Another concern that resonated with Rivers is the need for measures to make sense to a respondent. She encouraged participants and attendees, as members of the research community, to remain engaged with and vigilant about the status of convergence and to continue thinking about aspects that should be studied and insightful avenues that would move progress forward.