The Future of Interdisciplinary Research and Training
The seeds of progress germinate, and the shape of the future unfolds in our conviviality, at the convergence of all our different paths. It is in this gradual cross-fertilization that the future of knowledge—and indeed of the world—resides.
— Federico Mayor
As the committee reviewed the many programs, interviewed institutional representatives, and examined funding mechanisms, a key component of interdisciplinary efforts emerged: leadership. The committee heard about the dedicated efforts of individuals, leaders with a vision to establish a program that spanned disciplines. It required vision, creativity, and perseverance. It required education of scientific colleagues and administrators about the potential that exists in interdisciplinary efforts. As discussed in chapter 2, a research question or field of study first needs to be identified. It might be necessary to start with a small effort until it becomes clear to colleagues, administrators, and funders that the collaborations are fruitful and likely to lead to answers that would not otherwise be found. As presented in chapter 3, the leaders must strive to overcome the obstacles that face them. University administrators need to be convinced that an interdisciplinary approach is profitable for them and their institution. An investment (financial and administrative) in interdisciplinary programs can breed additional successes—in research, in obtaining funding, in training the leaders of the future. As described in chapter 4, funding mechanisms exist to support training throughout a scientific career. In their current form, these must sometimes be used creatively, be used in combination, and have multiple sources. With these tools, the training programs can create the leaders of the future who will forge new paths to solve the difficult problems that can be tackled only through an interdisciplinary approach. This chapter looks to some of the opportunities presented by future technologies and asks how we will recognize the success of our interdisciplinary programs.
INNOVATIVE APPROACHES AND OPPORTUNITIES
Programs of the future will be less constrained by geography. A variety of developments in modern communication technology might improve interdisciplinary training by decreasing institutional and geographical barriers. These include advances in Internet communication, electronic journals, and real-time, low-cost telecommunication abilities.
The opportunities of electronic publishing could greatly increase the accessibility of information from diverse fields (although some have expressed concerns about the dangers of this broad dissemination without adequate peer review; see Relman, 1999 16). Many journals already provide full text on line, allowing access to articles in many disciplines. Publishing on line can go beyond simple text and figures. The capability exists to include complex data in the form of “Java applets,” which are computer programs or models that run on a Web browser.3 The inclusion of links to related papers can yield a network of cross-disciplinary information for interested readers.
Those advances can reduce information-based barriers to the furthering of interdisciplinary research. However, the value of the enormous and growing databases will be in proportion to the ease with which information can be accessed and categorized appropriately. Improved, consumer-friendly search engines are a must for the use of these information resources by the widest possible audience. Current search engines for Web-based searches and for literature-based searches can miss pertinent references and obscure relevant data in a cloud of extraneous citations.
Videoconferences and virtual meetings could become increasingly important for conducting interdisciplinary research. The falling costs of individual cameras for PC-based platforms can enhance communication by allowing real-time transmission of video and audio. The growing access to Internet II, the next generation of Internet technology, allows broadband transmission of conferences and lectures. Lectures by world experts in any field could potentially be provided on the Web and made available to people interested in expanding their horizons. Already, for instance, the National Institute of Mental Health has a Web site presenting some of the symposia that it sponsors.12
The Internet is important not only for distance learning and virtual meetings, but also for making possible the sharing of data and analytic equipment over long distances. A program at the University of California, San Diego (UCSD) plans to make a high-voltage electron microscope accessible to researchers throughout the country.5 Specimens sent to UCSD will be inserted into the microscope by local personnel but scanned by the remote investigator through an Internet link. Further processing, such as three-dimensional reconstruction by tomography, can be accomplished on line through a link to a supercomputer. Equipment too expensive for many investigators to own thereby becomes accessible. Shared laboratory access through the Internet for education
is already in use at the Center for Biological Timing, where remote students can log onto a Web page to watch hamsters in real time (G. Block, IOM Workshop, 1999). As the technology improves and becomes cheaper and faster, these approaches are likely to become more common. On-line interactions will facilitate collaborative, and hence probably interdisciplinary, interactions.
With the many mechanisms available to encourage interdisciplinary efforts, how do we know which are effective? Even though there have been almost 50 years of discussion concerning interdisciplinary needs, data to support the need for and effectiveness of the many mechanisms are scanty. Why is there a lack of data when there is so much interest? The collection and evaluation of interdisciplinary training outcomes are tremendously complex and difficult. The committee faced this obstacle in its review of interdisciplinary programs and determined that a process for evaluation of programs is needed. The issues that face anyone undertaking this task are highly complicated. Not all research needs to be, or even should be, interdisciplinary, but the committee expects that successful interdisciplinary training will increase the options available to trainees and lead the trainees to produce, on average, more interdisciplinary research. To know whether interdisciplinary training promotes interdisciplinary research, it is necessary to have a method of identification for interdisciplinary research and training programs. To measure the outcome of the training programs, it is necessary to have methods that will accurately reflect their success in promoting interdisciplinary research.
Identifying Interdisciplinary Programs
It is not possible to evaluate interdisciplinary training programs if they cannot be identified. Perhaps the first hurdle to evaluation of these programs is to agree upon a definition of the term interdisciplinary. Interdisciplinary can mean different things to different people. It can apply, for example, to a person trained in two or more disciplines working on a specific problem, to people each trained in one discipline and actively working together to solve a single problem, to collaborations among single-discipline-trained people working separately to solve a single problem with a coordinator overseeing the operation, and to any combination of the above. A program description might include all or some combination of the above; regardless of the specifics, a universal, meaningful definition of interdisciplinary (and of translational) among funding agencies (e.g., NIH) would be a start in developing evaluation methods. The committee has offered its working definitions in chapter 1. Once an accepted description is established, an appropriate labeling mechanism will be necessary. One possibility would be to have an interdisciplinary check box on the cover sheet of grant
applications with a space to list the participating disciplines. That approach would allow funding agencies to define which training programs are to be tracked as interdisciplinary and to define which projects are interdisciplinary for outcome analysis.
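As a rough illustration of how such a label could support tracking, the check box and discipline list could feed a simple flagged record per application. The sketch below is purely hypothetical: the field names and sample grant identifiers are invented, not an actual agency schema.

```python
# Hypothetical sketch: tagging grant applications as interdisciplinary so
# that a funding agency could later filter and track them.  All field
# names and sample data are illustrative, not an actual agency format.
from dataclasses import dataclass, field

@dataclass
class GrantApplication:
    grant_id: str
    program: str                                      # e.g., "training" or "research"
    interdisciplinary: bool = False                   # the proposed cover-sheet check box
    disciplines: list = field(default_factory=list)   # listed only if the box is checked

applications = [
    GrantApplication("T32-001", "training", True, ["neuroscience", "computer science"]),
    GrantApplication("R01-002", "research", False),
    GrantApplication("F32-003", "training", True, ["genetics", "statistics"]),
]

# An agency could then select which training programs to track as
# interdisciplinary for later outcome analysis.
tracked = [a.grant_id for a in applications if a.interdisciplinary and a.program == "training"]
print(tracked)  # ['T32-001', 'F32-003']
```

Even so minimal a record would let an agency separate interdisciplinary from single-discipline awards at the moment of application, rather than reconstructing the distinction years later.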
How does one define, measure, and track the success of interdisciplinary training programs? What are the appropriate outcome measures for the promotion of interdisciplinary research? How is success defined? Should all trainees work in interdisciplinary research or should all trainees be able to understand interdisciplinary questions? How does one know whether trainees are prepared to tackle interdisciplinary questions should the need arise? The committee believes that evaluation of training programs is needed, but qualitative assessments of the effectiveness and impact of training efforts are undoubtedly difficult to conduct. Over the last decade, numerous reports have lamented the lack of outcome data on federal training programs, such as the National Research Service Awards.8,14 For example, past efforts to assess the reasons for the underrepresentation of women and minorities in science have faltered in the face of insufficient data regarding training programs and training outcomes.8 When outcomes are more easily defined (for example, on the basis of producing successful grant recipients), analyses are more successful. For example, studies have shown that the training grant mechanism (T32) is less effective in inducing trainees to apply for NIH grants than is the fellowship mechanism (F32).8 The analysis by the National Institute of General Medical Sciences of the Medical Scientist Training Program revealed that graduating MD-PhDs were more likely to apply for and receive NIH grants than graduates with just an MD.11
Some funding agencies have attempted more extensive program evaluations. In 1998, the Pew Charitable Trust conducted a review to determine the impact of its McDonnell-Pew Program in Cognitive Science on establishing and promoting a new field.1 This review resulted in a volume that qualitatively assessed the growth of cognitive science. The Association of American Medical Colleges (AAMC) Group on Graduate Research, Education, and Training (GREAT Group) convened a Task Force on Benchmarks of Success in Graduate Programs in 1997 in recognition of the need to identify indicators of success of training programs. In June 1999, AAMC issued a Self-Assessment of Graduate Programs in the Biomedical Sciences, which describes some objectives of training programs and provides a survey instrument as a guideline.4 Although the GREAT Group's report does not specifically address interdisciplinary training, it does serve as an example of the type of approach that can be used to develop an assessment tool.
Most funding agencies and organizations recognize the importance of a formal assessment of individual training programs. Renewals of NIH training
grants require principal investigators to report the career achievements of previous trainees. The National Science Foundation (NSF) Integrative Graduate Education and Research Training (IGERT) programs require tracking and evaluation as well. The outcome measures tracked are generally the success of trainees in completing a degree, obtaining a position in research, publishing in peer-reviewed journals, and obtaining research grants. Those measures are important, but do not address the question, Did the training produce more interdisciplinary research? The answers to the question might lie in changes in the career paths of individuals or in changes within universities and funding agencies that promote further interdisciplinary research.
The general measures of success for those who conduct interdisciplinary research are the same as for those who conduct single disciplinary research—grants awarded, publications, tenure and rank, and laboratory size. The limitations of those indicators have been documented.2 A scientist in a government laboratory or in private industry might not need grant support, for example. Graduates in nonacademic settings might develop products or patents instead of publications. The number of publications or even citations may not reflect the impact of a research effort.9 Impact might be economic, health-related, or educational, and these are difficult to measure or attribute to a specific research program. In this regard, interdisciplinary research is no different from disciplinary research.
To address the effectiveness of interdisciplinary programs, additional measures should be included, such as whether graduates maintain an interdisciplinary approach in their work, as reflected by the nature of their collaborations, joint appointments in multiple departments, publication of interdisciplinary papers, or grants with interdisciplinary themes. To assess the impact of interdisciplinary training, there also needs to be a point of reference or control group for comparison. How can we tell whether interdisciplinary training is achieving the goal of producing more interdisciplinary research if we do not know how much interdisciplinary research is being created by traditionally (or single disciplinary) trained people? The appropriate data should be collected on both interdisciplinary and single disciplinary trainees.
In some way, measures need to be collated to allow the evaluation of programs. Some assessment tools have been used to evaluate research outcome and could be used to compare interdisciplinary and disciplinary programs. For example, bibliometric analyses constructed around interdisciplinary research could be developed to compare the relative output of research institutions or to compare the relative productivity of one funding mechanism over another.17,18 Such data would highlight institutions that are producing high-quality interdisciplinary research, and this could lead to a greater understanding of the factors that contribute to the high output. Bibliometric analyses could also be used to determine whether interdisciplinary training programs produced scientists who were more likely to be involved in interdisciplinary research. These analyses would, of course, require a means of identifying which research is interdisciplinary, again presenting the problem of definition and tracking. If universal and meaningful search terms were developed, the databases IMPAC II, MEDLINE, and CRISP might be used to conduct such analyses. In fact, NIH used these databases in the 1980s to conduct bibliometric analyses to determine the effectiveness of different research support mechanisms, such as whether center grants are more effective than R01s in supporting clinical research or whether some categories of investigators or institutions are more likely to conduct research relevant to an agency's mission.17
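A minimal sketch of such a comparison follows, under the assumption that each publication record already carries tags for the disciplines it spans (a tag that, as noted above, would itself have to be defined and tracked). All names and data here are invented for illustration; a real analysis would draw on databases such as MEDLINE or CRISP once agreed-on search terms existed.

```python
# Hypothetical bibliometric sketch: compare the share of interdisciplinary
# publications produced by graduates of two kinds of training program.
# Discipline tags and counts are invented for illustration only.

def interdisciplinary_share(publications):
    """Fraction of publications whose tags span more than one discipline."""
    if not publications:
        return 0.0
    crossing = sum(1 for disciplines in publications if len(set(disciplines)) > 1)
    return crossing / len(publications)

# Each inner list holds the disciplines represented on one paper.
interdisciplinary_trained = [
    ["neuroscience", "engineering"],
    ["pharmacology"],
    ["genetics", "statistics", "medicine"],
    ["immunology", "computer science"],
]
single_discipline_trained = [
    ["biochemistry"],
    ["biochemistry"],
    ["physiology", "biochemistry"],
    ["physiology"],
]

print(interdisciplinary_share(interdisciplinary_trained))   # 0.75
print(interdisciplinary_share(single_discipline_trained))   # 0.25
```

Comparing such shares between cohorts of interdisciplinary-trained and single-discipline-trained graduates is one way the control-group comparison described above could be made concrete, though the hard problem remains deciding which publications count as interdisciplinary in the first place.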
Past tracking of federally funded training efforts has gathered data on the numbers of people trained on a T32, for example, or the percentage of fellows who entered academic versus industry careers after completion of training.10,13,14 These assessments tend to result in recommendations about the need for more or fewer PhDs, or for increased or decreased efforts in specific fields, such as molecular biology or immunology. Periodic studies track demographic data on graduate degree production, employment by sector, unemployment rates, race, ethnicity, age, and gender. These studies provide useful trend data about the size and demography of the scientific population, but they tell us little about the influence of training on career outcomes and scientific contributions. For example, academic degrees alone tell little about the training that a person received or the type of research he or she will pursue. And they do not tell us about the research experience of the student—whether he or she worked in the laboratory of a single investigator on a single problem funded by one or two single disciplinary grants or in a group working on related problems funded by multiple grants and funding sources.
The Howard Hughes Medical Institute (HHMI) has developed an approach to tracking. It maintains an extensive database of nearly 2,000 previous HHMI fellows that can be searched by institution, program, research field, fellow, or mentor.7 A Web-based system records key data from the fellowship applications (for example, educational history) and collects additional information from annual reports of current fellows and career updates of former fellows. Information includes professional activities (for example, research, teaching, and clinical practice) and research involvement (for example, field, grants, faculty or industry appointments, and publications). Using information from fellowship applications, HHMI tracks applicants' prior participation in science education programs supported by HHMI and other funders (at the precollege, undergraduate, graduate, and postgraduate levels). HHMI also collaborates with AAMC to track the career outcomes of HHMI fellows, nonawardees, and graduates of U.S. medical schools, drawing from national databases.6
The evaluation of people will not be easy; and even when an appropriate method is devised, collecting this type of data will be plagued with concerns of
privacy. Is it appropriate for government or private agencies to expect people to report the details of their careers after graduating from educational training programs? Are people willing to report these details, and do they want to be tracked? Dealing with those concerns will be tricky and will require the evaluation and input of experts who can assess the ethics and confidentiality elements of the problem.
Changes in Universities and Funding Agencies
Although tracking efforts have focused primarily on the participants of training programs, successful interdisciplinary efforts might also be expected to show evidence of change at the university or institutional level. If opportunities for interdisciplinary research and training increase, particularly with adequate funding for administrative support, changes in academic institutions would be expected. Examples of such changes might be increases in the number and funding of academic research centers that are not aligned with particular departments, increases in collaborative research studies across departments, increases in faculty joint appointments, and increases in training programs that offer interdisciplinary opportunities.
Mechanisms also might be developed to assess the extent to which federal agencies and private foundations actively promote interdisciplinary research (and training). Measures could include counting the number of Requests for Applications (RFAs) or ascertaining the level of funds dedicated to interdisciplinary efforts. The determination of the interdisciplinarity of an RFA would require agreed-on definitions of the term and would be facilitated by a tracking label. Structural indicators of change in funding agencies might include mechanisms to broaden the scope of expertise of review panels to make interdisciplinary research more competitive within traditional competitions or new mechanisms, such as supplemental grants to support interdisciplinary efforts.
Finally, to understand fully the impact of interdisciplinary training efforts, a broader view of the research enterprise might be needed. Efforts by NSF—through its National Science Board—and AAMC have provided data on funding, student enrollment, characteristics of the science and engineering pipeline, and the size and sectors of employment. Among the data collected by the National Science Board's Science and Engineering Indicators (SEI) are measures of joint efforts across academe, industry, and government, including coauthorship and collaborative research initiatives.15 Measures like these could provide a perspective on national trends following broad initiatives.
A VISION OF INTERDISCIPLINARY TRAINING
The analogy of interdisciplinary research to an orchestra was introduced in chapter 2. Training can produce the orchestra leader who understands enough
about each instrument to coordinate individual musicians to create a beautiful composition. Training can also produce the versatile musician who is expert in one instrument but understands enough about his colleagues' instruments to join them in harmony. Each orchestra member can solo, but together they produce more than any one alone.
Interdisciplinary research is happening in our institutions—despite the obstacles. The question we face is how best to facilitate, direct, and evaluate its growth. The committee encourages interdisciplinary training and research, not from a philosophic belief in “interdisciplinarity,” but from the fact that many scientific problems are refractory to solution by the methods of a single discipline and require a broadening and a deepening in methodology through incorporation of concepts and methods from several disciplines simultaneously. The committee specifically warns against any attempt to create an interdisciplinary “jack of all trades” who will be master of none. The aim should be the thorough mastery of one discipline, perhaps two disciplines, plus sufficient knowledge of and skill in parallel disciplines to work effectively with experts in those disciplines. Basic scientists should be taught about the scope of clinical problems. Clinician-scientists should be trained in research methods. Training cannot be merely theoretical: it must be hands-on as well. Appreciating the additional power for problem solving that arises from applying concepts and methods from several disciplines is possible only through experience of experimental work that exemplifies this approach. Training is a life-long process and should not stop with establishment of a career.
Funding agencies can support this process by expanding existing mechanisms and crafting new ones. Support for interdisciplinary training often will need to be drawn from several institutes or across federal agencies, such as NIH and NSF, or between government and the private sector. The critical problems that need an interdisciplinary approach need to be identified through, for instance, workshops of experts to discuss next steps in grappling with major research problems. The breakdown of institutional barriers can be facilitated through funding initiatives that require commitment from university administrators or through improvements in peer review. Universities can also do much on their own to enhance interdisciplinary training and research. Their commitment to such programs can be demonstrated through reallocation of existing resources, encouragement of shared facilities, creation of faculty positions that span departments, revision of tenure and promotion policies, and so on.
The committee emphasizes the importance of collecting data on the outcomes of interdisciplinary programs, but recognizes the difficulties inherent in follow-up studies. Such studies reveal whether a training program begun 10 or more years earlier (and probably altered in the interim) had the desired result when its graduates entered a job market that could be very different from the one in existence when the evaluation is completed.
FINDINGS AND RECOMMENDATION
Establishing an evaluation process will require a means of identifying interdisciplinary research and training programs and evaluating their success. Devising an approach to track and evaluate interdisciplinary training and research programs will be challenging and should be the subject of analysis by people with appropriate expertise. The committee recommends the following:
Recommendation 6: NIH should develop and implement mechanisms to evaluate the outcomes of interdisciplinary training and research programs.
Identify interdisciplinary research and training as such in all federal grants to facilitate future analyses. The committee suggests a box on the cover sheet of grant applications indicating whether the applicant considers the work to be interdisciplinary. If so, the applicant should list on a continuation sheet the participating disciplines represented among the investigators and mentors and the interdisciplinary aspects of the research or training.
Establish a task force to develop a plan to track outcomes of interdisciplinary training and research programs. Outcomes should encompass, but not be limited to, career patterns and interdisciplinary efforts of trainees (for example, research focus, findings, and publications), changes in universities (for example, in administrative structure, in interdisciplinary research, and in interdisciplinary training opportunities), and changes in funding agencies (for example, funding profiles for interdisciplinary proposals).
REFERENCES
1. Bechtel W. 1998. Evaluation of the McDonnell-Pew Initiative in Cognitive Neuroscience. A Report to the McDonnell Foundation and Pew Charitable Trust. St. Louis: Washington University.
2. Chubin DE. 1987. Designing research program evaluations: A science studies approach. Sci Public Policy 14:82–90.
3. Glanz J. 1999. Java Applet lets readers bite into research. Science 285:34.
4. Group on Graduate Research, Education, and Training. 1999. Self-assessment of graduate programs in the biomedical sciences. Task Force on Benchmarks of Success in Graduate Programs. Association of American Medical Colleges [Online]. Available: http://www.aamc.org/about/gre/narr_gud.pdf [accessed November 1999].
5. Hadida-Hassan M, Young SJ, Peltier ST, Wong M, Lamont S, Ellisman MH. 1999. Web-based telemicroscopy. J Struct Biol 125:235–245.
6. Howard Hughes Medical Institute. 1999. 1999 Meeting of predoctoral and physician postdoctoral fellows. [Online]. Available: http://www.hhmi.org/grants/graduate/prepost99/intro.htm [accessed January 28, 2000].
7. Howard Hughes Medical Institute. 1999. Fellows and their research. Search for current fellows. [Online]. Available: http://www.hhmi.org/grants/graduate/fellows/fellowsrch.htm [accessed January 28, 2000].
8. Institute of Medicine. Committee on Addressing Career Paths for Clinical Research. 1994. Careers in clinical research: Obstacles and opportunities. William N. Kelley and Mark A. Randolph. Washington, DC: National Academy Press.
9. Narin F. 1976. Evaluative Bibliometrics. The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity. Cherry Hill, NJ: Computer Horizons, Inc.
10. National Academy of Sciences, National Academy of Engineering, Institute of Medicine. 1995. Reshaping the Graduate Education of Scientists and Engineers. Washington, DC: National Academy Press.
11. National Institute of General Medical Sciences. 1998. The Careers and Professional Activities of Graduates of the NIGMS Medical Scientist Training Program. Pub. No. 98-4363. Bethesda, Md: National Institutes of Health.
12. National Institute of Mental Health. 1999. NIMH—Conferences on Video. [Online]. Available: http://www.nimh.nih.gov/events/meetingsvideo.cfm [accessed January 28, 2000].
13. National Research Council. 1998. Trends in the Early Careers of Life Scientists. Washington, DC: National Academy Press.
14. National Research Council. 1994. Meeting the Nation's Needs for Biomedical and Behavioral Scientists. Washington, DC: National Academy Press.
15. National Science Board Subcommittee on Science and Engineering Indicators. 1998. Science and Engineering Indicators—1998. Arlington, Va: National Science Foundation.
16. Relman AS. 1999. The NIH “E-biomed” proposal—A potential threat to the evaluation and orderly dissemination of new clinical studies [editorial]. N Engl J Med 340:1828–1829.
17. Office of Technology Assessment, United States Congress. 1986. Research funding as an investment: Can we measure the returns? Washington, DC: Office of Technology Assessment, Congress of the United States.
18. Office of Technology Assessment, United States Congress. 1991. Federally funded research: Decisions for a decade. Washington, DC: Office of Technology Assessment, Congress of the United States.