Experiments in International Benchmarking of US Research Fields

3 RESULTS OF THE BENCHMARKING EXPERIMENTS

COSEPUP charged each oversight committee and its panel with the general task of benchmarking its own field. The activities of the groups varied somewhat by field, but the overall approaches chosen by the groups were similar. These approaches are summarized in this section.

3.1 How Panels Were Selected

As expected, the members of each oversight group sought out panel members who had broad understanding of their field and extensive connections with the international research community, including members of the academic, industrial, and government sectors.

The attempt to use discrete categories was complicated by overlap. For example, any person is likely to be both a researcher and a user of research. Similarly, some of the "US" academic researchers were born outside the United States or conducted collaborative research with non-US researchers. Still, about half the panel members selected were US academic researchers in the field. The remaining half were US researchers in related fields, "users" of research (as defined in section 2.3), and international (non-US) researchers.

Of the 12 members of the mathematics panel, three were non-US researchers, including a current and a former president of the respective national academies of sciences in their countries. Two were US researchers employed by industry, and one was a US Nobel Prize-winning chemist who uses mathematics in his research. The remaining six members were US academic researchers in mathematics.
Similarly, the 13 members of the materials science and engineering panel included three non-US researchers, two US researcher-administrators in industry—all in materials science and engineering—and one US researcher in a related field. The remainder were US academic researchers in materials science and engineering. The 14 members of the immunology panel included three non-US researchers, two US researcher-administrators in industry, and one US policy analyst, in addition to US academic researchers.

The oversight groups, charged with nominating panel members and overseeing the benchmarking experiments, were also diverse in membership, although they did not contain foreign members. Each comprised about six individuals. Two members of each oversight group were members of COSEPUP, and the remainder were representatives of related NRC commissions and boards. For example, the oversight group of the materials science and engineering panel included two COSEPUP members and members of the Commission on Physical Sciences, Mathematics, and Applications (CPSMA), the Commission on Engineering and Technical Systems (CETS), CETS's National Materials Advisory Board (NMAB), and CPSMA's Board on Physics and Astronomy (BPA); among them were four industry researchers and three academic researchers, all from the United States.

3.2 How Panels Assessed Their Fields

Each panel used a variety of methods to assess its field; the methods depended on the disciplines within the field. The methods used included:

- The "virtual congress"
- Citation analysis
- Journal-publication analysis
- Quantitative data analysis (for example, numbers of graduate students, degrees, and employment status)
- Prize analysis
- International-congress speakers

Each method is described in more detail below.
The assessments were to be current—that is, they addressed the status of US research in the field today, not in the past (most analysis relied on information collected within the past 5-10 years). Some information, such as that provided by the virtual congress, was fully current.

A challenging aspect of the benchmarking exercise was the great size of the US research enterprise. In the three benchmarked fields, US researchers were found to perform a dominant portion of the world's research, as measured by numbers of publications. Because of this size dominance, the panels often drew comparisons of leadership status in relation to regions or to the rest of the world combined rather than to individual countries.

Another challenge for the panels was to determine which subfields to analyze. Panel members used their own judgment and existing documents. For example, the mathematics panel used the meeting sections of the International Congress of Mathematicians and the International Congress on Industrial and Applied Mathematics as starting points. The materials science and engineering panel started with subfields identified in a recent report by the National Science and Technology Council that discussed materials science and engineering across the federal government. The immunology panel faced a greater difficulty: the field is an amalgam of subfields in larger disciplines, and no subfield classification system existed, so the panel chose to develop its own.

A method of defining subfields that reflected the original purpose of the COSEPUP leadership recommendations was to define subfields as areas within which researchers can move and still work within their realm of expertise. That definition connects directly with the goal of leadership because it enables a country that is strong in a given subfield to respond to and enter a "hot" new area; US researchers were able to do this, for example, in many areas of materials science and engineering.

3.2.1 The Virtual Congress

A technique used by all the panels was to assess leadership by creating an imaginary international "virtual congress". Each panel asked leading experts in the field to identify the "best of the best" researchers in particular subfields, anywhere in the world. That was possible because most subfields were relatively small and their outstanding leaders were well known to active researchers regardless of location.
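As a rough illustration, the nomination lists gathered for such a virtual congress reduce to a simple count of nominees by the country where each currently works. The sketch below is hypothetical: the sub-subfield names, nominees, and countries are invented placeholders, not data gathered by the panels.

```python
from collections import Counter

# Hypothetical poll results: for each sub-subfield, a list of nominated
# speakers, each tagged with the country where he or she currently
# conducts research. All names and countries are placeholders.
nominations = {
    "sub-subfield A": [("R1", "US"), ("R2", "US"), ("R3", "UK"), ("R4", "US")],
    "sub-subfield B": [("R5", "DE"), ("R6", "US"), ("R7", "JP"), ("R8", "DE")],
}

def us_share_by_subfield(nominations):
    """Fraction of 'virtual congress' nominees currently based in the US."""
    shares = {}
    for subfield, nominees in nominations.items():
        counts = Counter(country for _, country in nominees)
        shares[subfield] = counts["US"] / len(nominees)
    return shares

print(us_share_by_subfield(nominations))
# {'sub-subfield A': 0.75, 'sub-subfield B': 0.25}
```

A high US share across many sub-subfields is the kind of signal the panels read as evidence of leadership, subject to the caveats about pollee balance noted above.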
For example, the immunology panel divided immunology into four major subfields: cellular immunology, molecular immunology, immunogenetics, and clinical aspects of immunology. It divided each of those subfields into four to 10 sub-subfields. The panel then identified five to 15 current respected leaders in each sub-subfield and polled each leader in person, by telephone, or by mail, taking into account the numbers of US and non-US researchers polled. The pollees were asked to imagine themselves as organizers of a session on their particular sub-subfield and to furnish a list of 5-20 desired current speakers. (See table 2.1 in the immunology benchmarking report in attachment III.)

The materials science and engineering panel varied the method to accommodate its larger field. For each of nine subfields, panel members asked colleagues to identify five or six current hot topics and eight to 10 of the best people in the world. The information was used to construct tables that characterized the relative position of the United States in each of the subfields now and projected into the future (see appendix B of the materials benchmarking report in attachment II of the present report). The first half of each table ranked the current US position relative to the world materials community for each subfield. For scoring purposes, 1 represented the "forefront", 3 represented "among world leaders", and 5 represented "behind world leaders". The second half of each table was an assessment of the likely future position of the United States relative to the world materials community. Here, 1 represented "gaining or extending", 3 represented "maintaining", and 5 represented "losing".

3.2.2 Citation Analysis

Citation analysis is a technique often proposed to evaluate the international standing of a country's research in a field. Each panel used an existing British analysis to evaluate US research quality.8 The analysis included both the numbers of citations and the "relative citation impact", which compares a country's citation rate (the number of citations per year) for a particular field with the worldwide citation rate for that field. A relative citation impact greater than 1 shows that the country's rate for the field is higher than the world's and is viewed by some as a reliable indicator of the quality of the average paper. This measure takes into account the size of the US enterprise relative to that of other countries.

The immunology panel also commissioned a "high-impact" immunology database from the Institute for Scientific Information (ISI). (See http://www.isnet.com/products/rsg/impact.html for more information on high-impact papers.) The ISI database was scanned for the years 1990-1997. For each year, the 200 most-cited papers in journals relevant to immunology were selected. The 174 authors who had more than five papers on the list were ranked according to the average number of citations per paper (70.2-38.5 citations per paper). (See sections 2.1.2 and 2.2.2 of attachment III.)
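The relative-citation-impact measure can be written down as a one-line computation. A minimal sketch, assuming the rate is normalized as citations per paper so that the size of a country's enterprise cancels out; the counts below are invented for illustration and are not figures from the British analysis.

```python
def relative_citation_impact(country_citations, country_papers,
                             world_citations, world_papers):
    """Country's citations-per-paper in a field divided by the worldwide
    citations-per-paper in that field. A value greater than 1 means the
    average paper from the country is cited more often than the world
    average for the field."""
    return (country_citations / country_papers) / (world_citations / world_papers)

# Invented counts for illustration: 400,000 US citations on 40,000 papers
# in a field, against 900,000 world citations on 100,000 papers.
rci = relative_citation_impact(400_000, 40_000, 900_000, 100_000)
print(round(rci, 2))  # 1.11 -> the US rate exceeds the world rate
```

Because the per-paper normalization divides out publication volume, the measure separates quality of the average paper from the sheer size of the US enterprise, which is the property the panels valued in it.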
8 See, for example, The Quality of the UK Science Base, United Kingdom, Office of Science and Technology, Department of Trade and Industry, March 1997.

That technique was not used by the other panels because citations are a less critical component of leadership status in their fields, particularly in materials science and engineering.

3.2.3 Journal Publication Analysis

The immunology panel developed a new method called "journal publication analysis". The panel identified four leading general journals (Nature, Science, Cell, and Blood) and one of the top journals focused specifically on immunology (Immunity). Panel members
scanned the tables of contents of each of the journals. In the general journals, they identified immunology papers and the laboratory nationality of the principal investigator; in all the journals, they identified subfields. That allowed a quantitative comparison between publications by US-based investigators and publications by investigators based elsewhere. (See tables 2.3-2.8 and sections 2.1.3 and 2.2.3 of attachment III.)

3.2.4 Quantitative Data Analysis

All the panels encountered difficulty in locating suitable unbiased information by which to compare the major features of the scientific enterprise (such as education, funding, and government agencies) in different countries. For example, undergraduate and PhD degrees in the United States are not directly comparable with all similarly labeled degrees in other industrialized countries. In some fields, notably mathematics, quantitative information that could be used for international comparisons was available from the National Science Foundation (NSF). In other countries, however, comparable information is rare and limited in focus. Panelists found that information on some issues (such as trends in the numbers of graduate students and funding for particular fields) was available only for the United States. (See section 5.5 and appendix B of the mathematics benchmarking report in attachment I of the present report.)

3.2.5 Prize Analysis

Each of the panels analyzed the key prizes given in its field. In mathematics, the key international prizes are the Fields Medal and the Wolf Prize. In materials science and engineering and in immunology, there are a variety of international prizes. (See section 3.1.1 of attachment II and table 3.1 of attachment III.) In each case, the numbers of non-US and US recipients of these prizes were analyzed relative to the location where they now conduct their research.
For example, a researcher who originally conducted research in Australia but is now at a US university would be counted as a US researcher; the analysis reflects a researcher's current, not past, country of residence.

3.2.6 International Congress Speakers

Another indicator that can be quantified is representation among the plenary speakers at international congresses (not virtual congresses). Although conference organizers strive for geographic balance by inviting speakers from different countries, it is informative to ask whether US representation among congress speakers is the same as, smaller than, or larger than US representation in the publications generated in a field. (See section 3.1.1 of attachment I.)
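That question amounts to comparing the US share of plenary speakers with the US share of a field's publications. A minimal sketch with invented counts; the 5-percentage-point tolerance is an arbitrary choice of this illustration, not a threshold used by the panels.

```python
def us_share(us_count, total_count):
    """Fraction of a pool (speakers or publications) that is US-based."""
    return us_count / total_count

def compare_representation(us_speakers, total_speakers,
                           us_papers, total_papers, tolerance=0.05):
    """Classify the US plenary-speaker share relative to its publication
    share: 'about the same', 'larger', or 'smaller'."""
    s = us_share(us_speakers, total_speakers)
    p = us_share(us_papers, total_papers)
    if abs(s - p) < tolerance:  # tolerance is arbitrary in this sketch
        return "about the same"
    return "larger" if s > p else "smaller"

# Invented counts: 10 of 20 plenary speakers are US-based, against a
# 40% US share of the field's publications.
print(compare_representation(10, 20, 4_000, 10_000))  # larger
```

A speaker share well above the publication share would suggest US visibility beyond its publication volume; a share well below it would suggest the opposite.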
3.3 How Panels Characterized US Research

Each panel concluded that the United States was at least among the world leaders in its field. However, each panel also identified subfields in which the United States lagged behind the world leaders, and each identified key infrastructure concerns.

The mathematics panel found that the United States was the world leader in the field but that there were "storm clouds on the horizon" because of its heavy reliance on foreign mathematicians who had recently immigrated to the United States. If that influx wanes, US leadership might be in jeopardy, given the decline in the number of American students applying to graduate school in mathematics. (Although the other panels did not explicitly make that point, more than half the graduate students in science and engineering are foreign-born, so the national research enterprise depends heavily on researchers from other countries.)

The materials science and engineering panel found that the United States was at least among the world leaders in all subfields of materials science and engineering and the leader in some subfields, although neither the United States nor any other country was the world leader in the field as a whole. A general area of US weakness across most subfields was materials synthesis and processing. Of particular concern in this field was the lack of adequate funding to modernize major research facilities in the United States, many of which are much older than those in other countries, and to build the new facilities needed to maintain research leadership.

The immunology panel found that the United States was the world leader in immunology but that, although US dominance was evident in the major subfields, the United States was only among the world leaders in some parts of subfields.
One general concern was the increasing cost of maintaining the mouse facilities that provide a key portion of the research infrastructure. Another was the increasing difficulty of locating sufficient numbers of patients for clinical trials.

3.4 Factors Influencing US Performance

Each panel identified the key factors influencing US performance in its field and assessed the factors primarily on the basis of its members' judgment. No quantitative measures were identified that were sufficiently comprehensive to account for performance. The three panels differed somewhat in their lists of factors considered most important, as might be expected among three widely different fields. However, the three lists overlapped in the following ways:

- Human resources and graduate education—mathematics, materials, and immunology.
- Funding—mathematics, materials, and immunology.
- Innovation process and industry—materials and immunology.
- Infrastructure—materials and immunology.

COSEPUP finds that degree of overlap to be significant, partly because other recent National Academies reports9 on science and engineering have identified the importance of these same factors.

3.5 Future Relative Position of US Research

With respect to the likely future position of the United States in research relative to other countries, a more diverse set of factors was identified, largely by qualitative techniques:

- Intellectual quality of researchers and ability to attract talented researchers—mathematics, materials, and immunology.
- Ability to strengthen interdisciplinary research—mathematics and materials.
- Maintenance of strong, research-based graduate education—mathematics, immunology, and materials.
- Maintenance of a strong technological infrastructure—materials.
- Cooperation among government, industrial, and academic sectors—materials.
- Increased competition from Europe and other countries—immunology and materials.
- Effect of the shift toward health-maintenance organizations on clinical research—immunology.
- Adequate funding and other resources—mathematics, materials, and immunology.

9 Examples of these reports include Science, Technology, and the Federal Government: National Goals for a New Era; Evaluating Federal Research Programs: Research and the Government Performance and Results Act; and Allocating Federal Funds for Science and Technology.