BOX 2-1 Sources of Data Errors in the Assessment of Research-Doctorate Programs

1) Classification errors. The taxonomy of fields may not adequately reflect distinctions that the field itself considers important. For example, in anthropology, physical anthropology is a different scholarly undertaking from cultural anthropology, and each subfield has different patterns of publication. By lumping these subfields into one overall field, the committee implies comparability. Were they separate, different weights might be given to publications or citations. Anthropology is not alone in this problem; other affected fields include public health, communications, psychology, and integrated biological science. Although this study presents ranges of rankings across these fields, the committee encourages users to choose comparable programs and use the data, but to apply their own weights or to examine ranges of rankings only within their peer group.

2) Data collection errors. The committee provided detailed definitions of the important data elements used in the study, such as doctoral program faculty, but not every responding program paid careful attention to these definitions. The committee carried out broad statistical tests, examined outliers, and followed up with institutions when it had questions, but that does not mean it caught every mistake. In fields outside the humanities it counted publications by matching faculty names to Thomson Reuters data and tried to limit mistaken attribution of publications to people with similar names (a brief illustrative sketch of this matching issue follows this list). Despite these efforts, some errors may remain.

3) Omission of field-specific measures of scholarly productivity. The measures of scholarly productivity used were journal articles and, in the humanities, books and articles. Some fields have additional important measures of scholarly productivity, but these were included for only one field, the computer sciences, in which peer-reviewed conference papers are very important. A discussion of the computer science data with the field's professional society led to further work on counting publications for the entire field. In the humanities the committee omitted curated exhibition volumes for art history. It also omitted books for the science fields, as well as edited volumes and articles in edited volumes for all fields, since these were not indexed by Thomson Reuters. All of these omissions result in an undercounting of scholarly productivity. The committee regrets them, but it was limited by the available sources. In the future it might be possible to obtain data on these kinds of publications from résumés, but that would be expensive and time-consuming.
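To make the name-matching concern in item 2 concrete, the following is a minimal, hypothetical sketch; the names, records, and matching rules are invented for illustration and do not represent the committee's actual procedure. It shows how attributing publications to faculty by author name alone over-counts when two researchers share a name, and how also requiring the listed affiliation to match limits mistaken attribution.

```python
# Hypothetical illustration only: invented faculty and publication records
# showing how matching on author name alone can mis-attribute publications
# when two researchers share a name, and how an affiliation check helps.

faculty_member = {"name": "J. Smith", "institution": "University A"}

publication_records = [
    {"author": "J. Smith", "affiliation": "University A", "title": "Article 1"},
    {"author": "J. Smith", "affiliation": "University B", "title": "Article 2"},  # a different J. Smith
    {"author": "R. Jones", "affiliation": "University A", "title": "Article 3"},
]

def count_by_name(member, records):
    # Counts every record whose author string matches the faculty name,
    # regardless of where that author works; over-counts for common names.
    return sum(1 for r in records if r["author"] == member["name"])

def count_by_name_and_affiliation(member, records):
    # Also requires the listed affiliation to match the faculty member's
    # institution, which limits mistaken attribution.
    return sum(
        1
        for r in records
        if r["author"] == member["name"]
        and r["affiliation"] == member["institution"]
    )

print(count_by_name(faculty_member, publication_records))                  # 2 (over-counted)
print(count_by_name_and_affiliation(faculty_member, publication_records))  # 1
```

Even an affiliation check can fail, for example when a researcher moves between institutions during the period covered, which is one reason some attribution errors may remain despite such safeguards.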

NOTE: The computer sciences count articles presented at refereed conferences as publications, but until recently few of these papers were indexed by Thomson Reuters. To deal with this practice, the committee compiled a list of such conferences that were not indexed and counted these publications from faculty résumés, as it did in the humanities.

SOURCE: A Data-Based Assessment of Research-Doctorate Programs in the United States, p. 7.


