
Research-Doctorate Programs in the United States: Continuity and Change


PREFACE

Opportunities abound for talented individuals to seek advanced education in the United States. In 1993, over 300 universities offered the Ph.D. or related research doctorates in many fields of Science, Engineering, and Arts and Humanities. Of the 39,000 individuals who completed their doctoral studies that same year, about 30,000 earned a degree in one of those areas.

This vast educational enterprise grew significantly during the past century. From time to time, scholars and administrators alike have been prompted to examine the quality of doctoral programs available to students. At first, these analyses were modest studies based on faculty opinions of programs in their field. Over time, however, these reviews of programs of doctoral study became increasingly sophisticated. Allan Cartter (1966) and then Kenneth Roose and Charles Andersen (1970) framed the first formal, national assessments of research-doctorate programs; Lyle Jones, Gardner Lindzey, and Porter Coggeshall (1982) expanded the scope of the effort. (See Chapter 1 for a more detailed description of some of these earlier studies.) Our study builds on those earlier efforts and gives the interested reader a fresh look at doctoral programs as they stood in academic year 1992-1993.

Specifically, the authors of the 1982 study expressed the hope that their work would become part of a recurring series of assessments. Our study was intended to provide some continuity of form and data with the 1982 assessment, but we have also made significant modifications and improvements suggested by the experience gained in preparing and using previous reports.

The 1982 and 1993 studies share similar purposes:

  • To assist students and advisers in matching students' career goals with the facilities and opportunities available in the relevant research-doctorate programs;
  • To inform the practical judgment of university administrators, national- and state-level policymakers, and managers of public and private funding agencies; and
  • To provide a large, recent data base that can be used by scholars who study the characteristics of the national system of higher education and its associated research enterprise.

In keeping with these previous studies, we have collected two types of information: descriptive statistics on selected characteristics of research-doctorate programs (such as the number of faculty and students), and the views of faculty "peers" on program quality.

Because of the elements of continuity with the 1982 study, it is now possible for the first time to examine some of the changes that have taken place in various aspects of higher education over the past decade. The richness of the data makes the potential range of such analyses very broad. Within the limits of the present report, we could conduct only a few of them. However, the data base will be available to interested scholars, and we look forward to many sophisticated analyses of these data in the next few years. From these, we hope to gain insight into the factors associated with increased or reduced quality in the conduct of research-doctorate programs.

AUDIENCES FOR THE REPORT

Beyond scholars, we have kept in mind that there are several other audiences for the information contained in this report, and that each of these audiences may choose to focus on the subsets of data most important to it.

The potential graduate student, for example, may be most interested in comparing a subset of programs on the variables most likely to affect his or her choice: years to degree, student/faculty ratio, financial aid, the publication activity of the faculty, availability of research funding, and so on.

Administrators may be most interested in how their own programs have evolved since the 1982 assessment, compared with the programs they regard as competitive, or in the kinds of objective characteristics that appear to be associated with perceived improvement.

For institutional planners, these data may help inform decisions about resource allocation. Department chairs and deans will be able to compare the size, faculty research activities, and other characteristics of their doctoral programs with those of other departments in the same field, and bring this information to bear when advancing or evaluating requests for additional resources or for internal resource allocation.

Policymakers may find it useful to focus on national and regional trends over time and across disciplines, such as changes in the median number of years required for students to receive the doctoral degree, changes in faculty size, and other factors that reflect the allocation and effective use of human resources in higher education. The nature of these changes may be analyzed across disciplines, institutional type (public and private), and so on. This report also includes data on the percentages of women, minorities, and United States citizens enrolled in each field and receiving degrees in that field. While no direct comparison of these variables can be made between the 1982 and 1993 assessments, the present findings should provide a useful benchmark for the analysis of future trends.

ANALYTICAL ISSUES

The committee has given serious attention to certain dilemmas and issues that are inherent in any attempt to assess the quality of an enterprise as complex as doctoral education. The contents and format of this report are the best testimony to how we have addressed these dilemmas. However, in the interest of encouraging readers and users of the report to develop a realistic view of the limitations and subtleties of the data, in this section we enumerate some of the issues that we identified and wrestled with throughout the study.

First, although the central purpose of the present study was to assess the quality of individual doctoral programs in terms of their effectiveness in preparing graduates for careers in research and scholarship, the committee recognizes that the careers of many graduates develop outside academic settings. A comprehensive study would ideally include assessments from those who are familiar with the work of graduates in other settings, such as industry, business, government services, and the public sector generally. It would also involve direct assessment of the effectiveness of the programs in which those graduates were educated. Such assessments involve complexities arising from the interactions of the many variables that contribute to individual performance; conducting them adequately would require resources that were unavailable to the committee. They remain important goals for further effort.

In considering the central purpose of the present study, there is the fundamental question of whether it can be achieved by describing a program with a single number, or whether it is necessary to provide a range of indices reflecting the many ways in which programs differ from one another. The committee judged that no method relying exclusively on a single number can provide a valid description of the quality of a program. Rather than merely reporting where a given program ranks in its own field, it is critically important to indicate its relative standing on a number of measures. It is also important to report certain absolute quantitative measures of attributes that we believe are related to the quality of the education and training that the doctoral student receives at an institution.

Second, many thorny problems surround the assessment of the quality of a particular research program or department. Given that a research career may be only one outcome of a doctoral program, what factors should we assess? Is it more valid to look at the research of individual faculty members or of the institution as a whole? Should we consider the performance of students after their formal education is complete, over the course of their careers? How should we balance quality against quantity when examining the faculty of a particular research program? How can we distinguish the reputational effect of one or two outstanding scholars in a program composed of otherwise less remarkable colleagues? Is such a program likely to be as effective as one in which the majority of the faculty have active programs of research and scholarship, even though none of them has achieved great international prominence? Should a potential graduate student be as concerned about the density of quality in a program as about its size and coverage? In short, how can summary numbers adequately capture the actual educational environment of academic programs that vary so greatly in their characteristics?

After reviewing these issues and the measures available to us, we selected a combination of factors we believed most important in determining how effectively a doctoral program prepares students for careers in research and scholarship.

Third, to make this report as useful as possible to the widest possible audience, the committee sought to include a very large number of programs. The report covers more than 3,600 programs at over 270 institutions in 41 fields of study.

This approach captures the many different experiences students may have in research-doctorate programs and should assist students in determining which experience would be appropriate for them. While for most students the key factors in selecting a particular program relate to their goals of becoming researchers, scholars, and college teachers, issues such as a department's commitment to achieving a diverse student body and to the mentoring of doctoral students can also play a large role in the choice of an institution.

Fourth, the committee emphasizes that a major component of this study is its reputational measures, which are subjective and depend on the perceptions of the raters. When the judgments of numerous individual raters are pooled, there tends to be strong agreement about which programs are the strongest and which are the weakest; there is considerably less agreement about programs in the middle range.

Because of the nature of reputational ratings, the committee also points out that differences in rank order between two programs may reflect very small, unreliable, or insignificant differences in actual program quality, and should be regarded by readers with great caution. Appendix Q illustrates this situation. Simple reputational rankings like those reported in the popular media may make for easier reading than the tables in this report, but because they mask subtleties that may be important to the reader, they also make for poorer information.
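To make this last point concrete, here is a minimal simulation sketch. It is not drawn from the report's data or methods; the program names, rating scale, panel size, and rater-disagreement level below are all hypothetical assumptions, chosen only to illustrate why a small difference in mean reputational rating translates into an unstable rank order.

```python
# Hypothetical illustration (not the report's data or method): when two
# programs' "true" mean reputational ratings differ only slightly, a finite
# panel of raters will often rank them in the opposite order.
import random
import statistics

random.seed(1)

TRUE_MEANS = {"Program A": 4.00, "Program B": 3.90}  # assumed 0-5 rating scale
RATERS = 100       # assumed number of raters per program
NOISE_SD = 1.0     # assumed spread of individual raters' judgments
TRIALS = 1_000     # number of simulated rating panels

flips = 0
for _ in range(TRIALS):
    # Each trial pools one panel's noisy ratings into a mean score per program.
    sample_means = {
        name: statistics.mean(random.gauss(mu, NOISE_SD) for _ in range(RATERS))
        for name, mu in TRUE_MEANS.items()
    }
    if sample_means["Program B"] > sample_means["Program A"]:
        flips += 1

print(f"Nominally weaker program ranked higher in {flips / TRIALS:.0%} of panels")
```

Under these assumptions, the nominally weaker program comes out ahead in roughly a quarter of the simulated panels, which illustrates why a one-place difference in rank may carry little information.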

Fifth, the committee wishes to draw attention to the existence of significant differences among disciplines. Patterns of research and scholarship in the Arts and Humanities, for example, differ considerably from those in the Physical Sciences and Mathematics. These differences include the manner in which research findings are disseminated (books, articles, monographs, conferences, etc.), the expected time to complete a doctoral degree, the importance of postdoctoral appointments in a student's education, the balance between individual and research-team contributions, and so on. These differences will be evident in the pages that follow. It is crucial that the reader interpret particular indices in terms of the disciplinary field of a given program rather than against some absolute ideal standard of graduate education in general.

In addition, new fields of study, particularly interdisciplinary combinations, have emerged since 1982. The number of programs in some other fields has declined, and still others (most notably in the Biological Sciences) have undergone internal reorganization of sub-units, leading to the creation of new doctoral programs. These changes betoken a lively evolution of concepts and methods in the fields concerned, but they make simple comparisons between 1982 and 1993 problematic.

In sum, after considering all of these issues, the committee concluded that it would be of most value to readers of this volume to report and emphasize multiple indices of quality, and to stress the unimportance of minor differences in ranking. We have been particularly careful to incorporate a range of quantitative indices into our assessment variables, thereby placing reputational ratings in a proper and modest perspective. In a word, there is no single agreed-upon index of a unitary attribute called "quality"; there are several "qualities," and their relative importance is largely a function of the needs of the reader.

