Suggested Citation: "Front Matter." National Research Council. 2011. A Data-Based Assessment of Research-Doctorate Programs in the United States (with CD). Washington, DC: The National Academies Press. doi: 10.17226/12994.


A DATA-BASED ASSESSMENT OF RESEARCH-DOCTORATE PROGRAMS IN THE UNITED STATES

Committee on an Assessment of Research Doctorate Programs
Jeremiah P. Ostriker, Charlotte V. Kuh, and James A. Voytuk, Editors
Board on Higher Education and Workforce
Policy and Global Affairs

THE NATIONAL ACADEMIES PRESS
Washington, D.C.
www.nap.edu

THE NATIONAL ACADEMIES PRESS
500 Fifth Street, N.W.
Washington, DC 20001

NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance.

This project was supported by the Andrew W. Mellon Foundation, the Alfred P. Sloan Foundation, the U.S. Department of Energy (Grant DE-FG02-07ER35880), the National Institutes of Health (Grant N01-OD-4-2139, TO#170), the National Science Foundation (Grant OIA-0540823), the National Research Council, and contributions from 212 U.S. universities. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the organizations or agencies that provided support for the project.

International Standard Book Number-13: 978-0-309-16030-8
International Standard Book Number-10: 0-309-16030-8
Library of Congress Control Number: 2011933643

Additional copies of this report are available from the National Academies Press, 500 Fifth Street, N.W., Lockbox 285, Washington, DC 20055; (800) 624-6242 or (202) 334-3313 (in the Washington metropolitan area); Internet, http://www.nap.edu

Copyright 2011 by the National Academy of Sciences. All rights reserved.

Printed in the United States of America

The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. Upon the authority of the charter granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Ralph J. Cicerone is president of the National Academy of Sciences.

The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. Charles M. Vest is president of the National Academy of Engineering.

The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Harvey V. Fineberg is president of the Institute of Medicine.

The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy's purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Ralph J. Cicerone and Dr. Charles M. Vest are chair and vice chair, respectively, of the National Research Council.

www.national-academies.org

Committee on an Assessment of Research-Doctorate Programs

Jeremiah P. Ostriker, Committee Chair, Charles A. Young Professor of Astronomy and Provost Emeritus, Princeton University
Virginia S. Hinshaw, Vice Chair, Chancellor, University of Hawai’i at Mānoa
Elton D. Aberle, Dean Emeritus of the College of Agricultural and Life Sciences, University of Wisconsin–Madison
Norman Bradburn, Tiffany and Margaret Blake Distinguished Service Professor Emeritus, University of Chicago
John I. Brauman, J. G. Jackson–C. J. Wood Professor of Chemistry, Emeritus, Stanford University
Jonathan R. Cole, John Mitchell Mason Professor of the University, Columbia University (resigned June 2010)
Paul W. Holland, Frederic M. Lord Chair in Measurement and Statistics (retired), Educational Testing Service
Eric W. Kaler, Provost and Senior Vice President for Academic Affairs, Stony Brook University
Earl Lewis, Provost and Executive Vice President for Academic Affairs and Asa Griggs Candler Professor of History and African American Studies, Emory University
Joan F. Lorden, Provost and Vice Chancellor for Academic Affairs, University of North Carolina at Charlotte
Carol B. Lynch, Dean Emerita of the Graduate School, University of Colorado, Boulder
Robert M. Nerem, Parker H. Petit Distinguished Chair for Engineering in Medicine, and former director, Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology
Suzanne Ortega, Provost and Executive Vice President for Academic Affairs, University of New Mexico
Robert J. Spinrad, Vice President (retired), Technology Strategy, Xerox Corporation (resigned January 2008; deceased September 2009)
Catharine R. Stimpson, Dean, Graduate School of Arts and Science, and University Professor, New York University
Richard P. Wheeler, Vice Provost, University of Illinois at Urbana-Champaign

Staff
Charlotte V. Kuh, Study Director
Peter H. Henderson, Senior Program Officer
James A. Voytuk, Senior Program Officer
John Sislin, Program Officer
Michelle Crosby-Nagy, Research Assistant
Kara Murphy, Research Assistant
Rae E. Allen, Administrative Coordinator
Sabrina E. Hall, Program Associate

Data Panel for the Assessment of Research-Doctorate Programs

Norman M. Bradburn, Ph.D. (Chair), Tiffany and Margaret Blake Distinguished Service Professor and Provost Emeritus, University of Chicago
Richard Attiyeh, Ph.D., Vice Chancellor for Research, Dean of Graduate Studies, and Professor of Economics Emeritus, University of California, San Diego
Scott Bass, Ph.D., Provost, The American University
Julie Carpenter-Hubin, M.A., Director of Institutional Research and Planning, The Ohio State University
Janet L. Greger, Ph.D., Vice Provost for Strategic Planning, University of Connecticut (retired)
Dianne Horgan, Ph.D., Associate Dean of the Graduate School, University of Arizona
Marsha Kelman, M.B.A., Associate Vice President, Policy and Analysis, Office of the President, University of California
Karen Klomparens, Ph.D., Dean of the Graduate School, Michigan State University
Bernard F. Lentz, Ph.D., Vice Provost for Institutional Research, Drexel University
Harvey Waterman, Ph.D., Associate Dean for Academic Affairs, Graduate School-New Brunswick, Rutgers, The State University of New Jersey
Ami Zusman, Ph.D., Coordinator, Graduate Education Planning & Analysis, Office of the President, University of California (retired)

BOARD ON HIGHER EDUCATION AND WORKFORCE

William E. Kirwan, Chair, Chancellor, University System of Maryland
F. King Alexander, President, California State University, Long Beach
Susan K. Avery, President and Director, Woods Hole Oceanographic Institution
Jean-Lou Chameau [NAE], President, California Institute of Technology
Carlos Castillo-Chavez, Professor of Biomathematics and Director, Mathematical and Theoretical Biology Institute, Department of Mathematics and Statistics, Arizona State University
Rita Colwell [NAS], Distinguished University Professor, University of Maryland, College Park, and The Johns Hopkins University Bloomberg School of Public Health
Peter Ewell, Vice President, National Center for Higher Education Management Systems
Sylvia Hurtado, Professor and Director, Higher Education Research Institute, University of California, Los Angeles
William Kelley [IOM], Professor of Medicine, Biochemistry, and Biophysics, University of Pennsylvania School of Medicine
Earl Lewis, Provost, Executive VP for Academic Affairs, and Professor of History, Emory University
Paula Stephan, Professor of Economics, Andrew Young School for Policy Studies, Georgia State University

Staff
Charlotte Kuh, Deputy Executive Director, Policy and Global Affairs
Peter Henderson, Director, Board on Higher Education and Workforce
James Voytuk, Senior Program Officer
Mark Regets, Senior Program Officer
Christopher Verhoff, Financial Associate
Michelle Crosby-Nagy, Research Associate
Sabrina E. Hall, Program Associate

FOREWORD

This report and its large collection of quantitative data will become, in our view, an important and transparent instrument for strengthening doctoral education in the United States. The report follows in the tradition of assessments conducted by the National Research Council for almost 30 years, but with important changes. Beyond the traditional printed document, the data that inform and grow out of this report are being made available electronically to promote widespread use and analysis of many characteristics of doctoral programs. The unparalleled data set covers twenty important variables for an enormous number of programs in 62 major fields. It enables university faculty, administrators, and funders to compare, evaluate, and improve programs; it permits students to find those programs best suited to their needs; and it allows for updating important information on a regular basis to permit continuous improvement.

Much has been learned from this study, which turned out to be more challenging and to take longer than we originally expected. An enormous effort was contributed by universities to collect and recheck the data, demonstrating their desire to identify comparative strengths and weaknesses and to show accountability. The study committee had to refine and revise its methodology as it sought to provide tools for evaluating and comparing programs. Although the data are based on the 2005-2006 academic year, they permit many useful comparisons of programs across many dimensions. All those interested in graduate education can learn much from studying the data, comparing programs, and drawing lessons for how programs can be improved. The data for many variables can be updated and made current on a regular basis by universities.
In order to identify the variables most valued by doctoral faculty, as well as to avoid relying exclusively on reputational rankings as earlier doctorate assessments had done, the committee employed two alternative ranking methods. The first method asked faculty in each field to assign a weight to each of the quantitative variables in the institutional surveys; the weighted variables could then be used to determine ratings and rankings of programs. The second method was to survey a subset of faculty, ask them to rank a sample of programs in their field, and then use principal components and regression analyses to obtain the implied weights for the institutional variables that would most closely reproduce the results.

The committee initially envisioned combining the results of these two methods into a unified set of rankings. The production of rankings from quantitative measures turned out to be more complicated and to carry greater uncertainty than originally thought. The committee ultimately concluded that it should present the results of the two approaches separately, as illustrations of how individuals can apply their own values to the quantitative measures to obtain rankings suitable for their own specific purposes. The illustrative rankings, which are provided with ranges to show some of the statistical uncertainties, should not be interpreted as definitive conclusions about the relative quality of doctoral programs. Doctoral programs are valued for a variety of reasons, and their characteristics are valued in different ways by stakeholders; there is no single universal criterion or set of criteria.
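The two approaches can be sketched in miniature. The snippet below is purely illustrative: the program names, the three standardized variables, and the weights are hypothetical stand-ins rather than the committee's data, and the second step uses a bare-bones least-squares solve in place of the principal-components-plus-regression procedure the committee actually employed.

```python
# Illustrative sketch of the two ranking approaches.
# All programs, variables, and weights here are hypothetical examples.

def direct_rating(values, weights):
    """Method 1: rate a program as a weighted sum of its standardized
    quantitative variables, using survey-derived faculty weights."""
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def implied_weights(X, y):
    """Method 2 (simplified): recover the variable weights that best
    reproduce survey-based program ratings y from program variables X,
    by solving the normal equations (X'X)w = X'y with Gaussian
    elimination. (A plain least-squares fit stands in here for the
    committee's principal-components-based regression.)"""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
         for a in range(k)]
    b = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):            # back-substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, k))) / A[r][r]
    return w

# Hypothetical standardized measures: publications per faculty member,
# citations per publication, fraction of first-year students funded.
programs = {
    "Program A": [1.2, 0.8, 0.9],
    "Program B": [0.7, 1.1, 0.6],
    "Program C": [0.9, 0.9, 1.0],
}
faculty_weights = [0.5, 0.3, 0.2]             # Method 1: survey weights

ratings = {p: direct_rating(v, faculty_weights) for p, v in programs.items()}
ranking = sorted(ratings, key=ratings.get, reverse=True)
```

Feeding ratings produced by the first method back into `implied_weights` recovers the original weights, which conveys the spirit, though not the detail, of the committee's second method.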

The illustrative rankings and their ranges do provide important insights into how programs can be ranked according to different criteria and which variables are most important to faculty; these are typically variables that measure per capita scholarly output. Faculty generally do not assign great importance to program size when assigning weights directly, but when they rank programs, program size appears to carry large implicit weight. It is our view that strengthening graduate education will require paying attention to all of the variables in the dataset, not just those most important to faculty. Three additional metrics were presented in the report for each program, focused separately on research activity, student support and outcomes, and diversity of the academic environment. A major value of the study is that the data set allows all stakeholders to assign the weights they believe to be important and then compare programs on that basis.

If a process of continuous improvement is to result from this exercise, all of the stakeholders interested in graduate education will need to focus on steps to improve performance across the board. A major commitment by universities will be needed to update the data set on a regular basis, so that programs can continue to be compared and evaluated. If this is done, with the updated dataset as an important new tool, and we strive to improve what is already the world's strongest system of higher education, we believe that American doctoral education can continue to bring enormous benefits to our citizens and remain the envy of the world.

Ralph J. Cicerone, President, National Academy of Sciences
Charles M. Vest, President, National Academy of Engineering
Harvey V. Fineberg, President, Institute of Medicine

Preface and Acknowledgments

Doctoral education, a key component of higher education in the United States, is performing well. It educates future professors, researchers, innovators, and entrepreneurs. It attracts students and scholars from all over the world and is being emulated globally. This success, however, should not engender complacency.

It was the intent of this study to measure characteristics of doctoral programs that are of importance to students, faculty, administrators, and others who care about the quality and effectiveness of doctoral programs, in order to permit comparisons among programs in a field of study and to provide a basis for self-improvement within the disciplines. To this end, the Committee on an Assessment of Research-Doctorate Programs collected a large amount of data relating to research productivity, student support and outcomes, and program diversity from over 5,000 doctoral programs in 62 fields at 212 U.S. universities. Some of these data, such as the percentage of entering students who complete in a given time period, the percentage of students funded in the first year, and the diversity of program faculty and students, have not been collected in earlier studies. These data appear in the online spreadsheets that accompany this report and can easily be selected, downloaded, and compared. The most important benefits of this study will flow from examination and analysis of the data that were collected.

In addition to making new data available, the committee addressed the issue of program rankings from an illustrative standpoint. Rankings based on faculty opinions of program quality had been produced in earlier NRC reports in 1982 and 1995. In those studies, the ratings and rankings were derived from surveys in which faculty members were asked to assess the scholarly quality and effectiveness in education of individual doctoral programs in their own fields; that is, they were based on reputation.
There was a widespread reaction, after the completion of the 1995 study, that a single reputation-based measure was inadequate to represent the many important characteristics that are needed to describe and assess the full range of U.S. doctoral programs. The present NRC study, A Data-Based Assessment of Research-Doctorate Programs, differs significantly from these earlier studies in that it uses objective data to estimate the overall quality of doctoral programs according to values important to faculty, and does so in two different ways as illustrations. It also creates measures of program strength along three separate dimensions, so that five separate measures have been developed in all. Using a much broader range of collected data and information from new surveys, this data-based assessment obtains faculty importance weights for 20 program characteristics and applies two specific techniques to obtain weights that relate these characteristics to perceived program quality. It has also incorporated the uncertainty that comes from differences in faculty views, variability in program data, and statistical variation to produce ranges of rankings for each program in 59 disciplines.

The committee considers these ranges of rankings to be illustrative; other ranges could have been obtained with different weights. One example of alternative ranges of rankings, with weights obtained from the surveys, is the set of ranges along the separate dimensions of (1) research activity, (2) student support and outcomes, and (3) diversity of the academic environment. These dimensional measures are all examples of ways that the data and weights can be combined. Users are encouraged to develop additional measures employing weights that reflect their values. None of these ranges of rankings should be considered NRC-endorsed.

The committee believes that the concept of a precise ranking of doctoral programs is mistaken. How doctoral programs are ranked will depend on which raters are chosen, on year-to-year variability in program characteristics, and on the statistical error involved in any estimation. These sources of uncertainty imply that rankings of programs are intrinsically imprecise. The committee has tried to take these sources of variation into account in its illustrative rankings and, in order to convey that variation, has presented ranges of rankings. The two overall measures illustrate that data-based rankings may vary depending on the weights applied to program characteristics, and that these weights in turn may vary depending on which raters are chosen and the techniques used to obtain their rankings. As noted earlier, it is the comparison of program characteristics that will, in the end, be more valuable than any range of rankings. The analysis of these characteristics will help direct faculty in academic programs to areas of potential improvement and will expand what students understand about the programs that interest them.

Because some of the data collected for this study had not been collected previously, time had to be spent on data validation and assurance. The statistical techniques also required time to develop and test. As a result, much of the data presented here come from 2005-2006. Programs and faculty may have changed in the intervening period. In the online spreadsheets, each program has a URL; users are encouraged to visit these program websites to obtain the latest information about programs of interest. Now that the statistical machinery and the data structure are in place, it should be easier to replicate this study with current data in the near future.
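The idea of a range of rankings can be illustrated with a small simulation. Everything below is hypothetical: five made-up programs, three made-up standardized variables, and Gaussian noise standing in for rater-to-rater variation in weights. The committee's actual resampling procedure, described in Appendix J, is considerably more elaborate.

```python
import random

# Hypothetical standardized measures for five fictitious programs.
programs = {
    "P1": [1.2, 0.8, 0.9],
    "P2": [0.7, 1.1, 0.6],
    "P3": [0.9, 0.9, 1.0],
    "P4": [1.0, 0.6, 0.7],
    "P5": [0.6, 1.0, 1.1],
}
mean_weights = [0.5, 0.3, 0.2]      # e.g., average faculty importance weights

def rank_once(weights):
    """Rank all programs (1 = best) under one draw of weights."""
    scores = {p: sum(w * v for w, v in zip(weights, vals))
              for p, vals in programs.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {p: i + 1 for i, p in enumerate(ordered)}

def ranking_ranges(n_draws=2000, spread=0.1, seed=0):
    """Redraw the weights n_draws times with Gaussian perturbations and
    report each program's 5th-95th percentile range of ranks."""
    rng = random.Random(seed)
    ranks = {p: [] for p in programs}
    for _ in range(n_draws):
        w = [max(0.0, rng.gauss(m, spread)) for m in mean_weights]
        for p, r in rank_once(w).items():
            ranks[p].append(r)
    ranges = {}
    for p, rs in ranks.items():
        rs.sort()
        ranges[p] = (rs[int(0.05 * len(rs))], rs[int(0.95 * len(rs)) - 1])
    return ranges
```

A program whose simulated range is, say, ranks 1-2 is robustly near the top under these weights; one spanning ranks 2-5 is highly sensitive to them. That sensitivity is exactly the imprecision the report's ranges of rankings are meant to convey.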
ACKNOWLEDGMENT OF REVIEWERS AND SPONSORS

This report has been reviewed in draft form by individuals chosen for their diverse perspectives and technical expertise, in accordance with procedures approved by the National Academies’ Report Review Committee. The purpose of this independent review is to provide candid and critical comments that will assist the institution in making its published report as sound as possible and to ensure that the report meets institutional standards for objectivity, evidence, and responsiveness to the study charge. The review comments and draft manuscript remain confidential to protect the integrity of the process.

We wish to thank the following individuals for their review of this report and the accompanying data: John Bailar, University of Chicago; Diane Birt, Iowa State University; Craig Calhoun, New York University; Alicia Carriquiry, Iowa State University; Joseph Cerny, University of California, Berkeley; Gill Clarke, University of Bristol; David Donoho, Stanford University; Ronald Ehrenberg, Cornell University; Daniel Fogel, University of Vermont; George Langford, Syracuse University; Risa Palm, The State University of New York; William Press, University of Texas, Austin; Raul Ramos, University of Houston; Lydia Snover, Massachusetts Institute of Technology; Stephen Stigler, University of Chicago; Patrick Stover, Cornell University; Andrew Wachtel, Northwestern University; and John Wiley, University of Wisconsin-Madison.

The review of this report was overseen by Lyle Jones, University of North Carolina, Chapel Hill, and Stephen Fienberg, Carnegie Mellon University. Although the reviewers listed above have provided many constructive comments and suggestions, they were not asked to endorse the conclusions or recommendations, nor did they see the final draft of the report before its release. Responsibility for the final content of this report rests entirely with the authoring committee and the institution.

In addition to the reviewers, the sponsors of this study deserve both recognition and thanks for their patience and resources. First, thanks to the 212 universities that provided not only significant financial support but also staff time to obtain and validate high-quality data. Second, our thanks to the foundations and agencies that provided support to this project: the Andrew W. Mellon Foundation, the Alfred P. Sloan Foundation, the U.S. Department of Energy, the National Institutes of Health, the National Science Foundation, and the National Research Council. As was the case with the reviewers, responsibility for the final content of this report rests entirely with the authoring committee and the institution.

I would also like to thank our data and statistical contractor, Mathematica Policy Research, and particularly David Edson, who oversaw countless runs and data revisions with calm and good sense. I am also grateful to the Council of Graduate Schools, the American Council of Learned Societies, and the Association of Graduate Schools of the Association of American Universities, which heard progress reports at various stages of the project and offered suggestions that, we hope, have resulted in a better report. Thanks also to the scholarly societies and councils of department chairs who have helped us shape both the data for this report and its presentation.
Finally, I would like to thank the staff of the project--Charlotte Kuh, James Voytuk, Michelle Crosby-Nagy, Sabrina Hall, Rae Allen, and Kara Murphy--for their tireless efforts to bring to completion what has been a lengthy but rewarding project.

Jeremiah P. Ostriker, Chair
Committee to Assess Research-Doctorate Programs

Contents

Summary 1
1 Introduction 9
2 Context and Motivation 17
3 Study Design 27
4 The Methodologies Used to Derive Two Illustrative Rankings 49
5 Faculty Values as Reflected in the Two Illustrative Rankings 65
6 Some Uses of the Data 73
7 The Data and Principal Findings 81
8 Looking Ahead 105

Appendixes
A Committee Biographies 111
B Taxonomy of Fields 119
C Participating Institutions 123
D Questionnaires 131
    Institutional Questionnaire 131
    Program Questionnaire 163
    Program Questionnaire for Emerging Fields 194
    Survey of Program Quality 198
    Faculty Questionnaire 204
    Admitted-to-Candidacy Doctoral Student Questionnaire 222

E List of Variables 241
F R and S Coefficients by Field 247
G Correlation for Median R and S Rankings by Broad Field 279
H Detail for the Rating Study 281
I Count of Ranked and Unranked Programs by Field 283
J A Technical Discussion of the Process of Rating and Ranking Programs in a Field 285

Boxes, Figures, and Tables

BOXES
3-1 Variables Used in Data Cleaning, 46
4-1 Fields for Which the Correlation of the Median R and S Ranking Is Less than 0.75, 52
5-1 Characteristics Included in the Faculty Weighting Process, 65

FIGURES
4-1 Steps in Calculating Two Types of Overall Program Rankings, 53
7-1 Minority and Non-minority Ph.D.’s, 1993 and 2006, 86
7-2 Percentage of Underrepresented Minority Faculty and Students by Broad Field, 2006, 93
7-3 Percentage of Faculty and Students Female by Broad Field, 2006, 95

TABLES
1-1 Coverage of NRC Studies, 1982–2006, 10
1-2 Study Questionnaires and Response Rates, 12
2-1 Ph.D.’s Awarded to White, Non-Hispanic U.S. Citizens or Permanent Resident Males, in Selected Broad Fields, 1993 and 2006, 24
3-1 Numbers of Programs and Institutions in Each Broad Field, 31
3-2 Fields in 1993 and 2006 Data Collection, 34
3-3 Characteristics Listed in the Online Database, 39
4-1 Summary of Differences Between 1995 and 2006 Studies, 56
5-1 Most Highly Rated Characteristics of Doctoral Programs on R and S Measures, 67
5-2 Faculty Importance Weights by Broad Field, 68
5-2A Average Faculty Importance Weights on Components of Research Activity Dimensional Measure, 69
5-2B Average Faculty Importance Weights on Components of the Student Support and Outcomes Dimensional Measure, 70
5-2C Average Faculty Importance Weights on Components of the Diversity Dimensional Measure, 71

6-1 Ranges of Rankings and Data for Five Mid-Atlantic Chemistry Programs, 74
6-2 An Example of Selected Ranges of Rankings for a Doctoral Program in Biochemistry, 76
6-3 Calculation of the R and S Rankings for a Single Program, 77
7-1 Weighted Measures for Faculty and Students, 1993 and 2006, 82
7-2 Changes in Ph.D.’s, Enrollment, and Gender Composition, Common Programs, 1993–2006, 83
7-3 Average of Total Faculty, All Common Programs, 1993 and 2006, 84
7-4 Percentage Change in Number of Doctoral Recipients, Common Programs, 1993 and 2006, 84
7-5 Number of Postdocs by Broad Field, 2006, 85
7-6 Number of Ph.D.’s in 2006 NRC Study Compared with Ph.D.’s in NSF Doctorate Record File, 87
7-7 Institutions with 50 Percent of Ph.D.’s in Ranked Programs, by Control (Public or Private), 2002–2006 (average number of Ph.D.’s), 89
7-8 Research Measures and Program Size, 90
7-9 Student Characteristics by Broad Field Average and for the Largest Quartile, 92
7-10 Diversity Measures, 92
7-11 Fields with More than 10 Percent of Enrolled Students from Underrepresented Minority (URM) Groups, 94
7-12 Science and Engineering Fields with More than 15 Percent of Doctoral Faculty Female, 96
7-13 Faculty Data: Selected Measures, 2006, 98
7-14 Response Rates: Student Survey, Five Fields, 2006, 99
7-15 Student Satisfaction: Programs with More Than 10 Students Responding, 2006, 100
7-16 Student Productivity: Programs with More Than 10 Student Responses, 101
7-17 Students: Advising and Academic Support (percent), 102
7-18 Student Career Objectives at Program Entry and at Time of Response (percent), 103


A Data-Based Assessment of Research-Doctorate Programs in the United States provides an unparalleled dataset that can be used to assess the quality and effectiveness of doctoral programs based on measures important to faculty, students, administrators, funders, and other stakeholders.

The data, collected for the 2005–2006 academic year from more than 5,000 doctoral programs at 212 universities, cover 62 fields. Included for each program are such characteristics as faculty publications, grants, and awards; student GRE scores, financial support, and employment outcomes; and program size, time to degree, and faculty composition. Measures of faculty and student diversity are also included.

The book features an analysis of selected findings across six broad fields (agricultural sciences; biological and health sciences; engineering; physical and mathematical sciences; social and behavioral sciences; and humanities), a discussion of trends in doctoral education since the last assessment in 1995, and suggested uses of the data. It also includes a detailed explanation of the methodology used to collect the data and calculate ranges of illustrative rankings.

Included with the book is a comprehensive CD-ROM with a data table in Microsoft Excel. In addition to data on the characteristics of individual programs, the data table contains illustrative ranges of rankings for each program, as well as ranges of rankings for three dimensions of program quality: (1) research activity, (2) student support and outcomes, and (3) diversity of the academic environment.

As an aid to users, the data table is accompanied by demonstrations of Microsoft Excel features that can make the spreadsheet easier to work with, such as hiding and unhiding columns, copying and pasting columns to a new worksheet, and filtering and sorting data. Also provided with the data table is a set of scenarios showing how typical users might extract data from the spreadsheet.
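The same filter-sort-extract workflow can also be scripted. The sketch below mimics the spreadsheet scenarios in pandas; the column names and values are invented stand-ins, not the actual headers of the NRC data table.

```python
import pandas as pd

# Invented sample rows standing in for the NRC spreadsheet's contents.
df = pd.DataFrame({
    "Institution": ["Univ A", "Univ B", "Univ C", "Univ D"],
    "Field": ["Chemistry", "Chemistry", "History", "Chemistry"],
    "R_Rank_5th": [3, 1, 2, 7],     # 5th-percentile R ranking (lower is better)
    "R_Rank_95th": [12, 6, 9, 20],  # 95th-percentile R ranking
})

# Filter to one field, then sort by the 5th-percentile R ranking --
# the scripted equivalents of Excel's AutoFilter and Sort.
chem = df[df["Field"] == "Chemistry"].sort_values("R_Rank_5th")

# Keep a subset of columns, as a user would when copying columns
# to a new worksheet.
subset = chem[["Institution", "R_Rank_5th", "R_Rank_95th"]]
print(subset.to_string(index=False))
```

In practice a user would replace the inline DataFrame with `pd.read_excel(...)` pointed at the CD-ROM's data table.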

PhDs.org, an independent website not affiliated with the National Research Council, incorporated data from the research-doctorate assessment into its Graduate School Guide. Users of the Guide can choose the weights assigned to the program characteristics measured by the National Research Council and others, and rank graduate programs according to their own priorities.
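A weight-based ranking of this kind can be illustrated with a short, self-contained sketch: each measure is standardized so different units are comparable, multiplied by a user-chosen weight (negative when less is better), and summed into a score. The program names, measures, and weights below are all invented for illustration; this is not the NRC's or PhDs.org's actual computation.

```python
# Hypothetical programs with three made-up measures.
programs = {
    "Univ A": {"pubs_per_faculty": 4.2, "completion_pct": 55, "time_to_degree": 6.1},
    "Univ B": {"pubs_per_faculty": 2.8, "completion_pct": 70, "time_to_degree": 5.2},
    "Univ C": {"pubs_per_faculty": 3.5, "completion_pct": 60, "time_to_degree": 5.8},
}

# User-chosen weights; the sign encodes whether more of the measure
# is better (+) or worse (-).
weights = {"pubs_per_faculty": 0.5, "completion_pct": 0.3, "time_to_degree": -0.2}

def standardize(values):
    """Rescale raw values to z-scores so measures on different scales are comparable."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

names = list(programs)
scores = dict.fromkeys(names, 0.0)
for measure, w in weights.items():
    zs = standardize([programs[n][measure] for n in names])
    for n, z in zip(names, zs):
        scores[n] += w * z  # weighted sum of standardized measures

# Higher composite score = better rank under these weights.
ranking = sorted(names, key=scores.get, reverse=True)
print(ranking)
```

Changing the weights reorders the list, which is the essence of letting users "rank graduate programs according to their own priorities."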
