Suggested Citation:"1. Introduction." National Research Council. 2003. Assessing Research-Doctorate Programs: A Methodology Study. Washington, DC: The National Academies Press. doi: 10.17226/10859.
1 Introduction

Assessments of the quality of research-doctorate programs and their faculty are rooted in the desire of programs to improve quality through comparisons with other similar programs. Such comparisons help them achieve more effectively their ultimate objective: to serve society through the education of students and the production of research. Accompanying this desire to improve is a complementary goal to enhance the effectiveness of doctoral education and, more recently, to provide objective information that would assist potential students and their advisors in comparing programs. The first two goals emerged as graduate education began to grow before World War II and as higher education in the United States was transformed from a predominantly elite enterprise to the widespread and diverse enterprise that it is today. The final goal became especially prominent during the past two decades as doctoral training expanded beyond training for the professoriate.

As we begin a study of methodology for the next assessment of research-doctorate programs, we have stepped back to ask some fundamental questions: Why are we doing these rankings? Whom do they serve? How can we improve them? This introduction will also provide a brief history of the assessment of doctoral programs and report on more recent movements to improve doctoral education.

A SHORT HISTORY OF THE ASSESSMENT OF RESEARCH-DOCTORATE PROGRAMS

The assessment of doctorate programs in the United States has a history of at least 75 years. Its origins may date to 1925, a year in which 1,206 Ph.D. degrees were granted by 61 doctoral institutions in the United States.1 About two-thirds of these degrees were in the sciences, including the social sciences, and most of the remaining third were in the humanities. Yet, Raymond M. Hughes, president of Miami University of Ohio and president of the Association of American Colleges, said in his 1925 annual report:

    At the present time every college president in the country is spending a large portion of his time in seeking men to fill vacancies on the staff of his institution, and every man [president] is confronted with the question of where he can hope to get the best prepared man of the particular type he desires.

Hughes conducted a study of 20 to 60 faculty members in each field and asked them to rank about 38 institutions according to "esteem at the present time for graduate work in your subject." Graduate education continued to expand, and from time to time, reputational studies of graduate programs were carried out. These studies limited themselves to "the best" programs and, increasingly, those programs that were excluded complained about sampling bias.

In the 1960s, Allan Cartter, vice president of the American Council on Education, pioneered the modern approach for assessing reputation, which was used in the 1982 and 1993 NRC assessments. He sought to include all major universities and, instead of asking raters about the "esteem" in which graduate programs were held, he asked for qualitative judgments of three kinds: 1) the quality of the graduate faculty, 2) the effectiveness of the doctoral program, and 3) the expected change in relative position of a program in the next 5 to 10 years.2 In 1966, when Cartter's first study appeared, slightly over 19,000 Ph.D.s were being produced annually in over 150 institutions. Ten years later, following a replication of the Cartter study by Roose and Anderson in 1970, another look at the methodology to assess doctoral programs was undertaken under the auspices of the Conference Board of Associated Research Councils.3 A conference on assessing doctoral programs concluded that raters should be given the names of faculty in departments they rate and that "objective measures" of the characteristics of programs should be collected in addition to the reputational measures. These recommendations were followed in the 1982 assessment that was conducted by the National Research Council (NRC).4 By this time, over 31,000 doctorates were being produced by over 300 institutions, of which 228 participated in the NRC study.

The most recent NRC assessment of doctorates, conducted in 1993 and published in 1995, was even more comprehensive. The 1995 Study design tried to maintain continuity with the 1982 measures, but it added and refined quantitative measures. With the help of citation and publication data gathered by the Institute for Scientific Information (ISI), it expanded the measures of publications and citations. It also included measures of awards and honors for the humanities. It covered 41 fields in 274 institutions, and data were presented for 3,634 doctoral programs.

This expansion, however, did not produce a noncontroversial set of rankings. It is widely asserted that "halo" effects give high rankings to programs on the basis of recognizable names (star faculty) without considering average program quality. Similarly, there is evidence to support the contention that programs within well-known, larger universities may have been rated higher than equivalent programs in lesser-known, smaller institutions. It is further argued that the reputational rankings favor already prestigious departments, which may be, to put it gently, "past their primes," while de-emphasizing striving programs that are investing in achieving excellence. Another criticism involves the inability of the study to recognize the excellence of "niche" and smaller programs.

1Goldberger et al., eds. (1995:10).
2Cartter (1966).
3Consisting of the Social Science Research Council, the American Council of Learned Societies, the American Council on Education, and the National Research Council.
It is also asserted that, although reputational measures seek to address scholarly achievement as something separate from educational effectiveness, they do not succeed. The high correlation between these two measures supports this assertion. Finally, and most telling, there is criticism of the entire ranking business. Much of this criticism, directed against rankings published by a national news magazine, attacked those annual rankings as derived from capricious criteria constructed from varying weights of changing variables. Fundamentally, the incentives created by any system of rankings were said to induce an emphasis on research productivity and scholarly ranking of faculty to the detriment of another important objective of doctoral education: the training of the next generation of scholars and researchers. Rankings were said to create a "horse race" mentality in which every doctoral program, regardless of its mission, was encouraged to emulate programs in the nation's leading research universities with their emphasis on research and the production of faculty who focused primarily on research. At the same time, a growing share of Ph.D.s were setting off for careers outside research universities and, even when they did take on academic positions, taught in institutions that were not research universities. As Ph.D. destinations changed, the question arose whether the research universities were providing appropriate training.

4Jones et al. (1982).

Calls for Reforms in Graduate Education

Although rankings may be under fire from some quarters, this report comes at a time when such an effort can be highly useful for U.S. doctoral education generally. Recently, there have been numerous calls for reform in graduate education.
Although based on solid research about selected programs and their graduates, these calls lack a general knowledge base that can inform recommendations about, for example, attrition from doctoral study, time to degree, and completion. Further, individual programs find it difficult to compare themselves with similar programs. Some description of the suggested graduate education reforms can help to explain why a database, constructed on uniform definitions and collected in the same year, could be helpful both as a baseline from which reform can be measured and as a support for data-based discussions of whether reforms are needed.

In the late 1940s, the federal government was concerned with the need for educating a large number of college-bound World War II veterans and created the National Science Foundation to support basic science research at universities and to fund those students interested in pursuing advanced training and education. Competition with the Russians, the battle to win the Cold War, and the sense that greater expertise in science and engineering was key to America's interests jumpstarted a new wave of investments in the 1960s, resulting in a tripling of Ph.D.s in science and engineering during that decade. Therefore, for nearly a quarter of a century those calling for change asked universities to expand offerings and capacity in areas of national need, especially in scientific fields.5

By the mid-1970s, a tale of two realities had emerged. The demand for students pursuing doctoral degrees in the sciences and engineering continued unabated. At the same time, the number of students earning doctoral degrees in the humanities and social sciences started a decade-long drop, often encouraged by professional associations worried by gloomy job prospects and by life decisions based on reactions to the Vietnam War (for a period graduate school provided a deferment from military service). Thus, a presumed crisis for doctorates in the humanities and humanistic social sciences was appearing as early as the 1970s. Nonetheless, the overall number of doctoral recipients quadrupled between 1960 and 1990.6

By the 1990s a kind of convergence of perspectives emerged. Rapid change in technologies, broad geopolitical factors, and intense competition for the best minds led scientific organizations and bodies to call for the dramatic overhaul of doctoral education in science and engineering. For the first time, we questioned whether we had overproduced Ph.D.s in certain scientific fields. Meanwhile, worry about lengthening times to degree, incomplete information on completion rates, and less-than-desirable job outcomes led to plans to reform practices in the humanities, the arts, and the social sciences. A number of these reform efforts have implications for the present NRC study and should be briefly highlighted.

The most significant statement in the area of science and engineering policy came from the Committee on Science, Engineering, and Public Policy (COSEPUP), formed by the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. Cognizant of the career options that students follow (more than half in nonuniversity settings), the COSEPUP report, Reshaping the Graduate Education of Scientists and Engineers (1995), called for graduate programs to offer more versatile training, recognizing that only a fraction of doctoral recipients become faculty members. The committee encouraged training programs to emphasize more and better mentoring relationships. The report called for programs to continue emphasizing quality in the educational experience, monitor time to degree, attract a more diverse domestic pool of students, and make expectations as transparent as possible.

The COSEPUP report took on the additional task of segmenting the graduate pathways. It acknowledged that some students would stop after a master's degree, others would complete a doctorate, and others would complete a doctorate and have significant research careers. The committee suggested different graduate expectations and outcomes for students, depending upon the pathway chosen. To assist this endeavor the committee called for the systematic collection of pertinent data and the establishment of a national policy conversation that included representatives from relevant sectors of society (industry, the academy, government, and research units, among others). The committee signaled the need to pay attention to the plight of postdoctoral fellows, employment opportunities in a variety of fields, and the importance of attracting talented international students.7

Three years later the Pew Charitable Trusts funded the first of three examinations of graduate education. Re-envisioning the Ph.D., a project headed by Professor Jody Nyquist and housed at the University of Washington, began by canvassing stakeholders: students, faculty, employers, funders, and higher education associations. More than 300 were interviewed, five focus groups were created, e-mail surveys went to six samples, and a mail survey was distributed. Nyquist and her team brought together representatives of this group for a two-day conference in 2000. Since that meeting the project has continued as an active website for the sharing of best practices.

The project began with the question, "How can we re-envision the Ph.D. to meet the societal needs of the 21st century?" It found that representatives from different sectors had different emphases. On the whole, however, there was the sense that, while the American-style Ph.D. has great value, attention is needed in several areas. First, time to degree must be shortened. For scientists this means incorporating years as a postdoctoral fellow into an assessment of time to degree.8 Second, the pool of students seeking doctorates needs to be more diverse, especially through the inclusion of more students of color. Third, doctoral students need greater exposure to information technology during their careers. Fourth, students must have a more varied and flexible curriculum. Fifth, interdisciplinary research should be emphasized. And sixth, the graduate curriculum should include a broader sense of the global economy and the environment.

The project and call for reforms built on Woodrow Wilson National Fellowship Foundation President Robert Weisbuch's assessment that "when it comes to doctoral education, nobody is in charge, and that may be the secret of its success. But laissez-faire is less than fair to students and to the social realms that graduate education can benefit." The project concluded with the recommendation that a more self-directed process take place. Or in the words of Weisbuch, "Re-envisioning isn't about tearing down the successfully loose structure but about making it stronger, more particularly asking it to see and understand itself."9

The Pew Charitable Trusts also sponsored research that assessed students as well as their concerns and views of doctoral education as another way of spotlighting the need to reform doctoral education. Chris Golde and Timothy Dore surveyed doctoral students in 11 fields at 27 universities, with a response rate of 42.5 percent, yielding nearly 4,200 respondents. The Golde and Dore study (2001), At Cross Purposes, concluded that "the training doctoral students receive is not what they want, nor does it prepare them for the jobs they take." They also found that "many students do not clearly understand what doctoral study entails, how the process works and how to navigate it effectively."10

A Web-based survey conducted by the National Association of Graduate and Professional Students (NAGPS) produced similar findings.11 Students expressed tremendous satisfaction with individual mentoring, but some pointed to a mismatch between their graduate school education and the jobs they took after completing their dissertation. Responses, of course, varied from field to field. Most notably, students called for more transparency about the process of earning a doctorate, more focus on individual student assessments, and greater help for students who sought nontraditional jobs. Both the Golde and Dore study and the NAGPS survey asked various constituent groups to reassess their approaches in training doctoral students.

Pew concluded its interest in the reform of the research doctorate with support to the Woodrow Wilson National Fellowship Foundation. The Foundation was asked to provide a summary of reforms recommended to date and offer an assessment of what does and could work. The Woodrow Wilson Foundation extended this initial mandate in two significant ways. First, it worked with 14 universities in launching the Responsive Ph.D. project.12 All 14 institutions agreed to explore best practices in graduate education. To frame the project, participating schools agreed to look at partnerships between graduate schools and other sectors, to diversify the pool of students enrolled in doctoral education, to examine the paradigms for doctoral training, and to revise practices wherever appropriate. Specifically, the project highlighted professional development and pedagogical training as new key practices. The architects of the effort believed that improved professional development would better match student interests and their opportunities. They sensed an inattentiveness to pedagogical training in many programs and believed more attention here would benefit all students. Concerned with the insularity or narrowing decried by many interviewed by the Re-envisioning the Ph.D. project, the Responsive Ph.D. project invited participants concerned with new paradigms to address matters of interdisciplinarity and public engagement. They were encouraged to hire new people to help remedy the relative underrepresentation of students of color in most fields besides education. The project wanted to underscore the problem and encourage imaginative, replicable experiments to improve the recruitment, retention, and graduation of domestic minorities. Graduate programs were encouraged to work more closely with representatives of the K-12 sectors, community colleges, four-year institutions other than research universities, foundations, governmental agencies, and others who hire doctoral students.13

Second, the Responsive Ph.D. project advertised the success of various projects through publications and a call for a fuller assessment of what works and what does not. Former Council of Graduate Schools (CGS) President Jules LaPidus observed, "Universities exist in a fine balance between being responsive to 'the needs of the time' and being responsible for preserving some vision of learning that transcends time."14 To find that proper balance the project proposed national studies and projects.

By contrast, the Carnegie Initiative, building on the same body of evidence that fueled the directions championed by the Responsive Ph.D. project, centered the possibilities for reform in departments. After a couple of years of review, the initiative settled on a multiyear project at a select number of universities in a select number of disciplines. Project heads Lee Shulman, George Walker, and Chris Golde argue that cultural change, so critical to reform, occurs in most research universities in departments. Through a competitive process, departments in chemistry, mathematics, English, and education were selected. Departments of history and neurosciences will be selected to participate in both research and action projects.

Focused attempts to expand the professoriate and enrich the doctoral experience, by exposing more doctoral students to teaching opportunities beyond their own campuses, have paralleled these two projects. Guided by leadership at the CGS and the Association of American Colleges and Universities (AAC&U), the Preparing Future Faculty initiative involved hundreds of students and several dozen schools. The program assumed that "for too many individuals, developing the capacity for teaching and learning about fundamental professional concepts and principles remain accidental occurrences. We can and should do a better job of building the faculty the nation's colleges and universities need."15 In light of recent surveys and studies, the Preparing Future Faculty program is quickly becoming the Preparing Future Professionals program, modeled on programs started at Arizona State University, Virginia Tech, University of Texas, and other universities.

Mention should also be made of the Graduate Education Initiative funded by the Andrew W. Mellon Foundation. Between 1990 and 2000, this program gave "approximately $80 million to assist students in 52 departments at 10 leading research universities. These departments were encouraged to review their curricula, examinations, advising, official timetables, and dissertation requirements to facilitate timely degree completion and to reduce attrition, while maintaining or increasing the quality of doctoral training they provided."16 Although this project will be carefully evaluated, the evaluation has yet to be completed since some of the students have yet to graduate.

5Duderstadt (2000); Golde (July 2001 draft).
6Duderstadt (2000:91); Bowen and Rudenstine (1992:1-12, 20-55).
7Committee on Science, Engineering, and Public Policy (1995).
8A study by Joseph Cerny and Maresi Nerad replaced time to degree with time to first tenure and found remarkable overlap between science and non-science graduates of UC Berkeley 10 years after completion of the doctorate.
9Nyquist and Woodford (2000:3).
10Golde and Dore (2001:9).
11The National Association of Graduate and Professional Students (2000).
12The 14 participating universities were: University of Colorado, Boulder; University of California, Irvine; University of Michigan; University of Pennsylvania; University of Washington; University of Wisconsin, Madison; University of Texas, Austin; Arizona State University; Duke University; Howard University; Indiana University; Princeton University; Washington University, St. Louis; and Yale University.
13See http://www.woodrow.org/responsivephd/initiative.html.
14LaPidus (2000).
15Gaff et al. (2000:x).
16Zuckerman and Meisel (2000).

ASSESSMENT OF DOCTORAL PROGRAMS AND ITS RELATION TO CALLS FOR REFORM

The calls for reform in doctoral education, although confirmed by testimony, surveys of graduate deans, and student surveys, do not have a strong underpinning in systematic data collection. With the exception of a study by Golde and Dore, which covered 4,000 students in a limited number of fields and institutions, and another by Cerny and Nerad, who investigated outcomes in 5 fields and 71 institutions, there has been little study at the national level of what doctoral programs provide for their students or of what outcomes they experience after graduation. National data gathering, which must, of necessity, be conducted as part of an assessment of doctoral programs, provides an opportunity for just such an investigation.

To date, the calls for reform agree that doctoral education in the United States remains robust, that it is valued at home and abroad, but that it must change if we are to remain an international leader. There is no commonly held view of what should and can be reformed. At the moment there is a variety of both research and action projects. Where agreement exists it centers on the need for versatile doctoral programs; on a greater sense of what students expect, receive, and value; on emphasizing the need to know, publicize, and control time to degree and degree completion rates; as well as on the conclusion that a student's assessment of a program should play a role in the evaluation of that program. This conclusion points to the possibility that a national assessment of doctoral education can contribute to an understanding of practices and outcomes that goes well beyond the attempts to assess the effectiveness of doctoral education undertaken in past NRC studies. The exploration of this possibility provided a major challenge to this Committee and presented the promise that, given a solid methodology, the next study could provide an empirical basis for the understanding of reforms in doctoral education.

PLAN OF THE REPORT

The previous sections present a picture of the broader context in which the Committee to Examine the Methodology of Assessing Research-Doctorate Programs approached its work. The rest of the report describes how the Committee went about its task and what conclusions it reached concerning fields to be included in the next study, quantitative measures of the correlates of quality, measures of student educational processes and outcomes, the measurement of scholarly reputation and how to present data about it, and the general conclusion about whether a new study should be undertaken.
