The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




X  Summary and Discussion

Results of the assessment of 639 research-doctorate programs in anthropology, economics, geography, history, political science, psychology, and sociology are presented in the preceding seven chapters. Included in each chapter are summary data describing the means and intercorrelations of the program measures for each discipline. In this chapter a comparison is made of the summary data reported in the seven disciplines. Also presented here are an analysis of the reliability (consistency) of the reputational survey ratings and an examination of some factors that might possibly have influenced the survey results. The chapter concludes with suggestions for improving studies of this kind--with particular attention given to the types of measures one would like to have available for an assessment of research-doctorate programs.

This chapter necessarily involves a detailed discussion of various statistics (means, standard deviations, correlation coefficients) describing the measures. Throughout, the reader should bear in mind that all these statistics and measures are necessarily imperfect attempts to describe the real quality of research-doctorate programs. Quality and some differences in quality are real, but these differences cannot be subsumed completely under any one quantitative measure. For example, no single numerical ranking--by measure 08 or by any weighted average of measures--can rank the quality of different programs with precision. However, the evidence for reliability indicates considerable stability in the assessment of quality. For instance, a program that comes out in the first decile of a ranking is quite unlikely to "really" belong in the third decile, or vice versa.
If numerical ranks of programs were replaced by groupings (distinguished, strong, etc.), these groupings again would not fully capture actual differences in quality, since there would likely be substantial ambiguity about the borderline between adjacent groups. Furthermore, any attempt at linear ordering (best, next best, . . .) may also be inaccurate. Programs of roughly comparable quality may be better in different ways, so that there simply is no one best--as will also be indicated in some of the numerical analyses. However, these difficulties of formulating ranks should not hide the underlying reality of differences in quality or the importance of high quality for effective doctoral education.
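The decile comparison above can be made concrete. A minimal sketch (with an illustrative vector of mean ratings, not the study's data) assigns each program to a decile by rank:

```python
def decile(scores):
    """Assign each score to a decile (1 = top 10 percent),
    ranking the highest score first."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])  # indices, best first
    deciles = [0] * n
    for rank, i in enumerate(order):
        deciles[i] = rank * 10 // n + 1
    return deciles

# illustrative mean ratings for ten hypothetical programs
ratings = [4.8, 4.1, 3.9, 3.5, 3.2, 2.9, 2.6, 2.2, 1.8, 1.1]
print(decile(ratings))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

The reliability claim in the text is that a program's decile under repeated or resampled ratings rarely shifts by more than one position, not that its exact numerical rank is stable.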

SUMMARY OF THE RESULTS

Displayed in Table 10.1 are the numbers of programs evaluated (bottom line) and the mean values for each measure in the seven social and behavioral science disciplines. As can be seen, the mean values reported for individual measures vary considerably among disciplines. The pattern of means on each measure is summarized below, but the reader interested in a detailed comparison of the distribution of a measure may wish to refer to the second table in each of the seven preceding chapters.2

Program Size (Measures 01-03). Based on the information provided to the committee by the study coordinator at each university, psychology programs had, on the average, the largest number of faculty members (29 in December 1980), followed by history (28). Psychology programs graduated the most students (71 Ph.D. recipients in the FY1975-79 period) and had the largest enrollment (102 doctoral students in December 1980). In contrast, geography programs were reported to have an average of only 13 faculty members, 16 graduates, and 22 doctoral students.

Program Graduates (Measures 04-07). The mean fraction of FY1975-79 doctoral recipients who as graduate students had received some national fellowship or training grant support (measure 04) ranges from .21 for graduates of economics programs to .48 for graduates in anthropology. With respect to the median number of years from first enrollment in a graduate program to receipt of the doctorate (measure 05), psychology graduates typically earned their degrees more than a year sooner than graduates in any other discipline. Graduates in geography and history reported the longest median times to the Ph.D. In terms of employment status at graduation (measure 06), an average of 78 percent of the Ph.D. recipients from economics programs reported that they had made firm job commitments by the time they had completed requirements for their degree, contrasted with 56 percent of the program graduates in history.
A mean of 33 percent of the sociology graduates indicated that they had made firm commitments to take positions in Ph.D.-granting institutions (measure 07), while only 16 percent of those in history had made such plans. The low averages in history for measures 06 and 07 may be due, in part, to an apparent shortage of faculty openings in this discipline in recent years.

Survey Results (Measures 08-11). Differences in the mean ratings derived from the reputational survey are not large. The mean rating of scholarly quality of program faculty (measure 08) ranges from 2.3 in

1As noted in Chapter II, for programs in history, data are not presented for measures 13 and 14; in anthropology and geography, data are not available for measure 14.
2The second table in each of the seven preceding chapters presents the standard deviation and decile values for each measure.

TABLE 10.1  Mean Values for Each Program Measure, by Discipline

                    Anthro-  Eco-    Geog-   His-   Political  Psych-  Soci-
                    pology   nomics  raphy   tory   Science    ology   ology

Program Size
  01                  17       23      13      28      23        29      21
  02                  28       41      16      38      35        71      33
  03                  51       68      22      51      50       102      49
Program Graduates
  04                 .48      .21     .26     .26     .28       .39     .38
  05                 8.3      7.3     8.7     9.2     8.3       6.2     8.2
  06                 .60      .78     .72     .56     .68       .69     .75
  07                 .28      .26     .28     .16     .26       .24     .33
Survey Results
  08                 2.8      2.3     2.8     2.6     2.6       2.5     2.5
  09                 1.6      1.3     1.6     1.6     1.5       1.6     1.5
  10                 1.0      1.1     1.0     1.1     1.1       1.1     1.0
  11                 1.1       .9     1.2      .9     1.0        .7     1.0
University Library
  12                  .4       .1      .4      .2      .2        .1      .2
Research Support
  13                 .22      .11     .14     NA      .06       .21     .12
  14                 NA       832     NA      NA      520      1003     790
Publication Records
  17                  30       52      17      43      43        81      52
  18                 .61      .63     .51     .58     .63       .66     .66

Total Programs        70       93      49     102      83       150      92

economics to 2.8 in anthropology and geography, and programs were judged to be, on the average, between "reasonably" (2.0) and "minimally" (1.0) effective in educating research scholars/scientists (measure 09). In the opinions of the survey respondents, there has been "little or no change" (approximately 1.0 on measure 10) in the last five years in the overall average quality of programs. The mean rating of an evaluator's familiarity with the work of program faculty (measure 11) is close to 1.0 ("some familiarity") in every discipline except psychology (0.7)--about which more will be said later in this chapter.

The reader should be reminded that the distribution of ratings may vary from one discipline to another. If one examines, for example, the top program ratings recorded for measure 08 in each discipline, one finds noticeably higher top ratings in economics (five programs with ratings above 4.7) and history (three programs with ratings above 4.7) than in either anthropology or geography (no programs with ratings above 4.6). The study committee does not have an explanation of this observation but wishes to emphasize that many differences may be found in the distributions of survey ratings in the various disciplines and that the determinants of these differences are not known. As discussed in Chapter II, the survey ratings reflect each program's standing relative to other programs in the same discipline and provide no basis for making comparisons across disciplines.

University Library (Measure 12). Measure 12, based on a composite index3 of the sizes of the library in the university in which a program resides, is calculated on a scale from -2.0 to 3.0, with means ranging from .1 in economics and psychology to .4 in anthropology and geography. These differences may be explained, in large part, by the number of programs evaluated in each discipline.
In the disciplines with the fewest doctoral programs (anthropology and geography), the programs included are typically found in the larger institutions, which are likely to have high scores on the library size index. Ph.D. programs in economics and psychology are found in a much broader spectrum of universities that includes the smaller institutions as well as the larger ones.

Research Support (Measures 13-14). Measure 13, the proportion of program faculty who had received ADAMHA, NIH, or NSF4 research grant awards during the FY1978-80 period, has mean values ranging from .22 and .21 in anthropology and psychology, respectively, to .06 in political science. It should be emphasized that this measure does not take

3The index, derived by the Association of Research Libraries, reflects a number of different measures, including number of volumes, fiscal expenditures, and other factors relevant to the size of a university library. See the description of this measure presented in Appendix D.
4Alcohol, Drug Abuse, and Mental Health Administration; National Institutes of Health; and National Science Foundation.
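The exact formula for the ARL index appears in Appendix D; the general form of such a composite--an average of standardized component scores--can be sketched as follows. The component names and values here are hypothetical illustrations, not ARL's actual inputs:

```python
from statistics import mean, stdev

def composite_index(components):
    """Average of z-scores across components.  `components` maps a
    component name to a list of raw values, one per library."""
    names = list(components)
    n = len(components[names[0]])
    z = {}
    for name in names:
        vals = components[name]
        m, s = mean(vals), stdev(vals)
        z[name] = [(v - m) / s for v in vals]
    # composite score for each library = mean of its z-scores
    return [mean(z[name][i] for name in names) for i in range(n)]

# hypothetical component data for three libraries
raw = {
    "volumes_millions":      [2.0, 5.0, 8.0],
    "expenditures_millions": [4.0, 6.0, 14.0],
}
idx = composite_index(raw)  # centered near 0; larger = bigger library
```

Standardizing each component before averaging keeps any one component (e.g., dollar expenditures, which have a much larger raw scale than volume counts) from dominating the composite, which is consistent with the bounded -2.0 to 3.0 scale reported for measure 12.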

into account research support that faculty members have received from sources other than these three federal agencies. In terms of total university expenditures for R&D in a particular discipline (measure 14), the mean values are reported to range from $520,000 in political science to $1,003,000 in psychology. (As noted earlier, data are available for programs in only four of the seven disciplines.) The large differences in reported expenditures are likely to be related to three factors: the differential availability of research support in each of the disciplines, the differential average cost of doing research, and the differing numbers of individuals involved in the research effort.

Publication Records (Measures 17 and 18). Considerable diversity is found in the mean number of articles by program faculty (measure 17).5 An average of 81 articles published in the 1978-80 period have been attributed to program faculty members in psychology, contrasted with 17 articles by geography program faculty. These large differences reflect both the average faculty size in a particular discipline and the frequency with which scientists in that discipline publish; they may also depend on the length of a typical paper in a discipline. With respect to measure 18, the fraction of faculty who had published at least one article during this three-year period, the differences among the means in the seven disciplines are much smaller. The largest fractions are found in psychology and sociology (.66) and the smallest in geography (.51).

CORRELATIONS AMONG MEASURES

Relations among the program measures are of intrinsic interest and are relevant to the issue of the validity of the measures as indices of the quality of a research-doctorate program. Measures that are logically related to program quality are expected to be related to each other. To the extent that they are, a stronger case might be made for the validity of each as a quality measure.
A reasonable index of the relationship between any two measures is the Pearson product-moment correlation coefficient. A table of correlation coefficients for all possible pairs of measures is presented in each of the seven preceding chapters. This chapter presents selected correlations to determine the extent to which coefficients are comparable in the seven disciplines. Special attention is given to the correlations involving the number of FY1975-79 program graduates (measure 02), the survey rating of the scholarly quality of program faculty (measure 08), university R&D expenditures in a particular discipline (measure 14), and the total number of faculty articles (measure 17). These four measures have been selected because of their relatively high interest and of their

5See Appendix J for two alternative measures of publication records that have been compiled for programs in psychology.

correlations with several other measures. Readers interested in correlations other than those presented in Tables 10.2-10.5 may refer to the third table in each of the preceding chapters.

Correlations with Measure 02. Table 10.2 presents the correlations of measure 02 with each of the other measures used in the assessment. As might be expected, correlations of this measure with the other two measures of program size--number of faculty and doctoral student enrollment--are reasonably high in all seven disciplines. Of greater interest are the strong positive correlations between measure 02 and measures derived from either reputational survey ratings or publication records. The coefficients describing the relationship of measure 02 with measure 17 are greater than .60 in anthropology, economics, history, and sociology and approximately .50 in the other three disciplines. The correlations with measure 18, the fraction of faculty with one or more articles published during the 1978-80 period, are much smaller. This result is not surprising, of course, since measure 17 reflects the total number of articles by program faculty, while measure 18 reflects the fraction of faculty members who publish (and is not size dependent). The correlations of measure 02 with measures 08, 09, and 11 are also moderately high--.56 or greater in all disciplines except psychology. It is quite apparent that the programs that received high survey ratings and with which evaluators were more likely to be familiar were also ones that had larger numbers of graduates. The weaker relationship in psychology may be explained, in part, by the fact that some of the programs have produced large numbers of graduates in clinical areas of psychology and may not have distinguished reputations in research.
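The Pearson product-moment coefficient reported throughout these tables can be computed directly from two lists of program scores. A minimal sketch, with illustrative data rather than the study's measures:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# perfectly linearly related measures give r = 1.0
print(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```

Because r captures only the strength of a linear relationship, two measures can have a modest r yet still be related in a nonlinear way, which is one reason the chapter cautions against reading any single coefficient as definitive.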
Although the committee gave serious consideration to presenting an alternative set of survey measures that were adjusted for program size, a satisfactory algorithm for making such an adjustment was not found. In attempting such an adjustment on the basis of the regression of survey ratings on measures of program size, it was found that some exceptionally large programs appeared to be unfairly penalized and that some very small programs received unjustifiably high adjusted scores.

Measure 02 also has positive correlations in most disciplines with measure 12, an index of university library size, and with measures 13 and 14, which pertain to the level of support for research in a program. Of particular note are the moderately large coefficients in economics for all three of these measures. The correlations of measure 02 with measures 04, 05, 06, and 07 are smaller but still positive in most of the disciplines. From this analysis it is apparent that the number of program graduates tends to be positively correlated with all of the other 15 variables and that the relationship of measure 02 with the other variables tends to be weakest for programs in psychology.

Correlations with Measure 08. Table 10.3 shows the correlation coefficients for measure 08, the mean rating of the scholarly quality of program faculty, with each of the other variables. The correlations of measure 08 with measures of program size (01, 02, and 03) are .40 or greater for all disciplines except psychology. Not surprisingly, the

TABLE 10.2  Correlations of the Number of Program Graduates (Measure 02) with Other Measures, by Discipline

                    Anthro-  Eco-    Geog-   His-   Political  Psych-  Soci-
                    pology   nomics  raphy   tory   Science    ology   ology

Program Size
  01                 .69      .61     .48     .77     .56       .65     .55
  03                 .68      .63     .52     .83     .82       .81     .68
Program Graduates
  04                 .23      .36     .09     .34     .23       .10     .31
  05                 .10      .29     .19     .07     .03      -.1?     .18
  06                 .43      .32     .13     .09     .06      -.06     .19
  07                 .35      .33     .30     .43     .20      -.06     .32
Survey Results
  08                 .71      .75     .60     .74     .60       .31     .72
  09                 .68      .74     .68     .72     .56       .23     .73
  10                -.15      .00     .00     .02    -.04      -.04    -.03
  11                 .67      .71     .57     .77     .58       .39     .68
University Library
  12                 .68      .57     .30     .73     .66       .36     .54
Research Support
  13                 .39      .54     .42     N/A     .24      -.04     .46
  14                 N/A      .52     N/A     N/A     .43       .24     .38
Publication Records
  17                 .70      .76     .46     .82     .50       .49     .63
  18                 .24      .37     .22     .35     .18      -.01     .25

NOTE: The second digit of the measure 05 entry for psychology is illegible in the source.
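The size adjustment the committee attempted and rejected--regressing survey ratings on a size measure and treating the residuals as size-adjusted scores--can be sketched as follows. The data here are hypothetical, and this is an illustration of the general technique rather than the committee's exact procedure:

```python
def ols_residuals(x, y):
    """Residuals from a simple least-squares regression of y on x.
    A positive residual means a rating above what size alone predicts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    alpha = my - beta * mx
    return [b - (alpha + beta * a) for a, b in zip(x, y)]

# hypothetical programs: graduates (measure 02) vs. survey rating (measure 08)
grads = [10, 20, 30, 40, 50]
rating = [2.0, 2.6, 3.0, 3.6, 4.0]
adjusted = ols_residuals(grads, rating)  # residuals sum to zero
```

The drawback the committee reports is visible in the construction: because residuals are measured against the fitted line, a very large program with a strong but not extreme rating lands below the line (penalized), while a tiny program with a middling rating can land well above it.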

TABLE 10.3  Correlations of the Survey Ratings of Scholarly Quality of Program Faculty (Measure 08) with Other Measures, by Discipline

                    Anthro-  Eco-    Geog-   His-   Political  Psych-  Soci-
                    pology   nomics  raphy   tory   Science    ology   ology

Program Size
  01                 .83      .61     .46     .69     .63       .57     .62
  02                 .71      .75     .60     .74     .60       .31     .72
  03                 .65      .56     .42     .66     .47       .20     .60
Program Graduates
  04                 .49      .42     .36     .63     .64       .64     .51
  05                 .34      .36     .16     .19     .10       .13     .29
  06                 .40      .31     .36     .05     .30       .24     .15
  07                 .50      .48     .51     .54     .52       .74     .47
Survey Results
  09                 .96      .98     .98     .98     .98       .97     .98
  10                 .21      .35     .19     .24     .13       .05     .33
  11                 .95      .97     .94     .97     .98       .97     .97
University Library
  12                 .64      .67     .52     .71     .74       .73     .75
Research Support
  13                 .46      .76     .52     N/A     .40       .75     .63
  14                 N/A      .44     N/A     N/A     .43       .49     .30
Publication Records
  17                 .75      .78     .78     .79     .71       .74     .80
  18                 .26      .47     .59     .53     .44       .57     .49

larger the program, the more likely its faculty is to be rated high in quality. This relationship is especially strong in anthropology, economics, history, and sociology.

Correlations of measure 08 with measure 04, the fraction of students with national fellowship awards, are greater than .60 in history, political science, and psychology and range between .36 and .51 in the other four disciplines. In contrast, for programs in the physical sciences and engineering, the corresponding coefficients (reported in earlier volumes) are considerably smaller. The correlation of rated faculty quality with measure 05, the shortness of time from matriculation in graduate school to award of the doctorate, is positive but small in each of the social and behavioral science disciplines. Correlations of ratings of faculty quality with measure 06, the fraction of program graduates with definite employment plans, are also small but positive in most of the disciplines. In every discipline the correlation of measure 08 is higher with measure 07, the fraction of graduates having agreed to employment at a Ph.D.-granting institution. These coefficients are greater than .70 in psychology and range between .47 and .54 in the other six disciplines.

The correlations of measure 08 with measure 09, the rated effectiveness of doctoral education, are uniformly very high, at or above .96 in every discipline. This finding is consistent with results from the Cartter and Roose-Andersen studies.6 The coefficients describing the relationship between measure 08 and measure 11, familiarity with the work of program faculty, are also very high, ranging from .94 to .98. In general, evaluators were more likely to have high regard for the quality of faculty in those programs with which they were most familiar. That the correlation coefficients are as large as observed may simply reflect the fact that "known" programs tend to be those that have earned strong reputations.
Correlations of ratings of faculty quality with measure 10, the ratings of perceived improvement in program quality, are below .25 in all disciplines except economics and sociology. One might have expected that a program judged to have improved in quality would have been somewhat more likely to receive high ratings on measure 08 than would a program judged to have declined--thereby imposing a small positive correlation between these two variables.

Correlations ranging from .52 to .75 are observed between measure 08 and measure 12 (university library size). Moderate to high correlations also are found between measure 08 and support for research (measures 13 and 14) and publication records (measures 17 and 18). Of particular note are the strong correlations with measure 17, the total number of published articles by program faculty--ranging from .71 to .80.7 In all disciplines the correlations with measure 17 are appre-

6Roose and Andersen, p. 19.
7See Appendix J for the correlations of measure 08 with measures 15 and 16 (alternative measures of publication records) in psychology. These coefficients are nearly as high as those found between measures 08 and 17.

ciably higher than those with measure 18, the fraction of faculty with one or more articles published during the 1978-80 period.

Correlations with Measure 14. Correlations of measure 14, reported dollars of support for research and development, with other measures are shown in Table 10.4. (Data on research expenditures in anthropology, geography, and history are not available.) The pattern of relations is quite similar for programs in economics, political science, psychology, and sociology: moderately high correlations with both measures of program size and reputational survey results (except measure 10) and somewhat higher correlations with measure 17, the total number of faculty articles. In interpreting these relationships one must keep in mind the fact that the research expenditure data have not been adjusted for the number of faculty and other staff members involved in research in a program. The correlation with measure 13, which has been adjusted for faculty size, ranges from .28 to .38.

Correlations with Measure 17. Measure 17 is the number of published articles by program faculty during the 1978-80 period. The correlations of this measure with all others appear in Table 10.5. Of particular interest are the high correlations with the reputational survey results (excluding measure 10). Most of those coefficients exceed .70. Measure 17 is positively related to the measures of program size (01, 02, and 03); moderately high correlations are also observed between measure 17 and measures 12 and 14. Also of note are the correlations with measure 07, the fraction of graduates with commitments to take positions in Ph.D.-granting universities. These coefficients range from .34 (in anthropology) to .47 (in sociology).
For psychology programs, data have also been compiled on two alternative measures of publication records--measure 15, the total number of 1978-79 articles attributed to faculty and other program staff, and measure 16, the estimated "overall influence" of these articles. The relationship of these two measures with each of the other measures used in the evaluation of psychology programs is reported in Appendix J. Of particular interest is the correlation of measure 15 with measure 17, since these measures were derived from different sources (see Appendix J) and represent independent estimates of total publication productivity for a program. The coefficient describing the relation of these two measures is as high as .78.

Despite the appreciable correlations between reputational ratings of quality and program size measures, the functional relations between the two probably are complex. If there is a minimum size for a high-quality program, this size is likely to vary from discipline to discipline. Increases in size beyond the minimum may represent more high-quality faculty, or a greater proportion of inactive faculty, or a faculty with heavy teaching responsibilities. In attempting to select among these alternative interpretations, a single correlation coefficient provides insufficient guidance. Nonetheless, certain similarities across disciplines may be seen in the correlations among the measures. High correlations consistently appear among measures 08, 09, and 11 from the reputational survey, and these measures also are prom-

TABLE 10.4  Correlations of the University Research Expenditures in a Discipline (Measure 14) with Other Measures, by Discipline

                    Anthro-  Eco-    Geog-   His-   Political  Psych-  Soci-
                    pology   nomics  raphy   tory   Science    ology   ology

Program Size
  01                 N/A      .49     N/A     N/A     .55       .35     .37
  02                 N/A      .52     N/A     N/A     .43       .24     .38
  03                 N/A      .45     N/A     N/A     .27       .11     .30
Program Graduates
  04                 N/A      .11     N/A     N/A     .32       .29     .31
  05                 N/A      .25     N/A     N/A    -.12       .05     .10
  06                 N/A      .18     N/A     N/A     .11       .26     .15
  07                 N/A      .27     N/A     N/A     .20       .31     .22
Survey Results
  08                 N/A      .44     N/A     N/A     .43       .49     .30
  09                 N/A      .44     N/A     N/A     .39       .53     .37
  10                 N/A      .05     N/A     N/A     .06      -.03    -.18
  11                 N/A      .38     N/A     N/A     .41       .47     .29
University Library
  12                 N/A      .41     N/A     N/A     .40       .45     .25
Research Support
  13                 N/A      .29     N/A     N/A     .35       .28     .38
Publication Records
  17                 N/A      .54     N/A     N/A     .59       .53     .45
  18                 N/A      .31     N/A     N/A     .04       .29     .11

programs in a limited set of disciplines. In the Roose-Andersen study, programs in the same seven social and behavioral science disciplines were rated: anthropology, economics, geography, history, political science, psychology, and sociology. Finally, in the Roose-Andersen study only one set of ratings was compiled from each institution represented in a discipline, whereas in the committee's survey separate ratings were requested if a university offered more than one research-doctorate program in a given discipline. The consequences of these differences in survey coverage are quite apparent: in the committee's survey, evaluations were requested for a total of 639 research-doctorate programs in the social and behavioral sciences, compared with 515 programs in the Roose-Andersen study.

Figures 10.1-10.7 plot the mean ratings of scholarly quality of faculty in programs included in both surveys; sets of ratings are graphed for 38 programs in anthropology, 71 in economics, 31 in geography, 79 in history, 61 in political science, 103 in psychology, and 65 in sociology. Since in the Roose-Andersen study programs were identified by institution and discipline (but not by department), the matching of results from this survey with those from the committee's survey is not precise. For universities represented in the latter survey by more than one program in a particular discipline, the mean rating for the program with the largest number of graduates (measure 02) is the only one plotted here. Although the results of both surveys are reported on identical scales, some caution must be taken in interpreting differences in the mean ratings a program received in the two evaluations. It is impossible to estimate what effect all of the differences described above may have had on the results of the two surveys.
Furthermore, one must remember that the reported scores are based on the opinions of different groups of faculty members and were provided at different time periods. In 1969, when the Roose-Andersen survey was conducted, graduate departments in most universities were still expanding and not facing the enrollment and budget reductions that many departments have had to deal with in recent years. Consequently, a comparison of the overall findings from the two surveys tells us nothing about how much graduate education has improved (or declined) in the past decade. Nor should the reader place much stock in any small differences in the mean ratings that a particular program may have received in the two surveys. On the other hand, it is of particular interest to note the high correlations between the results of the evaluations. For programs in anthropology, economics, history, political science, and psychology, the correlation coefficients range between .90 and .94; in geography and sociology the coefficients are .79 and .86, respectively. The extraordinarily high correlations found in five of the seven disciplines may suggest to some readers that reputational standings of programs in these disciplines have changed very little in the last decade. However, differences are apparent for some institutions. Also, one must keep in mind that the correlations are based on the reputational ratings of only 70 percent of the programs evaluated in this assessment in these disciplines and do not take into account the emergence of many new programs that did not exist or were too small to be rated in the Roose-Andersen study.
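The matching-and-correlation procedure described above--pairing each program's committee rating with its Roose-Andersen rating by institution and discipline, then correlating the paired values--can be sketched as follows. The ratings and institution names are hypothetical:

```python
def match_and_correlate(survey_a, survey_b):
    """Pair programs present in both surveys by (institution, discipline)
    key, then return (Pearson r, number of matched programs)."""
    common = sorted(set(survey_a) & set(survey_b))
    x = [survey_a[k] for k in common]
    y = [survey_b[k] for k in common]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5, n

# hypothetical mean ratings keyed by (institution, discipline)
roose_andersen = {("U1", "econ"): 2.0, ("U2", "econ"): 3.0,
                  ("U3", "econ"): 4.0, ("U4", "econ"): 1.5}
committee = {("U1", "econ"): 2.2, ("U2", "econ"): 3.1,
             ("U3", "econ"): 4.3, ("U5", "econ"): 2.0}
r, n = match_and_correlate(roose_andersen, committee)  # n == 3 matched
```

Programs appearing in only one survey drop out of the intersection, which mirrors the text's caveat that the reported correlations cover only the roughly 70 percent of programs rated in both studies.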

[Scatter plot: Measure 08 (vertical axis) versus Roose-Andersen Rating (1970) (horizontal axis); r = .90]

FIGURE 10.1 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--38 programs in anthropology.

[Scatter plot: Measure 08 (vertical axis) versus Roose-Andersen Rating (1970) (horizontal axis); r = .94]

FIGURE 10.2 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--71 programs in economics.

[Scatter plot: Measure 08 (vertical axis) versus Roose-Andersen Rating (1970) (horizontal axis); r = .79, as reported in the text]

FIGURE 10.3 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--31 programs in geography.

[Scatter plot: Measure 08 (vertical axis) versus Roose-Andersen Rating (1970) (horizontal axis)]

FIGURE 10.4 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--79 programs in history.

[Scatter plot: Measure 08 (vertical axis) versus Roose-Andersen Rating (1970) (horizontal axis); r = .93]

FIGURE 10.5 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--61 programs in political science.

[Scatter plot: Measure 08 (vertical axis) versus Roose-Andersen Rating (1970) (horizontal axis); r = .93]

FIGURE 10.6 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--103 programs in psychology.

[Scatter plot: Measure 08 (vertical axis) versus Roose-Andersen Rating (1970) (horizontal axis); r = .86]

FIGURE 10.7 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--65 programs in sociology.

FUTURE STUDIES

One of the most important objectives in undertaking this assessment was to test new measures not used extensively in past evaluations of graduate programs. Although the committee believes that it has been successful in this effort, much more needs to be done. First and foremost, studies of this kind should be extended to cover other types of programs and other disciplines not included in this effort. As a consequence of budget limitations, the committee had to restrict its study to 32 disciplines, selected on the basis of the number of doctorates awarded in each. A multidimensional assessment of research-doctorate programs in many important disciplines not included among these 32 should be of great value to the academic community. Consideration should also be given to embarking on evaluations of programs offering other types of graduate and professional degrees. As a matter of fact, plans for including master's-degree programs in this assessment were originally contemplated, but because of a lack of available information about the resources and graduates of programs at the master's level, it was decided to focus on programs leading to the research doctorate.

Perhaps the most debated issue the committee had to address concerned which measures should be reported in this assessment. In fact, there is still disagreement among some of its members about the relative merits of certain measures, and the committee fully recognizes a need for more reliable and valid indices of the quality of graduate programs. First on the list of needs is more precise and meaningful information about the product of research-doctorate programs--the graduates. For example, what fraction of the program graduates have gone on to be productive investigators--either in the academic setting or in government and industrial laboratories?
What fraction have gone on to become outstanding investigators--as measured by receipt of major prizes, membership in academies, and other such distinctions? How do program graduates compare with regard to their publication records? Also desired might be measures of the quality of the students applying for admittance to a graduate program (e.g., Graduate Record Examination scores, undergraduate grade point averages). If reliable data of this sort were made available, they might provide a useful index of the reputational standings of programs from the perspective of graduate students.

A number of alternative measures relevant to the quality of program faculty were considered by the committee but not included in the assessment because of the associated difficulties and costs of compiling the necessary data. For example, what fraction of the program faculty were invited to present papers at national meetings? What fraction had been elected to prestigious organizations in their field? What fraction had received senior fellowships and other awards of distinction? In addition, it would be highly desirable to supplement the data presented on NSF, NIH, and ADAMHA research grant awards (measure 13) with data on awards from other federal agencies as well as from major private foundations.

As described in the preceding pages, the committee was able to make several changes in the survey design and procedures, but further improvements could be made. Of highest priority in this regard is the expansion of the survey sample to include evaluators from outside the academic setting. To add evaluators from nonacademic sectors would require a major effort in identifying the survey population from which a sample could be selected. Although such an effort is likely to involve considerable costs in both time and financial resources, the committee believes that the addition of evaluators from nonacademic settings would be of value in providing a different perspective to the reputational assessment and that comparisons between the ratings supplied by academic and nonacademic evaluators would be of particular interest.
