
IX. Summary and Discussion

In the six preceding chapters results are presented of the assessment of 596 research-doctorate programs in chemistry, computer sciences, geosciences, mathematics, physics, and statistics/biostatistics. Included in each chapter are summary data describing the means and intercorrelations of the program measures in a particular discipline. In this chapter a comparison is made of the summary data reported for the six disciplines. Also presented are an analysis of the reliability (consistency) of the reputational survey ratings and an examination of some factors that might possibly have influenced the survey results. The chapter concludes with suggestions for improving studies of this kind--with particular attention given to measures one would like to have available for an assessment of research-doctorate programs.

This chapter necessarily involves a detailed discussion of various statistics (means, standard deviations, correlation coefficients) describing the measures. Throughout, the reader should bear in mind that all these statistics and measures are necessarily imperfect attempts to describe the real quality of research-doctorate programs. Quality and some differences in quality are real, but these differences cannot be subsumed completely under any one quantitative measure. For example, no single numerical ranking--by measure 08 or by any weighted average of measures--can rank the quality of different programs with precision. However, the evidence for reliability indicates considerable stability in the assessment of quality. For instance, a program that comes out in the first decile of a ranking is quite unlikely to "really" belong in the third decile, or vice versa. If numerical ranks of programs were replaced by groupings (distinguished, strong, etc.), these groupings again would not fully capture actual differences in quality, since there would likely be substantial ambiguity about the borderline between adjacent groups. Furthermore, any attempt at linear ordering (best, next best, . . .) also may be inaccurate. Programs of roughly comparable quality may be better in different ways, so that there simply is no one best program--as will also be indicated in some of the numerical analyses. However, these difficulties of formulating ranks should not hide the underlying reality of differences in quality or the importance of high quality for effective doctoral education.

SUMMARY OF THE RESULTS

Displayed in Table 9.1 are the numbers of programs evaluated (bottom line) and the mean values for each measure in the six mathematical and physical science disciplines.1 As can be seen, the mean values reported for individual measures vary considerably among disciplines. The pattern of means on each measure is summarized below, but the reader interested in a detailed comparison of the distribution of a measure should refer to the second table in each of the preceding six chapters.2

Program Size (Measures 01-03). Based on the information provided to the committee by the study coordinator at each university, mathematics programs had, on the average, the largest number of faculty members (33 in December 1980), followed by physics (28) and chemistry (23). Chemistry programs graduated the most students (51 Ph.D. recipients in the FY1975-79 period) and had the largest enrollment (75 doctoral students in December 1980). In contrast, statistics and biostatistics programs were reported to have an average of only 12 faculty members, 15 graduates, and 22 doctoral students.

Program Graduates (Measures 04-07). The mean fraction of FY1975-79 doctoral recipients who as graduate students had received some national fellowship or training grant support (measure 04) ranges from .17 for graduates of computer science programs to .32 for graduates in statistics/biostatistics. (The relatively high figure for the latter group may be explained by the availability of National Institutes of Health (NIH) training grant support for students in biostatistics.) With respect to the median number of years from first enrollment in a graduate program to receipt of the doctorate (measure 05), chemistry graduates typically earned their degrees more than half a year sooner than graduates in any of the other disciplines. Graduates in physics and geosciences report the longest median times to the Ph.D. In terms of employment status at graduation (measure 06), an average of 80 percent of the Ph.D. recipients from computer science programs reported that they had made firm job commitments by the time they had completed the requirements for their degrees, contrasted with 61 percent of the program graduates in mathematics. A mean of 43 percent of the statistics/biostatistics graduates reported that they had made firm commitments to take positions in Ph.D.-granting institutions (measure 07), while only 22 percent of those in the geosciences had made such plans.

1 Means for measure 16, "influence" of publications, are omitted since the arbitrary scaling of this measure prevents meaningful comparisons across disciplines.
2 The second table in each of the six preceding chapters presents the standard deviation and decile values for each measure.

TABLE 9.1 Mean Values for Each Program Measure, by Discipline

                         Chem-   Computer  Geo-                      Statistics/
                         istry   Sciences  sciences   Math   Physics  Biostat.

Program Size
  01                       23       16        16        33      28       12
  02                       51       20        19        24      35       15
  03                       75       41        25        35      56       22
Program Graduates
  04                      .23      .17       .26       .25     .26      .32
  05                      5.9      6.5       7.0       6.6     7.1      6.7
  06                      .76      .80       .77       .61     .66      .78
  07                      .33      .38       .22       .25     .26      .43
Survey Results
  08                      2.5      2.5       2.9       2.7     2.7      2.8
  09                      1.6      1.5       1.8       1.6     1.7      1.6
  10                      1.1      1.1       1.1       1.2     1.1      1.1
  11                       .9       .9        .9        .8      .7       .9
University Library
  12                       .1       .4        .4        .1      .1       .5
Research Support
  13                      .48      .36       .47       .32     .36      .25
  14 (thousands of $)    1788     1171      3996       616    2943      N/A
Publication Records
  15                       78       34        44        39     106       12

Total Programs            145       58        91       115     123       64

This difference may be due, to a great extent, to the availability of employment opportunities for geoscientists outside the academic sector.

Survey Results (Measures 08-11). Differences in the mean ratings derived from the reputational survey are small. In all six disciplines the mean rating of scholarly quality of program faculty (measure 08) is slightly below 3.0 ("good"), and programs were judged to be, on the average, a bit below "moderately effective" (2.0) in educating research scholars/scientists (measure 09). In the opinions of the survey respondents, there has been "little or no change" (approximately 1.0 on measure 10) in the last five years in the overall average quality of programs. The mean rating of an evaluator's familiarity with the work of program faculty (measure 11) is close to 1.0 ("some" familiarity) in every discipline--about which more will be said later in this chapter.

University Library (Measure 12). Measure 12, based on a composite index of the size of the library at the university in which a program resides,3 is calculated on a scale from -2.0 to 3.0, with means ranging from .1 in chemistry, mathematics, and physics to .4 in computer sciences and geosciences, and .5 in statistics/biostatistics. These differences may be explained, in large part, by the number of programs evaluated in each discipline. In the disciplines with the fewest doctoral programs (statistics/biostatistics, computer sciences, and geosciences), programs included are typically found in the larger institutions, which are likely to have high scores on the library size index. Ph.D. programs in chemistry, physics, and mathematics are found in a much broader spectrum of universities that includes the smaller institutions as well as the larger ones.

Research Support (Measures 13-14). Measure 13, the proportion of program faculty who had received NSF, NIH, or ADAMHA4 research grant awards during the FY1978-80 period, has mean values ranging from as high as .48 and .47 in chemistry and geosciences, respectively, to .25 in statistics/biostatistics. It should be emphasized that this measure does not take into account research support that faculty members have received from sources other than these three federal agencies.

3 The index, derived by the Association of Research Libraries, reflects a number of different measures, including number of volumes, fiscal expenditures, and other factors relevant to the size of a university library. See the description of this measure presented in Appendix D.
4 Very few faculty members in mathematical and physical science programs received any research support from the Alcohol, Drug Abuse, and Mental Health Administration.

In terms of total university expenditures for R&D in a particular discipline (measure 14), the mean values are reported to range from $616,000 in mathematics to $3,996,000 in the geosciences. (R&D expenditure data are not available for statistics/biostatistics.) The large differences in reported expenditures are likely to be related to three factors: the differential availability of research support in the six disciplines, the differential average cost of doing research, and the differing numbers of individuals involved in a research effort.

Publication Records (Measures 15 and 16). Considerable diversity is found in the mean number of articles associated with a research-doctorate program (measure 15). An average of 106 articles published in the 1978-79 period is reported for programs in physics and 75 articles for programs in chemistry; in each of the other four disciplines the mean number of articles is fewer than 40. These large differences reflect both the program size in a particular discipline (i.e., the total number of faculty and other staff members involved in research) and the frequency with which scientists in that discipline publish; it may also depend on the length of a typical paper in a discipline. Mean scores are not reported on measure 16, the estimated "overall influence" of the articles attributed to a program. Since this measure is calculated from an average of journal influence weights,5 normalized for the journals covered in a particular discipline, mean differences among disciplines are uninterpretable.

Correlations with Measure 02. Relations among the program measures are of intrinsic interest and are relevant to the issue of validity of the measures as indices of the quality of a research-doctorate program. Measures that are logically related to program quality are expected to be related to each other. To the extent that they are, a stronger case might be made for the validity of each as a quality measure. A reasonable index of the relationship between any two measures is the Pearson product-moment correlation coefficient. A table of correlation coefficients between all possible pairs of measures has been presented in each of the six preceding chapters. In this chapter selected correlations are presented to determine the extent to which coefficients are comparable in the six disciplines. Special attention is given to the correlations involving the number of FY1975-79 program graduates (measure 02), the survey rating of the scholarly quality of program faculty (measure 08), university R&D expenditures in a particular discipline (measure 14), and the influence-weighted number of publications (measure 16).

Table 9.2 presents the correlations of measure 02 with each of the other measures used in the assessment. As might be expected, correlations of this measure with the other two measures of program size--number of faculty (01) and doctoral student enrollment (03)--are quite high in all six disciplines.

5 See Appendix F for a description of the derivation of this measure.
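For readers who wish to verify or extend these computations, the sketch below shows how a Pearson product-moment correlation of the kind reported in Tables 9.2-9.5 can be calculated. It is offered only as an illustration; the program values in the example are hypothetical and are not taken from the assessment data.

# Minimal sketch (Python): Pearson product-moment correlation between two
# program measures. The six "programs" below are hypothetical.

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    dev_x = [xi - mean_x for xi in x]
    dev_y = [yi - mean_y for yi in y]
    covariance = sum(dx * dy for dx, dy in zip(dev_x, dev_y))
    var_x = sum(dx * dx for dx in dev_x)
    var_y = sum(dy * dy for dy in dev_y)
    return covariance / (var_x * var_y) ** 0.5

graduates = [12, 25, 40, 18, 55, 30]        # hypothetical measure 02 values
ratings = [1.8, 2.4, 3.1, 2.0, 3.6, 2.7]    # hypothetical measure 08 values
print(round(pearson_r(graduates, ratings), 2))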

TABLE 9.2 Correlations of the Number of Program Graduates (Measure 02) with Other Measures, by Discipline

                         Chem-   Computer  Geo-                      Statistics/
                         istry   Sciences  sciences   Math   Physics  Biostat.

Program Size
  01                      .68      .62       .42       .50     .77      .53
  03                      .92      .52       .72       .85     .92      .48
Program Graduates
  04                      .02      .05      -.01       .08    -.02      .00
  05                      .38     -.07       .29       .31     .32      .04
  06                      .23      .12       .05       .18     .40      .00
  07                      .13     -.05       .36       .46     .41     -.03
Survey Results
  08                      .83      .66       .64       .70     .76      .55
  09                      .81      .68       .67       .68     .73      .63
  10                      .23     -.02       .06       .01    -.17      .17
  11                      .83      .61       .67       .72     .78      .59
University Library
  12                      .61      .44       .43       .45     .47      .11
Research Support
  13                      .57      .34       .40       .35     .13      .06
  14                      .72      .58       .25       .41     .66      N/A
Publication Records
  15                      .83      .85       .73       .75     .85      .52
  16                      .86      .84       .74       .81     .86      .48

Of greater interest are the strong positive correlations between measure 02 and measures derived from either reputational survey ratings or publication records. The coefficients describing the relationship of measure 02 with measures 15 and 16 are greater than .70 in all disciplines except statistics/biostatistics. This result is not surprising, of course, since both of the publication measures reflect total productivity and have not been adjusted for program size. The correlations of measure 02 with measures 08, 09, and 11 are almost as strong. It is quite apparent that the programs that received high survey ratings and with which evaluators were more likely to be familiar were also ones that had larger numbers of graduates. Although the committee gave serious consideration to presenting an alternative set of survey measures that were adjusted for program size, a satisfactory algorithm for making such an adjustment was not found. In attempting such an adjustment on the basis of the regression of survey ratings on measures of program size, it was found that some exceptionally large programs appeared to be unfairly penalized and that some very small programs received unjustifiably high adjusted scores.

Measure 02 also has positive correlations in most disciplines with measure 12, an index of university library size, and with measures 13 and 14, which pertain to the level of support for research in a program. Of particular note are the moderately large coefficients--in disciplines other than statistics/biostatistics and physics--for measure 13, the fraction of faculty members receiving federal research grants. Unlike measure 14, this measure has been adjusted for the number of program faculty. The correlations of measure 02 with measures 05, 06, and 07 are smaller but still positive in most of the disciplines. From this analysis it is apparent that the number of program graduates tends to be positively correlated with all other variables except measure 04--the fraction of students with national fellowship support. It is also apparent that the relationship of measure 02 with the other variables tends to be weakest for programs in statistics/biostatistics.

Correlations with Measure 08. Table 9.3 shows the correlation coefficients for measure 08, the mean rating of the scholarly quality of program faculty, with each of the other variables. The correlations of measure 08 with measures of program size (01, 02, and 03) are .40 or greater for all six disciplines. Not surprisingly, the larger the program, the more likely its faculty is to be rated high in quality. However, it is interesting to note that in all disciplines except statistics/biostatistics the correlation with the number of program graduates (measure 02) is larger than that with the number of faculty or the number of enrolled students.

Correlations of measure 08 with measure 04, the fraction of students with national fellowship awards, are positive but close to zero in all disciplines except computer sciences and mathematics. For programs in the biological and social sciences, the corresponding coefficients (not reported in this volume) are found to be greater, typically in the range of .40 to .70.
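The regression-based size adjustment that the committee attempted, described earlier in this section, can be illustrated with a brief sketch. This is not the committee's actual procedure, which is not specified in detail here; it simply shows one straightforward version--regressing mean ratings on a size measure and treating the residuals as size-adjusted scores--and all program values used are hypothetical.

# Illustrative sketch (Python) of a regression-based size adjustment:
# regress mean faculty ratings on a program-size measure and use the
# residuals as "size-adjusted" ratings. Hypothetical data; not the
# committee's algorithm.

def simple_ols(x, y):
    """Return intercept a and slope b of the least-squares line y = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
            sum((xi - mean_x) ** 2 for xi in x)
    intercept = mean_y - slope * mean_x
    return intercept, slope

faculty_size = [8, 15, 22, 30, 45, 60]           # measure 01 (hypothetical)
mean_rating = [2.1, 2.4, 2.9, 3.2, 3.8, 4.1]     # measure 08 (hypothetical)

a, b = simple_ols(faculty_size, mean_rating)
adjusted = [r - (a + b * s) for s, r in zip(faculty_size, mean_rating)]
# A positive residual means a program is rated higher than programs of
# similar size; very large or very small programs can receive extreme
# adjusted scores, which is the difficulty noted in the text.
print([round(v, 3) for v in adjusted])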

TABLE 9.3 Correlations of the Survey Ratings of Scholarly Quality of Program Faculty (Measure 08) with Other Measures, by Discipline

                         Chem-   Computer  Geo-                      Statistics/
                         istry   Sciences  sciences   Math   Physics  Biostat.

Program Size
  01                      .64      .54       .45       .48     .68      .63
  02                      .83      .66       .64       .70     .76      .55
  03                      .81      .50       .61       .64     .75      .40
Program Graduates
  04                      .11      .35       .08       .30     .15      .19
  05                      .47      .14       .50       .57     .42      .32
  06                      .28      .21       .24       .19     .42      .15
  07                      .30      .17       .58       .63     .58      .25
Survey Results
  09                      .98      .98       .97       .98     .96      .95
  10                      .35      .29       .29      -.01    -.15      .30
  11                      .96      .97       .87       .96     .96      .93
University Library
  12                      .66      .58       .58       .65     .67      .53
Research Support
  13                      .77      .59       .72       .70     .24      .53
  14                      .79      .63       .27       .42     .61      N/A
Publication Records
  15                      .80      .70       .75       .75     .85      .70
  16                      .86      .77       .77       .83     .86      .67

Perhaps in the mathematical and physical sciences, the departments with highly regarded faculty are more likely to provide support to doctoral students as teaching assistants or research assistants on faculty research grants--thereby reducing dependency on national fellowships. (The low correlation of rated faculty quality with the fraction of students with national fellowships is not, of course, inconsistent with the thesis that programs with large numbers of students are programs with large numbers of fellowship holders.)

Correlations of rated faculty quality with measure 05, shortness of time from matriculation in graduate school to award of the doctorate, are notably high for programs in mathematics, geosciences, and chemistry and still sizeable for physics and statistics/biostatistics programs. Thus, those programs producing graduates in shorter periods of time tended to receive higher survey ratings. This finding is surprising in view of the smaller correlations in these disciplines between measures of program size and shortness of time-to-Ph.D. It seems there is a tendency for programs that produce doctoral graduates in a shorter time to have more highly rated faculty, and this tendency is relatively independent of the number of faculty members.

Correlations of ratings of faculty quality with measure 06, the fraction of program graduates with definite employment plans, are moderately high in physics and somewhat lower, but still positive, in the other disciplines. In every discipline except computer sciences the correlation of measure 08 is higher with measure 07, the fraction of graduates having agreed to employment at a Ph.D.-granting institution. These coefficients are greater than .50 in mathematics, geosciences, and physics.

The correlations of measure 08 with measure 09, the rated effectiveness of doctoral education, are uniformly very high, at or above .95 in every discipline. This finding is consistent with results from the Cartter and Roose-Andersen studies.6 The coefficients describing the relationship between measure 08 and measure 11, familiarity with the work of program faculty, are also very high, ranging from .87 to .97. In general, evaluators were more likely to have high regard for the quality of faculty in those programs with which they were most familiar. That the correlation coefficients are as large as observed may simply reflect the fact that "known" programs tend to be those that have earned strong reputations.

Correlations of ratings of faculty quality with measure 10, the ratings of perceived improvement in program quality, are near zero for mathematics and physics programs and range from .29 to .35 in other disciplines. One might have expected that a program judged to have improved in quality would have been somewhat more likely to receive high ratings on measure 08 than would a program judged to have declined--thereby imposing a small positive correlation between these two variables.

6 Roose and Andersen, p. 19.

Moderately high correlations are observed in most disciplines between measure 08 and university library size (measure 12), support for research (measures 13 and 14), and publication records (measures 15 and 16). With few exceptions these coefficients are .50 or greater in all disciplines. Of particular note are the strong correlations with the two publication measures--ranging from .70 to .86. In all disciplines except statistics/biostatistics the correlations with measure 16 are higher than those with measure 15; the "weighted influence" of journals in which articles are published yields an index that tends to relate more closely to faculty reputation than does an unadjusted count of the number of articles published. Although the observed differences between the coefficients for measures 15 and 16 are not large, this result is consistent with earlier findings of Anderson et al.7

Correlations with Measure 14. Correlations of measure 14, reported dollars of support for R&D, with other measures are shown in Table 9.4. (Data on research expenditures in statistics/biostatistics are not available.) The pattern of relations is quite similar for programs in chemistry, computer sciences, and physics: moderately high correlations with measures of program size and somewhat higher correlations with both reputational survey results (except measure 10) and publication measures. For programs in mathematics many of these relations are positive but not as strong. For geoscience programs, measure 14 is related more closely to faculty size (measure 01) than to any other measure, and the correlations with rated quality of faculty and program effectiveness are lower than in any other discipline. In interpreting these relationships one must keep in mind the fact that the research expenditure data have not been adjusted for the number of faculty and other staff members involved in research in a program.

Correlations with Measure 16. Measure 16 is the number of published articles attributed to a program and adjusted for the "average influence" of the journals in which the articles appear. The correlations of this measure with all others appear in Table 9.5. Of particular interest are the high correlations with all three measures of program size and with the reputational survey results (excluding measure 10). Most of those coefficients exceed .70, although for programs in statistics/biostatistics they are below this level. Moderately high correlations are also observed between measure 16 and measures 12, 13, and 14. With the exception of computer science programs, the correlations between the adjusted publication measure and measure 05, time-to-Ph.D., range from .31 to .41. It should be pointed out that the exceptionally large coefficients reported for measure 15 result from the fact that the two publication measures are empirically as well as logically interdependent.

7 Anderson et al., p. 95.

TABLE 9.4 Correlations of the University Research Expenditures in a Discipline (Measure 14) with Other Measures, by Discipline

                         Chem-   Computer  Geo-                      Statistics/
                         istry   Sciences  sciences   Math   Physics  Biostat.

Program Size
  01                      .43      .44       .61       .18     .54      N/A
  02                      .72      .58       .25       .41     .66      N/A
  03                      .66      .43       .28       .44     .68      N/A
Program Graduates
  04                      .18      .22       .22       .29     .04      N/A
  05                      .35     -.21      -.05       .17     .31      N/A
  06                      .31     -.03      -.04       .23     .25      N/A
  07                      .20     -.16       .06       .22     .31      N/A
Survey Results
  08                      .79      .63       .27       .42     .61      N/A
  09                      .74      .61       .25       .42     .61      N/A
  10                      .14     -.02       .13      -.12    -.08      N/A
  11                      .77      .64       .18       .43     .58      N/A
University Library
  12                      .45      .16       .33       .33     .33      N/A
Research Support
  13                      .55      .10       .20       .18     .07      N/A
Publication Records
  15                      .70      .66       .42       .35     .80      N/A
  16                      .78      .73       .35       .42     .80      N/A

TABLE 9.5 Correlations of the Influence-Weighted Number of Publications (Measure 16) with Other Measures, by Discipline

                         Chem-   Computer  Geo-                      Statistics/
                         istry   Sciences  sciences   Math   Physics  Biostat.

Program Size
  01                      .65      .61       .36       .63     .72      .49
  02                      .86      .84       .74       .81     .86      .48
  03                      .84      .52       .64       .78     .85      .50
Program Graduates
  04                      .03      .20       .07       .15     .05     -.29
  05                      .41     -.04       .31       .40     .38      .37
  06                      .22      .14       .00       .16     .43      .11
  07                      .23     -.01       .39       .50     .48      .30
Survey Results
  08                      .86      .77       .77       .83     .86      .67
  09                      .82      .75       .75       .80     .82      .63
  10                      .33      .05       .09       .05    -.14      .15
  11                      .88      .74       .70       .83     .86      .66
University Library
  12                      .56      .52       .66       .59     .61      .36
Research Support
  13                      .60      .35       .51       .51     .21      .56
  14                      .78      .73       .35       .42     .80      N/A
Publication Records
  15                      .95      .98       .97       .90     .99      .98

Despite the appreciable correlations between reputational ratings of quality and program size measures, the functional relations between the two probably are complex. If there is a minimum size for a high quality program, this size is likely to vary from discipline to discipline. Increases in size beyond the minimum may represent more high quality faculty, or a greater proportion of inactive faculty, or a faculty with heavy teaching responsibilities. In attempting to select among these alternative interpretations, a single correlation coefficient provides insufficient guidance.

Nonetheless, certain similarities may be seen in the pattern of correlations among the measures. High correlations consistently appear among measures 08, 09, and 11 from the reputational survey, and these measures also are prominently related to program size (measures 01, 02, and 03), to publication productivity (measures 15 and 16), to R&D expenditures (measure 14), and to library size (measure 12). These results show that for all disciplines the reputational rating measures (08, 09, and 11) tend to be associated with program size and with other correlates of size--publication volume, R&D expenditures, and library size. Furthermore, for most disciplines the reputational measures 08, 09, and 11 tend to be positively related to shortness of time-to-Ph.D. (measure 05), to employment prospects of program graduates (measures 06 and 07), and to fraction of faculty holding research grants (measure 13). These latter measures are not consistently correlated highly with the size measures or with any other measures besides reputational ratings.

ANALYSIS OF THE SURVEY RESPONSE

Measures 08-11, derived from the reputational survey, may be of particular interest to many readers since measures of this type have been the most widely used (and frequently criticized) indices of quality of graduate programs. In designing the survey instrument for this assessment the committee made several changes in the form that had been used in the Roose-Andersen study. The modifications served two purposes: to provide the evaluators with a clearer understanding of the programs that they were asked to judge and to provide the committee with supplemental information for the analysis of the survey response. One change was to restrict to 50 the number of programs that any individual was asked to evaluate. Probably the most important change was the inclusion on the survey form of lists of names and ranks of individual faculty members involved in the research-doctorate programs to be evaluated, together with the number of doctoral degrees awarded in the previous five years. Ninety percent of the evaluators were sent forms with faculty names and numbers of degrees awarded; the remaining ten percent were given forms without this information so that an analysis could be made of the effect of this modification on survey results. Another change was the addition of a question concerning an evaluator's familiarity with each of the programs. In addition to providing an index of program recognition (measure 11), the inclusion of this question permits a comparison of the ratings furnished by individuals who had considerable familiarity with a particular program and the ratings by those not as familiar with the program.

TABLE 9.6 Distribution of Responses to Each Survey Item, by Discipline

                                       Chem-   Computer  Geo-                      Statistics/
Survey Measure                 Total   istry   Sciences  sciences   Math  Physics  Biostat.

08 SCHOLARLY QUALITY OF PROGRAM FACULTY
  Distinguished                  7.2     6.3      7.5       6.5      7.7    7.9       8.3
  Strong                        15.9    15.1     12.5      19.1     15.5   13.6      20.3
  Good                          21.2    22.4     20.4      22.8     19.2   19.6      22.7
  Adequate                      16.3    19.5     19.4      13.4     14.5   14.6      16.2
  Marginal                       7.8    10.4      9.8       4.7      6.9    6.9       7.3
  Not Sufficient for
    Doctoral Education           2.2     3.0      3.0        .8      2.5    1.3       2.7
  Don't Know Well Enough
    to Evaluate                 29.4    23.3     27.4      32.7     33.8   36.1      22.4
  TOTAL                        100.0   100.0    100.0     100.0    100.0  100.0     100.0

09 EFFECTIVENESS OF PROGRAM IN EDUCATING SCIENTISTS
  Extremely Effective            8.0     8.7      7.9       8.3      7.4    7.8       7.2
  Reasonably Effective          28.7    32.5     25.7      34.1     22.1   27.0      29.0
  Minimally Effective           13.2    15.0     15.7      12.1     11.3   11.1      15.1
  Not Effective                  3.1     3.6      4.6       1.7      3.4    2.0       3.8
  Don't Know Well Enough
    to Evaluate                 47.0    40.2     46.1      43.8     55.8   52.1      45.0
  TOTAL                        100.0   100.0    100.0     100.0    100.0  100.0     100.0

10 CHANGE IN PROGRAM QUALITY IN LAST FIVE YEARS
  Better                        11.5    12.7     15.7      14.2      9.3    9.2       9.2
  Little or No Change           29.4    33.9     25.9      27.1     25.8   28.4      32.5
  Poorer                         6.2     8.4      8.2       6.6      3.5    5.1       5.1
  Don't Know Well Enough
    to Evaluate                 52.9    44.9     50.1      52.1     61.5   57.3      53.2
  TOTAL                        100.0   100.0    100.0     100.0    100.0  100.0     100.0

11 FAMILIARITY WITH WORK OF PROGRAM FACULTY
  Considerable                  20.0    20.9     20.2      22.3     17.9   16.3      24.0
  Some                          41.1    43.1     42.8      40.7     38.8   38.2      43.6
  Little or None                37.2    34.6     34.6      35.4     41.8   43.0      31.1
  No Response                    1.7     1.4      2.3       1.6      1.5    2.5       1.3
  TOTAL                        100.0   100.0    100.0     100.0    100.0  100.0     100.0

NOTE: For survey measures 08, 09, and 10 the "don't know" category includes a small number of cases for which the respondents provided no response to the survey item.

Each evaluator was also asked to identify his or her own institution of highest degree and current field of specialization. This information enables the committee to compare, for each program, the ratings furnished by alumni of a particular institution with the ratings by other evaluators as well as to examine differences in the ratings supplied by evaluators in certain specialty fields.

Before examining factors that may have influenced the survey results, some mention should be made of the distributions of responses to the four survey items and the reliability (consistency) of the ratings. As Table 9.6 shows, the response distribution for each survey item does not vary greatly from discipline to discipline. For example, in judging the scholarly quality of faculty (measure 08), survey respondents in each discipline rated between 6 and 8 percent of the programs as being "distinguished" and between 1 and 3 percent as "not sufficient for doctoral education." In evaluating the effectiveness in educating research scholars/scientists, 7 to 9 percent of the programs were rated as being "extremely effective" and approximately 2 to 5 percent as "not effective."

Of particular interest in this table are the frequencies with which evaluators failed to provide responses on survey measures 08, 09, and 10. Approximately 30 percent of the total number of evaluations requested for measure 08 were not furnished because survey respondents in the mathematical and physical sciences felt that they were not familiar enough with a particular program to evaluate it. The corresponding percentages of "don't know" responses for measures 09 and 10 are considerably larger--47 and 53 percent, respectively--suggesting that survey respondents found it more difficult (or were less willing) to judge program effectiveness and change than to judge the scholarly quality of program faculty. The large fractions of "don't know" responses are a matter of some concern. However, given the broad coverage of research-doctorate programs, it is not surprising that faculty members would be unfamiliar with many of the less distinguished programs. As shown in Table 9.7, survey respondents in each discipline were much more likely to furnish evaluations for programs with high reputational standings than they were for programs of lesser distinction. For example, for mathematical and physical science programs that received mean ratings of 4.0 or higher on measure 08, almost 95 percent of the evaluations requested on measure 08 were provided; 85 and 77 percent were provided on measures 09 and 10. In contrast, the corresponding response rates for programs with mean ratings below 2.0 are much lower--52, 35, and 28 percent response on measures 08, 09, and 10, respectively.

Of great importance to the interpretation of the survey results is the reliability of the response. How much confidence can one have in the reliability of a mean rating reported for a particular program? In the first table in each of the preceding six chapters, estimated standard errors associated with the mean ratings of every program are presented for all four survey items (measures 08-11). While there is some variation in the magnitude of the standard errors reported in every discipline, they rarely exceed .15 for any of the four measures and typically range from .05 to .10. For programs with higher mean ratings the estimated errors associated with these means are generally smaller--a finding consistent with the fact that survey respondents were more likely to furnish evaluations for programs with high reputational standing.
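The standard errors cited here are not derived in this chapter. Purely as a point of reference, the usual estimate of the standard error of the mean of n independent ratings is

$$\operatorname{SE}(\bar{x}) \;=\; \frac{s}{\sqrt{n}},$$

where s is the standard deviation of the individual ratings a program received and n is the number of evaluators who rated it. For example, if 100 evaluators rate a program and their ratings have a standard deviation of 1.0, the standard error of the mean rating is 0.1. The committee's exact computation is not reproduced in this chapter and may differ in detail from this textbook expression.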

TABLE 9.7 Survey Item Response Rates, by Discipline and Mean Rating on Measure 08

                                  Chem-   Computer  Geo-                      Statistics/
                          Total   istry   Sciences  sciences   Math  Physics  Biostat.

08 SCHOLARLY QUALITY OF PROGRAM FACULTY
Mean Rating on Measure 08
  4.0 or Higher            94.7    98.0     99.4      87.6     95.9   93.7      97.0
  3.0 - 3.9                85.9    91.9     91.8      76.3     83.6   83.6      91.3
  2.0 - 2.9                67.7    77.4     76.4      60.7     62.5   61.5      72.3
  Less than 2.0            51.7    61.2     51.6      40.1     45.8   39.9      58.5

09 EFFECTIVENESS OF PROGRAM IN EDUCATING SCIENTISTS
Mean Rating on Measure 08
  4.0 or Higher            85.2    92.6     90.9      79.4     80.9   85.4      85.4
  3.0 - 3.9                68.1    77.1     72.4      66.2     56.3   65.6      68.8
  2.0 - 2.9                47.5    57.4     53.1      47.6     37.8   42.8      47.1
  Less than 2.0            34.9    42.9     36.8      31.6     28.5   25.7      36.3

10 CHANGE IN PROGRAM QUALITY IN LAST FIVE YEARS
Mean Rating on Measure 08
  4.0 or Higher            76.8    88.3     85.6      69.0     70.0   76.7      74.7
  3.0 - 3.9                62.3    74.0     67.5      56.8     51.9   61.9      60.3
  2.0 - 2.9                43.1    54.4     52.2      40.9     33.7   38.5      39.9
  Less than 2.0            27.7    35.5     29.1      22.7     22.0   19.9      27.7

The "split-half" correlations8 presented in Table 9.8 give an indication of the overall reliability of the survey results in each discipline and for each measure. In the derivation of these correlations, individual ratings of each program were randomly divided into two groups (A and B), and a separate mean rating was computed for each group. The last column in Table 9.8 reports the correlations between the mean program ratings of the two groups and is not corrected for the fact that the mean ratings of each group are based on only half rather than a full set of the responses.9 As the reader will note, the coefficients reported for measure 08, the scholarly quality of program faculty, are in the range of .96 to .98--indicating a high degree of consistency in evaluators' judgments.

8 For a discussion of the interpretation of "split-half" coefficients, see Robert L. Thorndike and Elizabeth Hagan, Measurement and Evaluation in Psychology and Education, John Wiley & Sons, New York, 1969, pp. 182-185.
9 To compensate for the smaller sample size the "split-half" coefficient may be adjusted using the Spearman-Brown formula: r' = 2r/(1 + r). This adjustment would have the effect of increasing a correlation of .70, for example, to .82; a correlation of .80 to .89; a correlation of .90 to .95; and a correlation of .95 to .97.
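The split-half procedure just described, together with the Spearman-Brown adjustment of footnote 9, can be sketched as follows. The sketch is illustrative only: the ratings shown are hypothetical, and the committee's own computations may differ in detail.

# Sketch (Python) of a split-half reliability check: each program's
# individual ratings are split at random into groups A and B, the two
# group means are correlated across programs, and the Spearman-Brown
# formula (footnote 9) adjusts for the halved number of raters.
# All ratings below are hypothetical.

import random

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(ratings_by_program, seed=0):
    """ratings_by_program: one list of individual ratings per program."""
    rng = random.Random(seed)
    means_a, means_b = [], []
    for ratings in ratings_by_program:
        shuffled = ratings[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        group_a, group_b = shuffled[:half], shuffled[half:]
        means_a.append(sum(group_a) / len(group_a))
        means_b.append(sum(group_b) / len(group_b))
    r = pearson_r(means_a, means_b)
    adjusted = 2 * r / (1 + r)   # Spearman-Brown correction
    return r, adjusted

# Hypothetical 0-5 ratings for four programs:
programs = [
    [4, 5, 4, 4, 5, 4, 5, 4],
    [3, 3, 4, 2, 3, 3, 2, 4],
    [2, 3, 2, 3, 2, 2, 1, 2],
    [1, 1, 2, 0, 1, 2, 1, 1],
]
print(split_half_reliability(programs))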

TABLE 9.8 Correlations Between Two Sets of Average Ratings from Two Randomly Selected Groups of Evaluators in the Mathematical and Physical Sciences

MEASURE 08: SCHOLARLY QUALITY OF PROGRAM FACULTY
                            Mean Rating         Std. Deviation
Discipline                Group A  Group B     Group A  Group B       N    Correlation r
Chemistry                   2.55     2.53        1.00     1.00      145        .99
Computer Sciences           2.51     2.50         .97     1.00       57        .96
Geosciences                 2.92     2.93         .83      .82       91        .97
Mathematics                 2.64     2.66        1.03     1.00      114        .98
Physics                     2.66     2.63         .99     1.01      122        .96
Statistics/Biostat.         2.80     2.79         .94      .97       63        .98

MEASURE 09: EFFECTIVENESS OF PROGRAM IN EDUCATING SCHOLARS
                            Mean Rating         Std. Deviation
Discipline                Group A  Group B     Group A  Group B       N    Correlation r
Chemistry                   1.63     1.64         .54      .54      145        .95
Computer Sciences           1.52     1.50         .56      .56       57        .95
Geosciences                 1.74     1.76         .44      .45       91        .94
Mathematics                 1.54     1.55         .57      .59      114        .91
Physics                     1.63     1.65         .52      .51      122        .89
Statistics/Biostat.         1.55     1.57         .54      .53       63        .97

MEASURE 10: IMPROVEMENT IN PROGRAM IN LAST FIVE YEARS
                            Mean Rating         Std. Deviation
Discipline                Group A  Group B     Group A  Group B       N    Correlation r
Chemistry                   1.05     1.06         .22      .23      145        .76
Computer Sciences           1.14     1.11         .28      .29       57        .82
Geosciences                 1.15     1.13         .28      .30       91        .77
Mathematics                 1.12     1.14         .22      .22      114        .62
Physics                     1.10     1.11         .26      .25      122        .64
Statistics/Biostat.         1.06     1.07         .28      .27       63        .85

MEASURE 11: FAMILIARITY WITH WORK OF PROGRAM FACULTY
                            Mean Rating         Std. Deviation
Discipline                Group A  Group B     Group A  Group B       N    Correlation r
Chemistry                    .86      .86         .43      .41      145        .95
Computer Sciences            .84      .86         .42      .45       57        .94
Geosciences                  .87      .86         .36      .37       91        .93
Mathematics                  .75      .76         .39      .40      114        .95
Physics                      .71      .73         .42      .42      122        .96
Statistics/Biostat.          .92      .94         .42      .40       63        .95

TABLE 9.9 Comparison of Mean Ratings for 11 Mathematics Programs Included in Two Separate Survey Administrations

                                                    Evaluators Rating the Same
                        All Evaluators              Program in Both Surveys
Survey                First         Second          First         Second
Measure              N    Mean     N    Mean       N    Mean     N    Mean

Program A
  08               100    4.9    114    4.9       50    4.9     50    4.9
  09                90    2.7    100    2.8       42    2.7     43    2.7
  10                74    1.2     83    1.2       38    1.1     34    1.2
  11               100    1.6    115    1.6       50    1.5     50    1.6
Program B
  08                94    4.6    115    4.6       48    4.6     50    4.5
  09                81    2.6     91    2.5       40    2.6     39    2.5
  10                69    1.0     82    1.0       37    1.0     36    0.9
  11                98    1.4    116    1.4       50    1.5     50    1.5
Program C
  08                86    3.4    103    3.6       42    3.4     44    3.5
  09                56    2.0     66    2.1       28    2.1     29    2.0
  10                55    1.1     62    1.3       30    1.2     27    1.4
  11                99    1.0    116    1.1       50    1.1     50    1.0
Program D
  08                74    3.0     93    3.0       37    2.8     38    2.9
  09                50    1.8     48    1.6       27    1.7     16    1.6
  10                46    1.4     52    1.5       24    1.4     23    1.5
  11                90    1.0    113    0.9       46    1.0     46    0.9
Program E
  08                69    3.0     95    3.1       39    3.0     46    3.1
  09                40    1.8     60    1.9       25    1.8     30    1.8
  10                36    0.8     58    0.9       24    0.8     29    0.9
  11                96    0.8    115    0.9       52    0.9     52    1.0
Program F
  08                63    2.9     90    3.0       26    3.0     32    3.1
  09                35    1.8     46    1.7       10    1.6     13    1.8
  10                32    1.1     43    1.1       11    1.3     12    1.2
  11                95    0.7    115    0.8       43    0.7     44    0.7
Program G
  08                69    2.7     92    2.8       39    2.7     39    3.0
  09                35    1.7     45    1.6       17    1.7     19    1.7
  10                36    1.1     43    1.2       17    1.1     19    1.2
  11                85    0.9    116    0.8       46    0.9     46    0.9
Program H
  08                58    2.2     73    2.5       36    2.2     37    2.4
  09                32    1.3     43    1.3       22    1.2     19    1.3
  10                30    1.5     39    1.5       20    1.7     17    1.4
  11                90    0.7    116    0.6       51    0.7     52    0.6
Program I
  08                55    2.0     74    1.9       30    1.9     30    2.0
  09                33    1.0     41    0.9       19    1.0     18    0.8
  10                27    1.2     31    1.1       15    1.1     13    1.2
  11                99    0.5    115    0.5       50    0.5     50    0.5
Program J
  08                51    1.5     67    1.5       26    1.4     28    1.4
  09                31    0.8     36    0.7       14    0.6     14    0.7
  10                26    1.2     23    1.1       14    1.2     12    1.3
  11                96    0.5    113    0.3       49    0.4     48    0.4
Program K
  08                33    1.2     48    1.2       17    1.1     21    1.4
  09                19    0.8     21    0.5       11    0.6      8    0.4
  10                12    0.8     15    0.9        5    1.0      5    0.8
  11                99    0.2    114    0.2       48    0.2     47    0.2

The correlations reported for measures 09 and 11, the rated effectiveness of a program and evaluators' familiarity with a program, are somewhat lower but still at a level of .92 or higher in each discipline. Not surprisingly, the reliability coefficients for ratings of change in program quality in the last five years (measure 10) are considerably lower, ranging from .67 to .88 in the six mathematical and physical science disciplines. While these coefficients represent tolerable reliability, it is quite evident that the responses to measure 10 are not as reliable as the responses to the other three items.

Further evidence of the reliability of the survey responses is presented in Table 9.9. As mentioned in Chapter VI, 11 mathematics programs, selected at random, were included on a second form sent to 178 survey respondents in this discipline, and 116 individuals (65 percent) furnished responses to the second survey. A comparison of the overall results of the two survey administrations (columns 2 and 4 in Table 9.9) demonstrates the consistency of the ratings provided for each of the 11 programs. The average absolute difference observed in the two sets of mean ratings is less than 0.1 for each measure. Columns 6 and 8 in this table report the results based on the responses of only those evaluators who had been asked to consider a particular program in both administrations of the survey. (For a given program approximately 40-45 percent of the 116 respondents to the second survey were asked to evaluate that program in the prior survey.) It is not surprising to find comparably small differences in the mean ratings provided by this subgroup of evaluators.

Critics of past reputational studies have expressed concern about the credibility of reputational assessments when evaluators provide judgments of programs about which they may know very little. As already mentioned, survey participants in this study were offered the explicit alternative, "Don't know well enough to evaluate." This response option was quite liberally used for measures 08, 09, and 10, as shown in Table 9.6. In addition, evaluators were asked to indicate their degree of familiarity with each program. Respondents reported "considerable" familiarity with an average of only one program in every five. While this finding supports the conjecture that many program ratings are based on limited information, the availability of reported familiarity permits us to analyze how ratings vary as a function of familiarity.

This issue can be addressed in more than one way. It is evident from the data reported in Table 9.10 that mean ratings of the scholarly quality of program faculty tend to be higher if the evaluator has considerable familiarity with the program. There is nothing surprising or, for that matter, disconcerting about such an association. When a particular program fails to provoke more than vague images in the evaluator's mind, he or she is likely to take this as some indication that the program is not an extremely lustrous one on the national scene. While visibility and quality are scarcely the same, the world of research in higher education is structured to encourage high quality to achieve high visibility, so that any association of the two is far from spurious.

TABLE 9.10 Mean Ratings of Scholarly Quality of Program Faculty, by Evaluator's Familiarity with Work of Faculty

                              MEAN RATINGS            CORRELATION
                          Considerable  Some/Little        r         N
Chemistry                     2.81         2.46           .93       145
Computer Sciences             2.83         2.47           .89        55
Geosciences                   3.24         2.80           .89        91
Mathematics                   3.05         2.55           .92       114
Physics                       3.00         2.64           .87       116
Statistics/Biostat.           2.99         2.69           .94        63

NOTE: N reported in the last column represents the number of programs with a rating from at least one evaluator in each of the two groups.

From the data presented in Table 9.10 it is evident that if mean ratings were computed on the basis of the responses of only those most familiar with programs, the values reported for individual programs would be increased. A largely independent question is whether a restriction of this kind would substantially change our sense of the relative standings of programs on this measure. Quite naturally, the answer depends to some degree on the nature of the restriction imposed. For example, if we exclude evaluations provided by those who confessed "little or no" familiarity with particular programs, then the revised mean ratings would be correlated at a level of at least .99 with the mean ratings computed using all of the data.10 (This similarity arises, in part, because only a small fraction of evaluations are given on the basis of no more than "little" familiarity with the program.)

The third column in Table 9.10 presents the correlation in each discipline between the array of mean ratings supplied by respondents claiming "considerable" familiarity and the mean ratings of those indicating "some" or "little or no" familiarity with particular programs. This coefficient is a rather conservative estimate of agreement since there is not a sufficient number of ratings from those with "considerable" familiarity to provide highly stable means. Were more such ratings available, one might expect the correlations to be higher. However, even in the form presented, the correlations, which are at least .92 in all six disciplines, are high enough to suggest that the relative standing of programs on measure 08 is not greatly affected by the admixture of ratings from evaluators who recognize that their knowledge of a given program is limited.

As mentioned previously, 90 percent of the survey sample members were supplied the names of faculty members associated with each program to be evaluated, along with the reported number of program graduates (Ph.D. or equivalent degrees) in the previous five years.

10 These correlations, not reported here, were found to exceed .995 for program ratings in chemistry, geosciences, mathematics, and statistics/biostatistics.

TABLE 9.11 Item Response Rate on Measure 08, by Selected Characteristics of Survey Evaluators in the Mathematical and Physical Sciences

                                  Chem-   Computer  Geo-                      Statistics/
                          Total   istry   Sciences  sciences   Math  Physics  Biostat.

EVALUATOR'S FAMILIARITY WITH PROGRAM
  Considerable            100.0   100.0    100.0     100.0    100.0  100.0     100.0
  Some                     98.2    98.8     97.2      98.1     98.0   98.4      98.2
  Little or None           26.4    36.6     29.2      13.5     23.6   22.0      33.3
TYPE OF SURVEY FORM
  Names                    70.6    77.0     72.4      67.9     65.1   63.3      78.7
  No Names                 70.8    73.6     74.2      62.6     74.7   69.3      69.8
INSTITUTION OF HIGHEST DEGREE
  Alumni                   98.0    98.1    100.0      95.1     98.8  100.0      97.1
  Nonalumni                70.4    76.5     72.3      67.0     65.9   63.6      77.3
EVALUATOR'S PROXIMITY TO PROGRAM
  Same Region              81.8    87.7     79.9      81.8     77.2   78.5      83.2
  Outside Region           69.0    75.1     71.4      65.3     64.5   61.8      76.7

NOTE: The item response rate is the percentage of the total ratings requested from survey participants that included a response other than "don't know."

Since earlier reputational surveys had not provided such information, 10 percent of the sample members, randomly selected, were given forms without faculty names or doctoral data, as a "control group." Although one might expect that those given faculty names would have been more likely than other survey respondents to provide evaluations of the scholarly quality of program faculty, no appreciable differences were found (Table 9.11) between the two groups in their frequency of response to this survey item. (The reader may recall that the provision of faculty names apparently had little effect on survey sample members' willingness to complete and return their questionnaires.11)

The mean ratings provided by the group furnished faculty names are lower than the mean ratings supplied by other respondents (Table 9.12). Although the differences are small, they attract attention because they are reasonably consistent from discipline to discipline and because the direction of the differences was not anticipated. After all, those programs more familiar to evaluators tended to receive higher ratings, yet when steps were taken to enhance the evaluator's familiarity, the resulting ratings are somewhat lower. One post hoc interpretation of this finding is that a program may be considered to have distinguished faculty if even only a few of its members are considered by the evaluator to be outstanding in their field.

11 See Table 2.3.

However, when a full list of program faculty is provided, the evaluator may be influenced by the number of individuals whom he or she could not consider to be distinguished. Thus, the presentation of these additional, unfamiliar names may occasionally result in a lower rating of program faculty. However interesting these effects may be, one should not lose sight of the fact that they are small at best and that their existence does not necessarily imply that a program's relative standing on measure 08 would differ much whichever type of survey form were used. Since only about 1 in 10 ratings was supplied without the benefit of faculty names, it is hard to establish any very stable picture of relative mean ratings of individual programs. However, the correlations between the mean ratings supplied by the two groups are reasonably high--ranging from .85 to .94 in the six disciplines (Table 9.12). Were these coefficients adjusted for the fact that the group furnished forms without names constituted only about 10 percent of the survey respondents, they would be substantially larger. From this result it seems reasonable to conclude that differences in the alternative survey forms used are not likely to be responsible for any large-scale reshuffling in the reputational ranking of programs on measure 08. It also suggests that the inclusion of faculty names in the committee's assessment need not prevent comparisons of the results with those obtained from the Roose-Andersen survey.

Another factor that might be thought to influence an evaluator's judgment about a particular program is the geographic proximity of that program to the evaluator. There is enough regional traffic in academic life that one might expect proximate programs to be better known than those in distant regions of the country. This hypothesis may apply especially to the smaller and less visible programs and is confirmed by the survey results.

TABLE 9.12 Mean Ratings of Scholarly Quality of Program Faculty, by Type of Survey Form Provided to Evaluator

                              MEAN RATINGS         CORRELATION
                           Names     No Names           r         N
Chemistry                   2.53       2.66            .93       145
Computer Sciences           2.49       2.61            .93        57
Geosciences                 2.93       3.01            .88        90
Mathematics                 2.62       2.72            .94       113
Physics                     2.62       2.88            .85       122
Statistics/Biostat.         2.79       2.85            .92        63

NOTE: N reported in the last column represents the number of programs with a rating from at least one evaluator in each of the two groups.

Another factor that might be thought to influence an evaluator's judgment about a particular program is the geographic proximity of that program to the evaluator. There is enough regional traffic in academic life that one might expect proximate programs to be better known than those in distant regions of the country. This hypothesis may apply especially to the smaller and less visible programs and is confirmed by the survey results. For purposes of analysis, programs were assigned to one of nine geographic regions12 in the United States, and ratings of programs within an evaluator's own region are categorized in Table 9.13 as "nearby." Ratings of programs in any of the other eight regions were put in the "outside" group. Findings reported elsewhere in this chapter confirm that evaluators were more likely to provide ratings if a program was within their own region of the country,13 and it is reasonable to imagine that the smaller and the less visible programs received a disproportionate share of their ratings either from evaluators within their own region or from others who for one reason or another were particularly familiar with programs in that region. Although the data in Table 9.13 suggest that "nearby" programs were given higher ratings than those outside the evaluator's region, the differences in reported means are quite small and probably represent no more than a secondary effect that might be expected because, as we have already seen, evaluators tended to rate higher those programs with which they were more familiar. Furthermore, the high correlations found between the mean ratings of the two groups indicate that the relative standings of programs are not dramatically influenced by the geographic proximity of those evaluating them.

TABLE 9.13 Mean Ratings of Scholarly Quality of Program Faculty, by Evaluator's Proximity to Region of Program

                         MEAN RATINGS          CORRELATION
                         Nearby   Outside      r        N
Chemistry                2.59     2.54         .95      144
Computer Sciences        2.51     2.52         .95       55
Geosciences              3.00     2.94         .93       87
Mathematics              2.74     2.64         .94      114
Physics                  2.75     2.65         .88      120
Statistics/Biostat.      2.96     2.77         .94       62

NOTE: N reported in last column represents the number of programs with a rating from at least one evaluator in each of the two groups.

12 See Appendix I for a list of the states included in each region.
13 See Table 9.11.
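The "nearby"/"outside" split rests on nothing more than a state-to-region lookup applied to both the evaluator and the program; the group means and correlations of Table 9.13 then follow exactly as in the form-type comparison above. A sketch, with a deliberately abbreviated and hypothetical region table (Appendix I gives the committee's actual nine-region scheme):

```python
import pandas as pd

# Abbreviated, hypothetical state-to-region lookup; the real assignment
# uses the nine regions listed in Appendix I.
REGION = {"MA": "New England", "NY": "Mideast", "IL": "Great Lakes",
          "CA": "Far West", "TX": "Southwest"}

ratings = pd.DataFrame({
    "program":         ["P1", "P1", "P2", "P2"],
    "program_state":   ["MA", "MA", "CA", "CA"],
    "evaluator_state": ["NY", "MA", "CA", "TX"],
    "rating":          [4.0, 5.0, 3.0, 2.0],
})

# A rating is "nearby" when the evaluator and the program fall in the
# same region, and "outside" otherwise.
same = (ratings["program_state"].map(REGION)
        == ratings["evaluator_state"].map(REGION))
ratings["proximity"] = same.map({True: "nearby", False: "outside"})

print(ratings.pivot_table(index="program", columns="proximity",
                          values="rating", aggfunc="mean"))
```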

Another consideration that troubles some critics is that large programs may be unfairly favored in a faculty survey because they are likely to have more alumni contributing to their ratings who, it would stand to reason, would be generous in the evaluations of their alma maters. Information collected in the survey on each evaluator's institution of highest degree enables us to investigate this concern. The findings presented in Table 9.14 support the hypothesis that alumni provided generous ratings--with differences in the mean ratings (for measure 08) of alumni and nonalumni ranging from .24 to .58 in the six disciplines. It is interesting to note that the largest differences are found in statistics/biostatistics and computer sciences, the disciplines with the fewest programs.

TABLE 9.14 Mean Ratings of Scholarly Quality of Program Faculty, by Evaluator's Institution of Highest Degree

                         MEAN RATINGS             NUMBER OF PROGRAMS
                         Alumni   Nonalumni       WITH ALUMNI RATINGS (N)
Chemistry                3.88     3.60            37
Computer Sciences        3.56     3.02            26
Geosciences              3.83     3.51            34
Mathematics              3.73     3.41            37
Physics                  4.11     3.87            27
Statistics/Biostat.      3.90     3.32            35

NOTE: The pairs of means reported in each discipline are computed for a subset of programs with a rating from at least one alumnus and are substantially greater than the mean ratings for the full set of programs in each discipline.

Given the appreciable differences between the ratings furnished by program alumni and other evaluators, one might ask how much effect this has had on the overall results of the survey. The answer is "very little." As shown in the table, in chemistry and physics only one program in every four received ratings from any alumnus; in statistics/biostatistics slightly more than half of the programs were evaluated by one or more alumni.14 Even in these disciplines, however, the fraction of alumni providing ratings of a program is always quite small and should have had minimal impact on the overall mean rating of any program. To be certain that this was the case, mean ratings of the scholarly quality of faculty were recalculated for every mathematical and physical science program--with the evaluations provided by alumni excluded. The results were compared with the mean scores based on a full set of evaluations. Out of the 592 mathematical and physical science programs evaluated in the survey, only 1 program (in geosciences) had an observed difference as large as 0.2, and for 562 programs (95 percent) the mean ratings remained unchanged (to the nearest tenth of a unit). On the basis of these findings the committee saw no reason to exclude alumni ratings in the calculation of program means.

14 Because of the small number of alumni ratings in every discipline, the mean ratings for this group are unstable and therefore the correlations between alumni and nonalumni mean ratings are not reported.
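The committee's check can be expressed as a simple leave-out recomputation: drop the alumni ratings, recompute each program's mean, and count how many rounded means move. A sketch under an assumed file layout (the is_alumnus flag and the small data set are illustrative only):

```python
import pandas as pd

# Hypothetical layout: one row per rating, flagging whether the evaluator's
# highest degree came from the institution offering the rated program.
ratings = pd.DataFrame({
    "program":    ["P1", "P1", "P1", "P2", "P2", "P3", "P3"],
    "is_alumnus": [True, False, False, False, False, True, False],
    "rating":     [5.0, 3.0, 4.0, 2.0, 3.0, 4.0, 4.0],
})

full_means = ratings.groupby("program")["rating"].mean()
no_alumni_means = (ratings[~ratings["is_alumnus"]]
                   .groupby("program")["rating"].mean())

# Compare the two sets of program means to the nearest tenth of a unit,
# as in the committee's check.
diff = (full_means.round(1) - no_alumni_means.round(1)).abs()
print(diff)
print("programs shifted by 0.2 or more:", int((diff >= 0.2).sum()))
```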

Another concern that some critics have is that a survey evaluation may be affected by the interaction of the research interests of the evaluator and the area(s) of focus of the research-doctorate program to be rated. It is said, for example, that some narrowly focused programs may be strong in a particular area of research but that this strength may not be recognized by a large fraction of evaluators, since relatively few happen to be knowledgeable in this area. This is a concern more difficult to address than those discussed in the preceding pages since little or no information is available about the areas of focus of the programs being evaluated (although in certain disciplines the title of a department or academic unit may provide a clue). To obtain a better understanding of the extent to which an evaluator's field of specialty may have influenced the ratings he or she has provided, evaluators in physics and in statistics/biostatistics were separated into groups according to their specialty fields (as reported on the survey questionnaire). In physics, Group A includes those specializing in elementary particles and nuclear structure, and Group B is made up of those in all other areas of physics. In statistics/biostatistics, Group A consists of evaluators who designated biostatistics or biomathematics as their specialty and Group B of those in all other specialty areas of statistics. The mean ratings of the two groups in each discipline are reported in Table 9.15.

TABLE 9.15 Mean Ratings of Scholarly Quality of Program Faculty, by Evaluator's Field of Specialty Within Physics or Statistics/Biostatistics

PHYSICS: Group A includes evaluators in elementary particles and nuclear structure; Group B includes those in atomic/molecular, solid state, and other fields of physics.
STATISTICS/BIOSTATISTICS: Group A includes evaluators in biostatistics, biometrics, and epidemiology; Group B includes those in all other fields of statistics.

                         MEAN RATINGS          CORRELATION
                         Group A  Group B      r        N
Physics                  2.58     2.68         .95      122
Statistics/Biostat.      3.13     2.73         .93       63

NOTE: N reported in last column represents the number of programs with a rating from at least one evaluator in each of the two groups.

The program ratings

supplied by evaluators in elementary particles and nuclear structure are, on the average, slightly below those provided by other physicists. The mean ratings of the biostatistics group are typically higher than those of other statisticians. Despite these differences there is a high degree of correlation in the mean ratings provided by the two groups in each discipline. Although the differences in the mean ratings of biostatisticians (Group A) and other statisticians (Group B) are comparatively large, a detailed inspection of the individual ratings reveals that biomedical evaluators rated programs appreciably higher regardless of whether a program was located in a department of biostatistics (and related fields) or in a department outside the biomedical area. Although one cannot conclude from these findings that an evaluator's specialty field has no bearing on how he or she rates a program, these findings do suggest that the relative standings of programs in physics and statistics/biostatistics would not be greatly altered if the ratings by either group were discarded.

INTERPRETATION OF REPUTATIONAL SURVEY RATINGS

It is not hard to foresee that results from this survey will receive considerable attention through enthusiastic and uncritical reporting in some quarters and sharp castigation in others. The study committee understands the grounds for both sides of this polarized response but finds that both tend to be excessive. It is important to make clear how we view these ratings as fitting into the larger study of which they are a part.

The reputational results are likely to receive a disproportionate degree of attention for several reasons, including the fact that they reflect the opinions of a large group of faculty colleagues and that they form a bridge with earlier studies of graduate programs. But the results will also receive emphasis because they alone, among all of the measures, seem to address quality in an overall or global fashion. While most recognize that "objective" program characteristics (e.g., publication productivity, research funding, or library size) have some bearing on program quality, probably no one would contend that a single one of these measures encompasses all that need be known about the quality of research-doctorate programs. Each is obviously no more than an indicator of some aspect of program quality. In contrast, the reputational ratings are global from the start because the respondents are asked to take into account many objective characteristics and to arrive at a general assessment of the quality of the faculty and effectiveness of the program. This generality has self-evident appeal.

On the other hand, it is wise to keep in mind that these reputational ratings are measures of perceived program quality rather than of "quality" in some ideal or absolute sense. What this means is that, just as for all of the more objective measures, the reputational

ratings represent only a partial view of what most of us would consider quality to be; hence, they must be kept in careful perspective. Some critics may argue that such ratings are positively misleading because of a variety of methodological artifacts or because they are supplied by "judges" who often know very little about the programs they are rating. The committee has conducted the survey in a way that permits the empirical examination of a number of the alleged artifacts and, although our analysis is by no means exhaustive, the general conclusion is that their effects are slight.

Among the criticisms of reputational ratings from prior studies are some that represent a perspective that may be misguided. This perspective assumes that one asks for ratings in order to find out what quality really is and that to the degree that the ratings miss the mark of "quintessential quality," they are unreal, although the quality that they attempt to measure is real. What this perspective misses is the fact that impressions of quality, if widely shared, have an imposing reality of their own and therefore are worth knowing about in their own right. After all, these perceptions govern a large-scale system of traffic around the nation's graduate institutions--for example, when undergraduate students seek the advice of professors concerning graduate programs that they might attend. It is possible that some professors put in this position disqualify themselves on grounds that they are not well informed about the relative merits of the programs being considered. Most faculty members, however, surely attempt to be helpful on the basis of impressions gleaned from their professional experience, and these assessments are likely to have major impact on student decision-making. In short, the impressions are real and have very real effects not only on students shopping for graduate schools but also on other flows, such as job-seeking young faculty and the distribution of research resources. At the very least, the survey results provide a snapshot of these impressions from discipline to discipline. Although these impressions may be far from ideally informed, they certainly show a strong degree of consensus within each discipline, and it seems safe to assume that they are more than passingly related to what a majority of keen observers might agree program quality is all about.

COMPARISON WITH RESULTS OF THE ROOSE-ANDERSEN STUDY

An analysis of the response to the committee's survey would not be complete without comparing the results with those obtained in the survey by Roose and Andersen 12 years earlier. Although there are obvious similarities in the two surveys, there are also some important differences that should be kept in mind in examining individual program ratings of the scholarly quality of faculty. Already mentioned in this chapter is the inclusion, on the form sent to 90 percent of the sample members in the committee's survey, of the names and academic ranks of faculty and the numbers of doctoral graduates in the previous

five years. Other significant changes in the committee's form are the identification of the university department or academic unit in which each program may be found, the restriction of requesting evaluators to make judgments about no more than 50 research-doctorate programs in their discipline, and the presentation of these programs in random sequence on each survey form. The sampling frames used in the two surveys also differ. The sample selected in the earlier study included only individuals who had been nominated by the participating universities, while more than one-fourth of the sample in the committee's survey were chosen at random from full faculty lists. (Except for this difference the samples were quite similar--i.e., in terms of number of evaluators in each discipline and the fraction of senior scholars.15)

Several dissimilarities in the coverage of the Roose-Andersen and this committee's reputational assessments should be mentioned. The former included a total of 130 institutions that had awarded at least 100 doctoral degrees in two or more disciplines during the FY1958-67 period. The institutional coverage in the committee's assessment was based on the number of doctorates awarded in each discipline (as described in Chapter I) and covered a total population of 228 universities. Most of the universities represented in the present study but not the earlier one are institutions that offered research-doctorate programs in a limited set of disciplines. In the Roose-Andersen study, programs in five mathematical and physical science disciplines were rated: astronomy, chemistry, geology, mathematics, and physics. In the committee's assessment, two disciplines were added to this list--computer sciences and statistics/biostatistics--and programs in astronomy were not evaluated (for reasons explained in Chapter I). Finally, in the Roose-Andersen study only one set of ratings was compiled from each institution represented in a discipline, whereas in the committee's survey, separate ratings were requested if a university offered more than one research-doctorate program in a given discipline. The consequences of these differences in survey coverage are quite apparent: in the committee's survey, evaluations were requested for a total of 593 research-doctorate programs in the mathematical and physical sciences, compared with 444 programs in the Roose-Andersen study.

Figures 9.1-9.4 plot the mean ratings of scholarly quality of faculty in programs included in both surveys; sets of ratings are graphed for 103 programs in chemistry, 57 in geosciences, 86 in mathematics, and 90 in physics. Since in the Roose-Andersen study programs were identified by institution and discipline (but not by department), the matching of results from this survey with those from the committee's survey is not precise.

15 For a description of the sample group used in the earlier study, see Roose and Andersen, pp. 28-31.
16 It should be emphasized that the committee's assessment of geoscience programs encompasses--in addition to geology--geochemistry, geophysics, and other earth sciences.
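Figures 9.1-9.4 rest on matching each program in the present survey to the corresponding institution-discipline entry in the Roose-Andersen study and correlating the two sets of mean faculty ratings. A sketch of that matching and correlation is given below; the table names, columns, and merge keys are assumptions for illustration, not the committee's actual data sets.

```python
import pandas as pd

# Hypothetical program-level tables: mean faculty rating (measure 08)
# from the present survey, and the 1970 Roose-Andersen rating.
current = pd.DataFrame({
    "institution": ["U1", "U2", "U3"],
    "discipline":  ["Chemistry", "Chemistry", "Physics"],
    "measure_08":  [4.2, 2.9, 3.6],
})
roose_andersen = pd.DataFrame({
    "institution": ["U1", "U2", "U3"],
    "discipline":  ["Chemistry", "Chemistry", "Physics"],
    "ra_rating":   [4.4, 2.7, 3.5],
})

# Match on institution and discipline (the earlier study did not
# identify departments), then correlate within each discipline.
matched = current.merge(roose_andersen, on=["institution", "discipline"])
for disc, grp in matched.groupby("discipline"):
    r = grp["measure_08"].corr(grp["ra_rating"])
    print(disc, len(grp), round(r, 2) if len(grp) > 1 else "n/a")
```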

FIGURE 9.1 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--103 programs in chemistry. [Scatter plot: measure 08 on the vertical axis against the Roose-Andersen rating (1970) on the horizontal axis, each scaled from 0.0 to 5.0.]

FIGURE 9.2 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--57 programs in geosciences. [Scatter plot: measure 08 against the Roose-Andersen rating (1970), each axis from 0.0 to 5.0; r = .85.]

FIGURE 9.3 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--86 programs in mathematics. [Scatter plot: measure 08 against the Roose-Andersen rating (1970), each axis from 0.0 to 5.0; r = .94.]

FIGURE 9.4 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--90 programs in physics. [Scatter plot: measure 08 against the Roose-Andersen rating (1970), each axis from 0.0 to 5.0; r = .96.]

For universities represented in the latter survey by more than one program in a particular discipline, the mean rating for the program with the largest number of graduates (measure 02) is the only one plotted here. Although the results of both surveys are reported on identical scales, some caution must be taken in interpreting differences in the mean ratings a program received in the two evaluations. It is impossible to estimate what effect all of the differences described above may have had on the results of the two surveys. Furthermore, one must remember that the reported scores are based on the opinions of different groups of faculty members and were provided at different time periods. In 1969, when the Roose-Andersen survey was conducted, graduate departments in most universities were still expanding and not facing the enrollment and budget reductions that many departments have had to deal with in recent years. Consequently, a comparison of the overall findings from the two surveys reveals nothing about how much the quality of graduate education has improved (or declined) in the past decade. Nor should the reader place much stock in any small differences in the mean ratings that a particular program may have received in the two surveys.

On the other hand, it is of particular interest to note the high correlations between the results of the evaluations. For programs in chemistry, mathematics, and physics the correlation coefficients range between .93 and .96; in the geosciences the coefficient is .85. The lower coefficient in geosciences may be explained, in part, by the difference, described in footnote 16, in the field coverage of the two surveys. The extraordinarily high correlations found in chemistry, mathematics, and physics may suggest to some readers that reputational standings of programs in these disciplines have changed very little in the last decade. However, differences are apparent for some institutions. Also, one must keep in mind that the correlations are based on the reputational ratings of only three-fourths of the programs evaluated in this assessment in these disciplines and do not take into account the emergence of many new programs that did not exist or were too small to be rated in the Roose-Andersen study.

FUTURE STUDIES

One of the most important objectives in undertaking this assessment was to test new measures not used extensively in past evaluations of graduate programs. Although the committee believes that it has been successful in this effort, much more needs to be done. First and foremost, studies of this kind should be extended to cover other types of programs and other disciplines not included in this effort. As a consequence of budget limitations, the committee had to restrict its study to 32 disciplines, selected on the basis of the number of doctorates awarded in each. Among those omitted were programs in astronomy, which was included in the Roose-Andersen study; a multidimensional assessment of research-doctorate programs in this and many other important disciplines would be of value. Consideration should also be given to embarking on evaluations of programs offering other types of graduate and professional degrees. As a matter of

fact, plans for including master's-degree programs in this assessment were originally contemplated, but because of a lack of available information about the resources and graduates of programs at the master's level, it was decided to focus on programs leading to the research doctorate.

Perhaps the most debated issue the committee has had to address concerned which measures should be reported in this assessment. In fact, there is still disagreement among some of its members about the relative merits of certain measures, and the committee fully recognizes a need for more reliable and valid indices of the quality of graduate programs. First on a list of needs is more precise and meaningful information about the product of research-doctorate programs--the graduates. For example, what fraction of the program graduates have gone on to be productive investigators--either in the academic setting or in government and industrial laboratories? What fraction have gone on to become outstanding investigators--as measured by receipt of major prizes, membership in academies, and other such distinctions? How do program graduates compare with regard to their publication records? Also desired might be measures of the quality of the students applying for admittance to a graduate program (e.g., Graduate Record Examination scores, undergraduate grade point averages). If reliable data of this sort were made available, they might provide a useful index of the reputational standings of programs, from the perspective of graduate students.

A number of alternative measures relevant to the quality of program faculty were considered by the committee but not included in the assessment because of the associated difficulties and costs of compiling the necessary data. For example, what fraction of the program faculty were invited to present papers at national meetings? What fraction had been elected to prestigious organizations/groups in their field? What fraction had received senior fellowships and other awards of distinction? In addition, it would be highly desirable to supplement the data presented on NSF, NIH, and ADAMHA research grant awards (measure 13) with data on awards from other federal agencies (e.g., Department of Defense, Department of Energy, National Aeronautics and Space Administration) as well as from major private foundations.

As described in the preceding pages, the committee was able to make several changes in the survey design and procedures, but further improvements could be made. Of highest priority in this regard is the expansion of the survey sample to include evaluators from outside the academic setting (in particular, those in government and industrial laboratories who regularly employ graduates of the programs to be evaluated). To add evaluators from these sectors would require a major effort in identifying the survey population from which a sample could be selected. Although such an effort is likely to involve considerable costs in both time and financial resources, the committee believes that the addition of evaluators from the government and industrial settings would be of value in providing a different perspective to the reputational assessment and that comparisons between the ratings supplied by academic and nonacademic evaluators would be of particular interest.
