

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




IX Summary and Discussion

In the six preceding chapters results are presented of the assessment of 616 research-doctorate programs in biochemistry, botany, cellular/molecular biology, microbiology, physiology, and zoology. Included in each chapter are summary data describing the means and intercorrelations of the program measures in a particular discipline. In this chapter a comparison is made of the summary data reported in the six disciplines. Also presented here are an analysis of the reliability (consistency) of the reputational survey ratings and an examination of some factors that might possibly have influenced the survey results. This chapter concludes with suggestions for improving studies of this kind--with particular attention given to the types of measures one would like to have available for an assessment of research-doctorate programs.

This chapter necessarily involves a detailed discussion of various statistics (means, standard deviations, correlation coefficients) describing the measures. Throughout, the reader should bear in mind that all these statistics and measures are necessarily imperfect attempts to describe the real quality of research-doctorate programs. Quality and some differences in quality are real, but these differences cannot be subsumed completely under any one quantitative measure. For example, no single numerical ranking--by measure 08 or by any weighted average of measures--can rank the quality of different programs with precision. However, the evidence for reliability indicates considerable stability in the assessment of quality. For instance, a program that comes out in the first decile of a ranking is quite unlikely to "really" belong in the third decile, or vice versa.
If numerical ranks of programs were replaced by groupings (distinguished, strong, etc.), these groupings again would not fully capture actual differences in quality since there would likely be substantial ambiguity about the borderline between adjacent groups. Furthermore, any attempt at linear ordering (best, next best, . . .) may also be inaccurate. Programs of roughly comparable quality may be better in different ways, so that there simply is no one best program--as will also be indicated in some analyses. However, these difficulties of formulating numerical ranks should not hide the underlying reality of differences in quality or the importance of high quality for effective doctoral education.

SUMMARY OF THE RESULTS

Displayed in Table 9.1 are the numbers of programs evaluated (bottom line) and the mean values for each measure in the six biological science disciplines.1 As can be seen, the mean values reported for individual measures vary considerably among disciplines. The pattern of means on each measure is summarized below, but the reader interested in a detailed comparison of the distribution of a measure should refer to the second table in each of the preceding chapters.2

Program Size (Measures 01-03). Based on the information provided to the committee by the study coordinator at each university, cellular/molecular biology programs had, on the average, the largest number of faculty members (26 in December 1980), followed by zoology (21). Zoology programs graduated the most students (26 Ph.D. recipients in the FY1975-79 period), and cellular/molecular programs had the largest enrollment (34 doctoral students in December 1980). In contrast, physiology programs were reported to have an average of 19 faculty members, 14 graduates, and 16 doctoral students.

Program Graduates (Measures 04-07). The mean fraction of FY1975-79 doctoral recipients who as graduate students had received some national fellowship or training grant support (measure 04) ranges from .21 for graduates of botany programs to .59 for graduates in cellular/molecular biology. The relatively high fraction of support in cellular/molecular biology, biochemistry, microbiology, and physiology may be explained by the availability of National Institutes of Health (NIH) training grant support for graduate students in these disciplines. With respect to the median number of years from first enrollment in a graduate program to receipt of the doctorate (measure 05), graduates in biochemistry, cellular/molecular biology, microbiology, and physiology typically earned their degrees approximately a year sooner than graduates in zoology.
In terms of employment status at graduation (measure 06), an average of 80 percent or more of the Ph.D. recipients in these same four disciplines reported that they had made firm job commitments by the time they had completed requirements for their degrees, contrasted with 65 percent of the program graduates in zoology and 67 percent in botany. A mean of 56 percent (or higher) of the program graduates in biochemistry, cellular/molecular biology, and physiology reported that they had made firm commitments to take positions in Ph.D.-granting institutions (measure 07), while only 30 percent of those in zoology and 32 percent in botany had made such plans.

1 Means for measure 16, publication "influence," are omitted since arbitrary scaling of this measure prevents meaningful comparisons across disciplines.
2 The second table in each of the six earlier chapters presents the standard deviation and decile values for each measure.

TABLE 9.1 Mean Values for Each Program Measure, by Discipline

                      Bio-             Cell./Molec.  Micro-   Physi-
                      chemistry Botany Biology       biology  ology   Zoology
Program Size
  01                     19      19        26          16       19      21
  02                     20      19        23          16       14      26
  03                     25      20        34          20       16      33
Program Graduates
  04                    .47     .21       .59         .48      .53     .33
  05                    6.0     6.5       6.1         6.1      6.2     7.1
  06                    .81     .67       .80         .80      .80     .65
  07                    .56     .32       .56         .48      .57     .30
Survey Results
  08                    2.6     3.0       2.9         2.8      3.0     2.7
  09                    1.7     1.8       1.9         1.8      1.9     1.7
  10                    1.2     1.2       1.2         1.2      1.2     1.2
  11                     .7      .7        .7          .6       .6      .7
University Library
  12                     .2      .3        .3          .3       .3      .3
Research Support
  13                    .63     .28       .64         .45      .52     .36
  14                   8419    8406     10243        8449     8687    8806
Publication Records
  15                     92      60       133          46       17      16
Total Programs          139      83        89         134      101      70
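The means in Table 9.1 are straightforward per-discipline averages over the programs evaluated. As a sketch of the arithmetic only--the program-level records below are invented for illustration, not the study's actual data:

```python
# Sketch: per-discipline means of a program measure, as in Table 9.1.
# The program records here are hypothetical.

def discipline_means(programs, measure):
    """Average a measure over all programs in each discipline."""
    by_discipline = {}
    for p in programs:
        by_discipline.setdefault(p["discipline"], []).append(p[measure])
    return {d: round(sum(v) / len(v), 2) for d, v in by_discipline.items()}

programs = [
    {"discipline": "botany",  "faculty": 18},
    {"discipline": "botany",  "faculty": 20},
    {"discipline": "zoology", "faculty": 22},
    {"discipline": "zoology", "faculty": 20},
]

print(discipline_means(programs, "faculty"))
# {'botany': 19.0, 'zoology': 21.0}
```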

Survey Results (Measures 08-11). Differences in the mean ratings derived from the reputational survey are small. In all six disciplines the mean rating of scholarly quality of program faculty (measure 08) is at or slightly below 3.0 ("good"), and programs were judged to be, on the average, "reasonably" (2.0) to "moderately" (1.0) effective in educating research scholars/scientists (measure 09). In the opinions of the survey respondents, there has been "slight improvement" (an average of 1.2 in each discipline on measure 10) in the last five years in the overall average quality of programs. The mean rating of an evaluator's familiarity with the work of program faculty (measure 11) falls below 1.0 ("some familiarity") in every discipline--about which more will be said later in this chapter.

The reader should be reminded that the distribution of ratings may vary from one discipline to another. If one examines, for example, the top program ratings recorded for measure 08 in each discipline, one finds noticeably higher top ratings in biochemistry (two programs with ratings above 4.8) and cellular/molecular biology (three programs with ratings above 4.7) than in either botany or physiology (no programs with ratings above 4.5). The study committee does not have an explanation of this observation but wishes to emphasize that many differences may be found in the distributions of survey ratings in the various disciplines and that the determinants of these differences are not known. As discussed in Chapter II, the survey ratings reflect each program's standing relative to other programs in the same discipline and provide no basis for making comparisons across disciplines.

University Library (Measure 12). Measure 12, based on a composite index of the sizes of the library in the university in which a program resides,3 is calculated on a scale from -2.0 to 3.0, with means ranging from .2 in biochemistry to .3 in the other five disciplines.
As mentioned earlier in this report, these data are not available for many of the smaller universities. Were the program coverage complete for this measure, it is likely that the reported means would be significantly lower.

Research Support (Measures 13-14). Measure 13, the proportion of program faculty who had received NIH, NSF, or ADAMHA4 research grant awards during the FY1978-80 period, has mean values ranging from as high as .64 and .63 in cellular/molecular biology and biochemistry, respectively, to .28 in botany. It should be emphasized that this measure does not take into account research support that faculty members have received from sources other than these three federal agencies. In terms of total university expenditures for R&D in the biological sciences (measure 14), the mean values are reported to range from $8,406,000 in botany to $10,243,000 in cellular/molecular biology. In considering this measure it must be remembered that it reflects the overall university expenditures in the biological sciences and not expenditures in individual disciplines within the biological sciences.

3 The index, derived by the Association of Research Libraries, reflects a number of different measures, including number of volumes, fiscal expenditures, and other factors relevant to the size of a university library. See the description of this measure presented in Appendix D.
4 National Institutes of Health; National Science Foundation; and Alcohol, Drug Abuse, and Mental Health Administration.

Publication Records (Measures 15 and 16). Considerable diversity is also found in the mean number of articles associated with a research-doctorate program (measure 15). An average of 133 articles published in the 1978-79 period is reported for programs in cellular/molecular biology and 92 articles for programs in biochemistry; in physiology and zoology the mean number of articles is fewer than 20. These large differences reflect several factors, including the program size in a particular discipline (i.e., the total number of faculty and other staff members involved in research), the frequency with which scientists in that discipline publish, and the length of a typical paper in a discipline. Another important factor is the fact that journals in biochemistry and cellular/molecular biology far outnumber those in the other four biological disciplines. (As described in Chapter II, data on published articles were identified by field of journal--not by matching the names of program faculty with authors' names.) Mean scores are not reported on measure 16, the estimated "overall influence" of the articles attributed to a program. Since this measure is calculated from an average of journal influence weights,5 normalized for the journals covered in a particular discipline, mean differences between disciplines are uninterpretable.
CORRELATIONS AMONG MEASURES

Relations among the program measures are of intrinsic interest and are relevant to the issue of validity of the measures as indices of the quality of a research-doctorate program. Measures that are logically related to program quality are expected to be related to each other. To the extent that they are, a stronger case might be made for the validity of each as a quality measure.

A reasonable index of the relationship between any two measures is the Pearson product-moment correlation coefficient. A table of correlation coefficients of all possible pairs of measures is presented in each of the six preceding chapters. This chapter presents selected correlations to determine the extent to which coefficients are comparable in the six disciplines. Special attention is given to the correlations involving the number of FY1975-79 program graduates (measure 02), survey rating of the scholarly quality of program faculty (measure

5 See Appendix F for a description of the derivation of this measure.

08), university R&D expenditures in a particular discipline (measure 14), and the influence-weighted number of publications (measure 16). These four measures have been selected because of their relatively high correlations with several other measures. Readers interested in correlations other than those presented in Tables 9.2-9.5 may refer to the third table in each of the preceding six chapters.

Correlations with Measure 02. Table 9.2 presents the correlations of measure 02 with each of the other measures used in the assessment. As might be expected, correlations of this measure with the other two measures of program size--the number of faculty (01) and doctoral student enrollment (03)--are moderately high in all six disciplines. Of greater interest are the strong positive correlations between measure 02 and measures derived from the reputational survey ratings (08, 09, and 11). In biochemistry and zoology these coefficients are all above .60; in cellular/molecular biology, microbiology, and physiology most of the coefficients are above .40. In botany the correlations are somewhat lower. It is quite apparent that the programs that received high survey ratings and with which evaluators were more likely to be familiar were also ones that had larger numbers of graduates. Although the committee gave serious consideration to presenting an alternative set of survey measures that were adjusted for program size, a satisfactory algorithm for making such an adjustment was not found. In attempting such an adjustment on the basis of the regression of survey ratings on measures of program size, it was found that some exceptionally large programs appeared to be unfairly penalized and that some very small programs received unjustifiably high adjusted scores.
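A size adjustment of the kind described above can be sketched as an ordinary least-squares regression of survey ratings on a size measure, with the residual serving as the "adjusted" rating. The data below are hypothetical, and the committee's actual algorithm is not specified beyond the description in the text; this is only a minimal illustration of the idea:

```python
# Sketch of a regression-based size adjustment: regress survey rating
# (measure 08) on faculty size (measure 01) and treat the residual as
# the "adjusted" rating. All data here are hypothetical.

def ols_fit(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

faculty = [10, 15, 20, 25, 40]          # hypothetical measure 01 values
rating  = [2.0, 2.5, 3.0, 3.5, 4.5]     # hypothetical measure 08 values

a, b = ols_fit(faculty, rating)
# Residuals sum to zero by construction; a large program whose rating
# sits on the fitted line gets an adjusted score near zero.
residuals = [round(yi - (a + b * xi), 2) for xi, yi in zip(faculty, rating)]
print(residuals)
```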
Measure 02 also has positive correlations in most disciplines with measure 12, an index of university library size, with measures 13 and 14, which pertain to the level of support for research in a program, and with measures 15 and 16, which reflect publication productivity. Of particular note are the moderately large coefficients--in disciplines other than botany and physiology--for the latter two measures. The relation of the number of program graduates and the publication records of that program is especially strong in biochemistry and zoology. The correlations of measure 02 with measures 04, 05, 06, and 07 are below .20 in all disciplines except biochemistry and zoology. From this analysis it is apparent that the number of program graduates tends to be positively correlated with all other variables except those pertaining to recent program graduates (04-07). It is also apparent that the relationship of measure 02 with the other variables tends to be strongest for programs in biochemistry and zoology.

Correlations with Measure 08. Table 9.3 shows the correlation coefficients for measure 08, the mean rating of the scholarly quality of program faculty, with each of the other variables. The correlations of measure 08 with measures of program size (01, 02, and 03) are typically greater than .40. Not surprisingly, the larger the program, the more likely its faculty is to be rated high in quality.

Correlations of measure 08 with measure 04, the fraction of students with national fellowship or traineeship awards, are .47 or

TABLE 9.2 Correlations of the Number of Program Graduates (Measure 02) with Other Measures, by Discipline

                      Bio-             Cell./Molec.  Micro-   Physi-
                      chemistry Botany Biology       biology  ology   Zoology
Program Size
  01                    .48     .42       .54         .61      .43     .66
  03                    .72     .55       .83         .80      .54     .78
Program Graduates
  04                    .47    -.14       .16         .18     -.09     .27
  05                    .16     .04      -.11         .02      .19     .25
  06                    .23    -.06       .12         .07      .14     .19
  07                    .27    -.07      -.02         .12      .15     .44
Survey Results
  08                    .63     .29       .42         .48      .46     .66
  09                    .67     .34       .47         .55      .51     .68
  10                    .14     .11       .09         .19      .19     .09
  11                    .63     .27       .40         .58      .32     .62
University Library
  12                    .46    -.01       .19         .27      .25     .60
Research Support
  13                    .35     .03       .18         .20      .04     .50
  14                    .46     .09       .02         .18      .30     .42
Publication Records
  15                    .66     .23       .32         .44      .16     .56
  16                    .65     .24       .34         .29      .17     .59
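The coefficients in Table 9.2 and the tables that follow are Pearson product-moment correlations. A minimal sketch of the computation, on invented data:

```python
# Pearson product-moment correlation coefficient, computed directly
# from its definition: covariance divided by the product of the
# standard deviations. Data below are illustrative only.
import math

def pearson_r(x, y):
    """Pearson r for two equal-length lists of numbers."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear data give r = 1.0
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 2))  # 1.0
```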

TABLE 9.3 Correlations of the Survey Ratings of Scholarly Quality of Program Faculty (Measure 08) with Other Measures, by Discipline

                      Bio-             Cell./Molec.  Micro-   Physi-
                      chemistry Botany Biology       biology  ology   Zoology
Program Size
  01                    .58     .56       .39         .50      .64     .53
  02                    .63     .29       .42         .48      .46     .66
  03                    .60     .51       .43         .42      .42     .52
Program Graduates
  04                    .70     .31       .58         .57      .47     .58
  05                    .15     .26       .38         .08     -.11     .39
  06                    .24     .36       .33         .23      .27     .25
  07                    .35     .59       .41         .38      .30     .63
Survey Results
  09                    .96     .97       .96         .96      .95     .98
  10                    .21     .29       .33         .46      .38     .19
  11                    .96     .83       .94         .91      .89     .95
University Library
  12                    .63     .66       .47         .54      .49     .78
Research Support
  13                    .62     .49       .58         .64      .57     .72
  14                    .69     .62       .57         .68      .51     .65
Publication Records
  15                    .83     .60       .69         .72      .69     .59
  16                    .83     .62       .71         .75      .71     .64

greater in all disciplines except botany. For programs in the mathematical and physical sciences and in engineering, the corresponding coefficients (reported in earlier volumes) are found to be considerably lower, typically in the range of .10 to .30. In the biological sciences (especially in the biomedical fields), there is a far greater reliance on training grant and fellowship support, and fewer graduate students are supported by research assistantships or teaching assistantships. Correlations of rated faculty quality with measure 05, the shortness of time from matriculation in graduate school to award of the doctorate, are small but positive for programs in botany, cellular/molecular biology, and zoology and close to zero for programs in the other three disciplines.

Correlations of ratings of faculty quality with measure 06, the fraction of program graduates with definite employment plans, range between .23 and .36 in the six biological disciplines. In every discipline the correlation of measure 08 is higher with measure 07, the fraction of graduates having agreed to employment at a Ph.D.-granting institution. These coefficients are approximately .60 in botany and zoology and .30 or above in the other four disciplines. Thus, those programs with the larger fractions of graduates intending to take academic positions tended to receive higher survey ratings.

The correlations of measure 08 with measure 09, the rated effectiveness of doctoral education, are uniformly very high, at or above .95 in every discipline. This finding is consistent with results from the Cartter and Roose-Andersen studies.6 The coefficients describing the relationship between measure 08 and measure 11, the familiarity with the work of program faculty, are also very high, ranging from .83 to .96. In general, evaluators were more likely to have high regard for the quality of faculty in those programs with which they were most familiar.
That the correlation coefficients are as large as observed may simply reflect the fact that "known" programs tend to be those that have earned strong reputations.

Correlations of ratings of faculty quality with measure 10, the ratings of perceived improvement in program quality, are much smaller--ranging from .19 in zoology to .46 in microbiology. One might have expected that a program judged to have improved in quality would have been somewhat more likely to receive high ratings on measure 08 than would a program judged to have declined--thereby imposing a small positive correlation between these two variables.

Moderately high correlations are observed in most disciplines between measure 08 and university library size (measure 12), support for research (measures 13 and 14), and publication records (measures 15 and 16). With few exceptions these coefficients are .50 or greater in all disciplines. Of particular note are the strong correlations with the two publication measures--as high as .83 in biochemistry. In all disciplines the correlations with measure 16 are as high as or slightly higher than those with measure 15; the "weighted influence" of journals

6 Roose and Andersen, p. 19.

in which articles are published yields an index that tends to relate more closely to faculty reputation than does an unadjusted count of the number of articles published. Although the observed differences between the coefficients for measures 15 and 16 are not large, this result is consistent with earlier findings of Anderson et al.7

Correlations with Measure 14. Correlations of measure 14, the reported dollars of support for R&D, with other measures are shown in Table 9.4. The reader is reminded that this measure reflects total university expenditures in the biological sciences and not expenditures in the six separate biological science disciplines. The pattern of relations is quite similar for programs in all six disciplines: moderately high correlations with reputational survey results (except measure 10), university library size, and publication measures. Measure 14 is also positively correlated with measures of program size (01-03), the fraction of recent graduates with fellowship/traineeship support (04), the fraction with definite commitments for employment in Ph.D.-granting universities (07), and the fraction of program faculty with federal research grants (13). In interpreting these relationships one must keep in mind the fact that the research expenditure data have not been adjusted for the number of faculty and other staff members involved in research in a program.

Correlations with Measure 16. Measure 16 is the number of published articles attributed to a program and adjusted for the "average influence" of the journals in which the articles appear. The correlations of this measure with all others appear in Table 9.5. Of particular interest are the high correlations with the reputational survey results (excluding measure 10). All of those coefficients exceed .60, and for biochemistry programs the coefficients are approximately .80.
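The construction of an influence-weighted article count of the kind measure 16 represents can be illustrated schematically. The journal names and weights below are invented, and the actual derivation (including the within-discipline normalization) is described in Appendix F; this sketch only shows the idea of counting each article by its journal's weight rather than as 1:

```python
# Schematic influence-weighted publication count: each article
# contributes its journal's influence weight instead of 1.
# Journals and weights here are hypothetical.

def weighted_count(articles, journal_weights):
    """Sum of journal influence weights over a program's articles."""
    return round(sum(journal_weights[j] for j in articles), 2)

# Invented weights, imagined as already normalized within a discipline
journal_weights = {"J. Hypothetical Biochem.": 1.4, "Ann. Made-Up Biol.": 0.6}

articles = ["J. Hypothetical Biochem.",
            "J. Hypothetical Biochem.",
            "Ann. Made-Up Biol."]

print(weighted_count(articles, journal_weights))  # 3.4 (vs. a raw count of 3)
```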
Moderately high correlations are also observed between measure 16 and measures 12, 13, and 14; with few exceptions these correlations are .40 or higher. It should be pointed out that the exceptionally large coefficients reported for measure 15 result from the fact that the two publication measures are logically as well as empirically interdependent.

Despite the appreciable correlations between reputational ratings of quality and program size measures, the functional relations between the two probably are complex. If there is a minimum size for a high-quality program, this size is likely to vary from discipline to discipline. Increases in size beyond the minimum may represent more high-quality faculty, or a greater proportion of inactive faculty, or a faculty with heavy teaching responsibilities. In attempting to select among these alternative interpretations, a single correlation coefficient provides insufficient guidance. Nonetheless, certain similarities across disciplines may be seen in the correlations among the measures. High correlations consistently appear among measures 08, 09, and 11 from the reputational survey, and these measures also are prom-

7 Anderson, Narin, and McAllister, p. 95.

TABLE 9.4 Correlations of the University Research Expenditures in a Discipline (Measure 14) with Other Measures, by Discipline

                      Bio-             Cell./Molec.  Micro-   Physi-
                      chemistry Botany Biology       biology  ology   Zoology
Program Size
  01                    .33     .31       .16         .24      .36     .20
  02                    .46     .09       .02         .18      .30     .42
  03                    .27     .17       .02         .08      .37     .28
Program Graduates
  04                    .50     .20       .30         .43      .31     .21
  05                    .03     .12       .25         .05     -.16     .08
  06                    .12     .04       .18         .12      .14    -.04
  07                    .29     .28       .20         .27      .23     .19
Survey Results
  08                    .69     .62       .57         .68      .51     .65
  09                    .64     .61       .47         .65      .51     .63
  10                    .03    -.07      -.16         .12     -.01    -.16
  11                    .69     .48       .58         .50      .46     .62
University Library
  12                    .56     .55       .42         .48      .50     .63
Research Support
  13                    .30     .36       .24         .29      .28     .37
Publication Records
  15                    .71     .47       .58         .73      .43     .43
  16                    .71     .52       .59         .78      .48     .46

TABLE 9.15 Mean Ratings of Scholarly Quality of Program Faculty, by School in Which Evaluator's Own Program Is Located

                                  MEAN RATINGS             CORRELATION
                             Medical School   Other          r      N
Biochemistry                      2.61         2.67         .96    138
Cellular/Molecular Biology        2.80         2.91         .94     88
Microbiology                      2.89         2.77         .93    134
Physiology                        2.87         3.01         .83    101

NOTE: N reported in last column represents the number of programs with a rating from at least one evaluator in each of the two groups.

INTERPRETATION OF REPUTATIONAL SURVEY RATINGS

It is not hard to foresee that results from this survey will receive considerable attention through enthusiastic and uncritical reporting in some quarters and sharp castigation in others. The study committee understands the grounds for both sides of this polarized response but finds that both tend to be excessive. It is important to make clear how we view these ratings as fitting into the larger study of which they are a part.

The reputational results are likely to receive a disproportionate degree of attention for several reasons, including the fact that they reflect the opinions of a large group of faculty colleagues and that they form a bridge with earlier studies of graduate programs. But the results will also receive emphasis because they alone, among all of the measures, seem to address quality in an overall or global fashion. While most recognize that "objective" program characteristics (i.e., publication productivity, research funding, or library size) have some bearing on program quality, probably no one would contend that a single one of these measures encompasses all that need be known about the quality of research-doctorate programs. Each is obviously no more than an indicator of some aspect of program quality. In contrast, the reputational ratings are global from the start because the respondents are asked to take into account many objective characteristics and to arrive at a general assessment of the quality of the faculty and the effectiveness of the program.
This generality has self-evident appeal. On the other hand, it is wise to keep in mind that these reputational ratings are measures of perceived program quality rather than of "quality" in some ideal or absolute sense. What this means is that, just as for all of the more objective measures, the reputational ratings represent only a partial view of what most of us would consider quality to be; hence, they must be kept in careful perspective.

Some critics may argue that such ratings are positively misleading because of a variety of methodological artifacts or because they are supplied by "judges" who often know very little about the programs they are rating. The committee has conducted the survey in a way that permits the empirical examination of a number of the alleged artifacts and, although our analysis is by no means exhaustive, the general conclusion is that their effects are slight.

Among the criticisms of reputational ratings from prior studies are some that represent a perspective that may be misguided. This perspective assumes that one asks for ratings in order to find out what "quality" really is and that to the degree that the ratings miss the mark of "quintessential quality," they are unreal, although the quality that they attempt to measure is real. What this perspective misses is the fact that impressions of quality, if widely shared, have an imposing reality of their own and therefore are worth knowing about in their own right. After all, these perceptions govern a large-scale system of traffic around the nation's graduate institutions--for example, when undergraduate students seek the advice of professors concerning graduate programs that they might attend. It is possible that some professors put in this position disqualify themselves on grounds that they are not well informed about the relative merits of the programs being considered. Most faculty members, however, surely attempt to be helpful on the basis of impressions gleaned from their professional experience, and these assessments are likely to have major impact on student decision-making. In short, the impressions are real and have very real effects not only on students shopping for graduate schools but also on other flows, such as job-seeking young faculty and the distribution of research resources.
At the very least, the survey results provide a snapshot of these impressions from discipline to discipline. Although these impressions may be far from ideally informed, they certainly show a strong degree of consensus within each discipline, and it seems safe to assume that they are more than passingly related to what a majority of keen observers might agree program quality is all about.

COMPARISON WITH RESULTS OF THE ROOSE-ANDERSEN STUDY

An analysis of the response to the committee's survey would not be complete without comparing the results with those obtained in the survey by Roose and Andersen 12 years earlier. Although there are obvious similarities in the two surveys, there are also some important differences that should be kept in mind in examining individual program ratings of the scholarly quality of faculty. Already mentioned in this chapter is the inclusion, on the form sent to 90 percent of the sample members in the committee's survey, of the names and academic ranks of faculty and the numbers of doctoral graduates in the previous five years. Other significant changes in the committee's form are the identification of the university department or academic unit in which each program may be found, the restriction of requesting evaluators to

make judgments about no more than 50 research-doctorate programs in their discipline, and the presentation of these programs in random sequence on each survey form. The sampling frames used in the two surveys also differ. The sample selected in the earlier study included only individuals who had been nominated by the participating universities, while more than one-fourth of the sample in the committee's survey were chosen at random from full faculty lists. (Except for this difference the samples were quite similar--i.e., in terms of the number of evaluators in each discipline and the fraction of senior scholars.5)

Several dissimilarities in the coverage of the Roose-Andersen study and this committee's reputational assessments should be mentioned. The former included a total of 130 institutions that had awarded at least 100 doctoral degrees in two or more disciplines during the FY1958-67 period. The institutional coverage in the committee's assessment was based on the number of doctorates awarded in each discipline (as described in Chapter I) and covered a total population of 228 universities. Most of the universities represented in the present study but not the earlier one are institutions that offered research-doctorate programs in a limited set of disciplines. In the Roose-Andersen study, programs in 10 biological science disciplines were rated: biochemistry, botany, developmental biology, entomology, microbiology, molecular biology, pharmacology, physiology, population biology, and zoology. For reasons explained in Chapter I, the committee in its assessment was able to include programs in the six disciplines with the largest numbers of doctoral awards in recent years.6 Programs in the other four disciplines--developmental biology, entomology, pharmacology, and population biology--were not evaluated in the present assessment.
Finally, in the Roose-Andersen study only one set of ratings was compiled from each institution represented in a discipline, whereas in the committee's survey separate ratings were requested if a university offered more than one research-doctorate program in a given discipline. In the committee's survey, evaluations were requested for a total of 616 research-doctorate programs in the six biological science disciplines, compared with 618 programs in these same disciplines in the Roose-Andersen study. Although the total numbers of programs included in the studies are nearly equal, there are many differences in the program coverage in each discipline. Figures 9.1-9.6 plot the mean ratings of scholarly quality of faculty in programs included in both surveys; sets of ratings are graphed for 90 programs in biochemistry, 52 programs in botany, 53 programs in cellular/molecular biology, 82 programs in microbiology, 63 programs in physiology, and 48 programs in zoology. Since in the Roose-Andersen

15 For a description of the sample group used in the earlier study, see Roose and Andersen, pp. 28-31.
16 It should be noted that the "molecular biology" category used in the Roose-Andersen study was expanded in the committee's assessment to include cellular and molecular biology programs.
FIGURE 9.1 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--90 programs in biochemistry. [Scatter plot of measure 08 against the Roose-Andersen rating (1970); r = .92.]
FIGURE 9.2 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--52 programs in botany. [Scatter plot of measure 08 against the Roose-Andersen rating (1970).]
FIGURE 9.3 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--53 programs in cellular/molecular biology. [Scatter plot of measure 08 against the Roose-Andersen rating (1970); r = .87.]
FIGURE 9.4 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--82 programs in microbiology. [Scatter plot of measure 08 against the Roose-Andersen rating (1970); r = .87.]
FIGURE 9.5 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--63 programs in physiology. [Scatter plot of measure 08 against the Roose-Andersen rating (1970).]
FIGURE 9.6 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--48 programs in zoology. [Scatter plot of measure 08 against the Roose-Andersen rating (1970); r = .83.]
study programs were identified by institution and discipline (but not by department), the matching of results from this survey with those from the committee's survey is not precise. For universities represented in the latter survey by more than one program in a particular discipline, the mean rating for the program with the largest number of graduates (measure 02) is the only one plotted here. Although the results of both surveys are reported on identical scales, some caution must be taken in interpreting differences in the mean ratings a program received in the two evaluations. It is impossible to estimate what effect all of the differences described above may have had on the results of the two surveys. Furthermore, one must remember that the reported scores are based on the opinions of different groups of faculty members and were provided at different times. In 1969, when the Roose-Andersen survey was conducted, graduate departments in most universities were still expanding and not facing the enrollment and budget reductions that many departments have had to deal with in recent years. Consequently, a comparison of the overall findings from the two surveys tells us nothing about how much graduate education has improved (or declined) in the past decade. Nor should the reader place much stock in any small differences in the mean ratings that a particular program may have received in the two surveys. On the other hand, it is of particular interest to note the high correlations between the results of the evaluations. For programs in biochemistry, cellular/molecular biology, and microbiology, the correlation coefficients range between .87 and .92; in botany, physiology, and zoology, the coefficients range between .78 and .83. The high correlations found here may suggest to some readers that the reputational standings of programs in these disciplines have changed very little in the last decade. However, differences are apparent for some institutions.
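The product-moment correlations quoted above can be reproduced from each program's pair of mean faculty ratings. A minimal Python sketch follows; the function is a standard Pearson correlation, and the rating values in it are hypothetical, invented purely for illustration rather than drawn from either survey's data.

```python
# Illustrative sketch: Pearson product-moment correlation between two
# surveys' mean faculty ratings for the same set of programs.
# The rating values below are hypothetical, not the study's actual data.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length rating lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical mean ratings (0-5 scale) for five matched programs
roose_andersen = [4.5, 3.2, 2.8, 4.0, 1.9]   # 1970 survey ratings
measure_08     = [4.6, 3.0, 3.1, 4.2, 2.0]   # committee's measure 08

print(round(pearson_r(roose_andersen, measure_08), 2))
```

Programs unrated in one of the two surveys would simply be dropped from the paired lists, which is why the quoted coefficients reflect only the roughly two-thirds of programs evaluated in both studies.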
Also, one must keep in mind that the correlations are based on the reputational ratings of only approximately two-thirds of the programs evaluated in this assessment in these disciplines and do not take into account the emergence of many new programs that did not exist or were too small to be rated in the Roose-Andersen study.

FUTURE STUDIES

One of the most important objectives in undertaking this assessment was to test new measures not used extensively in past evaluations of graduate programs. Although the committee believes that it has been successful in this effort, much more needs to be done. First and foremost, studies of this kind should be extended to cover other types of programs and other disciplines not included in this effort. As a consequence of budgetary limitations, the committee had to restrict its study to 32 disciplines, selected on the basis of the number of doctorates awarded in each. Among those omitted were programs in developmental biology, entomology, pharmacology, and population biology, all of which were included in the Roose-Andersen study; a multidimensional assessment of research-doctorate programs in these and many other important disciplines would be of value. Consideration should also be given to embarking on evaluations of programs offering other types of
graduate and professional degrees. As a matter of fact, plans for including master's-degree programs in this assessment were originally contemplated, but because of a lack of available information about the resources and graduates of programs at the master's level, it was decided to focus on programs leading to the research doctorate. Perhaps the most debated issue the committee has had to address concerned which measures should be reported in this assessment. In fact, there is still disagreement among some of its members about the relative merits of certain measures, and the committee fully recognizes a need for more reliable and valid indices of the quality of graduate programs. First on a list of needs is more precise and meaningful information about the product of research-doctorate programs--the graduates. For example, what fraction of the program graduates have gone on to be productive investigators--either in the academic setting or in government and industrial laboratories? What fraction have gone on to become outstanding investigators--as measured by receipt of major prizes, membership in academies, and other such distinctions? How do program graduates compare with regard to their publication records? Also desired might be measures of the quality of the students applying for admittance to a graduate program (e.g., Graduate Record Examination scores, undergraduate grade point averages). If reliable data of this sort were made available, they might provide a useful index of the reputational standings of programs from the perspective of graduate students. A number of alternative measures relevant to the quality of program faculty were considered by the committee but not included in the assessment because of the associated difficulties and costs of compiling the necessary data. For example, what fraction of the program faculty were invited to present papers at national meetings?
What fraction had been elected to prestigious organizations/groups in their field? What fraction had received senior fellowships and other awards of distinction? In addition, it would be highly desirable to supplement the data presented on NSF, NIH, and ADAMHA research grant awards (measure 13) with data on awards from other federal agencies (e.g., Department of Agriculture, Department of Defense, Department of Energy, National Aeronautics and Space Administration) as well as from major private foundations. As described in the preceding pages, the committee was able to make several changes in the survey design and procedures, but further improvements could be made. Of highest priority in this regard is the expansion of the survey sample to include evaluators from outside the academic setting (in particular, those in government and industrial laboratories who regularly employ graduates of the programs to be evaluated). To add evaluators from these sectors would require a major effort in identifying the survey population from which a sample could be selected. Although such an effort is likely to involve considerable costs in both time and financial resources, the committee believes that the addition of evaluators from the government and industrial settings would be of value in providing a different perspective on the reputational assessment and that comparisons between the ratings supplied by academic and nonacademic evaluators would be of particular interest.