VII

Summary and Discussion

In the four preceding chapters results are presented of the assessment of 326 research-doctorate programs in chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Included in each chapter are summary data describing the means and intercorrelations of the program measures in a particular discipline. In this chapter a comparison is made of the summary data reported in the four disciplines. Also presented here are an analysis of the reliability (consistency) of the reputational survey ratings and an examination of some factors that might possibly have influenced the survey results. The chapter concludes with suggestions for improving studies of this kind--with particular attention given to the types of measures one would like to have available for an assessment of research-doctorate programs.

This chapter necessarily involves a detailed discussion of various statistics (means, standard deviations, correlation coefficients) describing the measures. Throughout, the reader should bear in mind that all these statistics and measures are necessarily imperfect attempts to describe the real quality of research-doctorate programs. Quality and some differences in quality are real, but these differences cannot be subsumed completely under any one quantitative measure. For example, no single numerical ranking--by measure 08 or by any weighted average of measures--can rank the quality of different programs with precision. However, the evidence for reliability indicates considerable stability in the assessment of quality. For instance, a program that comes out in the first decile of a ranking is quite unlikely to "really" belong in the third decile, or vice versa. If numerical ranks of programs were replaced by groupings (distinguished, strong, etc.), these groupings again would not fully capture actual differences in quality, since there would likely be substantial ambiguity about the borderline between adjacent groups. Furthermore, any attempt at linear ordering (best, next best, . . .) may also be inaccurate. Programs of roughly comparable quality may be better in different ways, so that there simply is no one best--as will also be indicated in some of the numerical analyses. However, these difficulties of formulating ranks should not hide the underlying reality of differences in quality or the importance of high quality for effective doctoral education.

SUMMARY OF THE RESULTS

Displayed in Table 7.1 are the numbers of programs evaluated (bottom line) and the mean values for each measure in the four engineering disciplines.1 As can be seen, the mean values reported for individual measures vary considerably among disciplines. The pattern of means on each measure is summarized below, but the reader interested in a detailed comparison of the distribution of a measure may wish to refer to the second table in each of the four preceding chapters.2

1Means for measure 16, "influence" of publication, are omitted since the arbitrary scaling of this measure prevents meaningful comparisons across disciplines.

2The second table in each of the four preceding chapters presents the standard deviation and decile values for each measure.

Program Size (Measures 01-03). Based on the information provided to the committee by the study coordinator at each university, electrical engineering programs had, on the average, the largest number of faculty members (23 in December 1980), followed by civil (20) and mechanical engineering (20). Electrical engineering programs also graduated the most students (32 Ph.D. recipients in the FY1975-79 period) and had the largest enrollment (49 doctoral students in December 1980). In contrast, chemical engineering programs were reported to have an average of only 12 faculty members, 18 graduates, and 24 doctoral students.

Program Graduates (Measures 04-07). The mean fraction of FY1975-79 doctoral recipients who as graduate students had received some national fellowship or training grant support (measure 04) ranges from .13 for graduates of civil engineering programs to .25 for graduates in chemical engineering. With respect to the median number of years from first enrollment in a graduate program to receipt of the doctorate (measure 05), chemical engineering graduates typically earned their degrees almost a full year sooner than graduates in any other discipline. In terms of employment status at graduation (measure 06), an average of 78 percent of the Ph.D. recipients from chemical engineering programs reported that they had made firm job commitments by the time they had completed requirements for their degree, contrasted with 69-71 percent of the program graduates in the other engineering disciplines. A mean of only 15-19 percent of the graduates in the four engineering disciplines reported that they had made firm commitments to take positions in Ph.D.-granting institutions (measure 07). This low percentage (compared with the humanities and many of the science disciplines) reflects the availability of employment opportunities for engineers outside the academic sector.

TABLE 7.1 Mean Values for Each Program Measure, by Discipline

                          Chemical   Civil      Electrical  Mechanical
                          Engin.     Engin.     Engin.      Engin.

Program Size
  01                        12         20          23          20
  02                        18         22          32          21
  03                        24         35          49          29

Program Graduates
  04                        .25        .13         .19         .21
  05                       5.9        6.9         6.7         7.0
  06                        .78        .69         .70         .71
  07                        .15        .19         .17         .17

Survey Results
  08                       2.7        2.7         2.6         2.7
  09                       1.6        1.6         1.6         1.7
  10                       1.1        1.0         1.1         1.0
  11                        .9         --          --          --

University Library
  12                        .2         .2          .2          .2

Research Support
  13                        .37        .20         .27         .22
  14 ($ thousands)         7819       7998        7679        7893

Publication Records
  15                        10         --          22          --

Total Programs              79         74          91          82

Survey Results (Measures 08-11). Differences in the mean ratings derived from the reputational survey are small. In all four disciplines the mean rating of the scholarly quality of program faculty (measure 08) is slightly below 3.0 ("good"), and programs were judged to be, on the average, a bit below "moderately" effective (2.0) in educating research scholars/scientists (measure 09). In the opinions of the survey respondents, there has been "little or no change" (approximately 1.0 on measure 10) in the last five years in the overall average quality of programs. The mean rating of an evaluator's familiarity with the work of program faculty (measure 11) is below 1.0 ("some familiarity") in every discipline--about which more will be said later in this chapter.

University Library (Measure 12). Measure 12, based on a composite index of the size of the library at the university in which a program resides,3 is calculated on a scale from -2.0 to 3.0, with a mean of .2 in each of the four engineering disciplines. In considering this measure it must be remembered that the index reflects the overall size of the university library and that data are unavailable for some of the smaller universities.

3The index, derived by the Association of Research Libraries, reflects a number of different measures, including number of volumes, fiscal expenditures, and other factors relevant to the size of a university library. See the description of this measure presented in Appendix D.

Research Support (Measures 13-14). Measure 13, the proportion of program faculty who had received NSF, NIH, or ADAMHA4 research grant awards during the FY1978-80 period, has mean values ranging from as high as .37 in chemical engineering down to .20 in civil engineering. It should be emphasized that this measure does not take into account research support that faculty members have received from sources other than these three federal agencies. As mentioned in Chapter II, a significant fraction of the engineering faculty receive support from DOD, NASA, DOE, and other federal agencies. In terms of total university expenditures for R&D in engineering (measure 14), the mean value reported in each discipline is slightly less than $8,000,000. It should be emphasized that these figures represent university expenditures in engineering in toto and that data are not available on expenditures in individual engineering disciplines. Thus, the small differences reported here reflect variations in the sets of universities covered in the assessment in the four disciplines.

4Very few faculty members in engineering programs received any research support from the Alcohol, Drug Abuse, and Mental Health Administration.

Publication Records (Measures 15 and 16). Some diversity is found in the mean number of articles associated with a research-doctorate program (measure 15). An average of 22 articles published in the 1978-79 period is reported for programs in electrical engineering; in each of the other three disciplines the mean number of articles ranges from 10 to 12. This difference reflects both the program size in a particular discipline (i.e., the total number of faculty and other staff members involved in research) and the frequency with which engineers in that discipline publish; it may also depend on the length of a typical paper in a discipline. Mean scores are not reported on measure 16, the estimated "overall influence" of the articles attributed to a program. Since this measure is calculated from an average of journal influence weights,5 normalized for the journals covered in a particular discipline, mean differences among disciplines are uninterpretable.

5See Appendix F for a description of the derivation of this measure.
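To make the construction of the two publication measures concrete, the following minimal sketch (in Python) computes measures 15 and 16 for a single hypothetical program. The journal names, influence weights, and article counts are invented for illustration, and the simple mean-weight normalization here merely stands in for the discipline-level normalization described in Appendix F.

    import statistics

    # Hypothetical journal influence weights for one discipline.
    journal_influence = {"Journal A": 1.8, "Journal B": 1.0, "Journal C": 0.4}

    # Hypothetical 1978-79 articles attributed to one program, by journal.
    program_articles = {"Journal A": 6, "Journal B": 10, "Journal C": 4}

    # Measure 15: unadjusted count of attributed articles.
    measure_15 = sum(program_articles.values())

    # Measure 16: articles weighted by journal influence, normalized here
    # by the mean influence weight so the two measures share a scale.
    mean_weight = statistics.mean(journal_influence.values())
    measure_16 = sum(journal_influence[j] * n
                     for j, n in program_articles.items()) / mean_weight

    print(measure_15, round(measure_16, 1))  # 20 articles; weighted count 21.0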

CORRELATIONS AMONG MEASURES

Relations among the program measures are of intrinsic interest and are relevant to the issue of the validity of the measures as indices of the quality of a research-doctorate program. Measures that are logically related to program quality are expected to be related to each other. To the extent that they are, a stronger case might be made for the validity of each as a quality measure.

A reasonable index of the relationship between any two measures is the Pearson product-moment correlation coefficient. A table of correlation coefficients of all possible pairs of measures is presented in each of the four preceding chapters. This chapter presents selected correlations to determine the extent to which coefficients are comparable in the four disciplines. Special attention is given to the correlations involving the number of FY1975-79 program graduates (measure 02), the survey rating of the scholarly quality of program faculty (measure 08), university R&D expenditures in a particular discipline (measure 14), and the influence-weighted number of publications (measure 16). These four measures have been selected because of their relatively high correlations with several other measures. Readers interested in correlations other than those presented in Tables 7.2, 7.3, 7.4, and 7.5 may refer to the third table in each of the preceding four chapters.

Correlations with Measure 02. Table 7.2 presents the correlations of measure 02 with each of the other measures used in the assessment. As might be expected, correlations of this measure with the other two measures of program size--number of faculty (01) and doctoral student enrollment (03)--are quite high in all four disciplines. Of greater interest are the strong positive correlations between measure 02 and measures derived from either reputational survey ratings or publication records. The coefficients describing the relationship of measure 02 with measures 15 and 16 are greater than .60 in all disciplines except mechanical engineering. This result is not surprising, of course, since both of the publication measures reflect total productivity and have not been adjusted for program size. The correlations of measure 02 with measures 08, 09, and 11 are equally as strong. It is quite apparent that the programs that received high survey ratings and with which evaluators were more likely to be familiar were also ones that had larger numbers of graduates.
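The coefficients reported in Tables 7.2 through 7.5 are ordinary Pearson product-moment correlations computed over the programs in a discipline. A minimal sketch of the computation, using hypothetical values of measures 02 and 08 for five programs (none of the actual study data):

    import numpy as np

    def pearson_r(x, y):
        # Pearson product-moment correlation between two program measures;
        # programs missing a value on either measure are dropped.
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        keep = ~(np.isnan(x) | np.isnan(y))
        x, y = x[keep], y[keep]
        xd, yd = x - x.mean(), y - y.mean()
        return float((xd * yd).sum() /
                     np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

    measure_02 = [10, 18, 25, 40, 55]        # number of FY1975-79 graduates
    measure_08 = [2.1, 2.5, 2.8, 3.6, 4.2]   # mean rating of faculty quality

    print(round(pearson_r(measure_02, measure_08), 2))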

TABLE 7.2 Correlations of the Number of Program Graduates (Measure 02) with Other Measures, by Discipline

                          Chemical   Civil      Electrical  Mechanical
                          Engin.     Engin.     Engin.      Engin.

Program Size
  01                        .53        .83         .78         .66
  03                        .82        .71         .82         .78

Program Graduates
  04                        .00        .01        -.09        -.11
  05                        .32        .16         .18         .17
  06                        .22       -.11         .10        -.01
  07                        .14        .05         .06         .12

Survey Results
  08                        .83        .72         .76         .67
  09                        .83        .73         .75         .68
  10                        .07        .18         .11         .17
  11                        .79        .75         .81         .70

University Library
  12                        .40        .39         .47         .32

Research Support
  13                        .45        .39         .39         .33
  14                        .42        .51         .58         .58

Publication Records
  15                        .66        .73         .84         .69
  16                        .69        .65         .85         .52

Although the committee gave serious consideration to presenting an alternative set of survey measures that were adjusted for program size, a satisfactory algorithm for making such an adjustment was not found. In attempting such an adjustment on the basis of the regression of survey ratings on measures of program size, it was found that some exceptionally large programs appeared to be unfairly penalized and that some very small programs received unjustifiably high adjusted scores. (A schematic illustration of this kind of adjustment appears just before Table 7.3.)

Measure 02 also has positive correlations with measure 12, an index of university library size, and with measures 13 and 14, which pertain to the level of support for research in a program. Of particular note are the moderately large coefficients for measure 14, university R&D expenditures in engineering--in all disciplines but chemical engineering they are above .50. The correlations of measure 02 with measures 04, 05, 06, and 07 are below .20 in all disciplines except chemical engineering.

Correlations with Measure 08. Table 7.3 shows the correlation coefficients for measure 08, the mean rating of the scholarly quality of program faculty, with each of the other variables. The correlations of measure 08 with measures of program size (01, 02, and 03) are .50 or greater for all four disciplines. Not surprisingly, the larger the program, the more likely its faculty is to be rated high in quality.

Correlations of measure 08 with measure 04, the fraction of students with national fellowship awards, are .20 or smaller in each of the engineering disciplines. For programs in the biological and social sciences, the corresponding coefficients (to be presented in subsequent volumes of the committee's report) are found to be greater, typically in the range .40 to .70. Perhaps in engineering, departments with highly regarded faculty are more likely to provide support to doctoral students as teaching assistants or research assistants on faculty research grants--thereby reducing dependency on national fellowships. (The low correlation of rated faculty quality with the fraction of students with national fellowships is not, of course, inconsistent with the thesis that programs with large numbers of students are programs with large numbers of fellowship holders.)

Correlations of rated faculty quality with measure 05, shortness of time from matriculation in graduate school to award of the doctorate, are notably higher for programs in chemical and mechanical engineering than for programs in the other two disciplines. Although the coefficients are by no means as large as many of those discussed above, it is evident that programs producing graduates in shorter periods of time tended to receive higher survey ratings. Correlations of ratings of faculty quality with measure 06, the fraction of program graduates with definite employment plans, and with measure 07, the fraction with plans for employment in Ph.D.-granting institutions, are positive but quite low for each of the engineering disciplines.
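The size adjustment that the committee attempted and rejected can be illustrated schematically: regress the survey rating on a size measure and treat the residual as a "size-adjusted" score. All numbers below are hypothetical; the sketch is meant only to show how residual scoring can penalize a very large, highly rated program.

    import numpy as np

    size = np.array([10., 15., 20., 30., 150.])   # measure 02 (graduates)
    rating = np.array([2.4, 2.6, 2.8, 3.2, 4.0])  # measure 08 (mean rating)

    # Least-squares fit of rating on size (intercept and slope).
    X = np.column_stack([np.ones_like(size), size])
    coef, *_ = np.linalg.lstsq(X, rating, rcond=None)

    # Residuals serve as the size-adjusted ratings.
    adjusted = rating - X @ coef
    print(np.round(adjusted, 2))  # -> [-0.25 -0.1   0.05  0.35 -0.05]

    # The largest program has the highest raw rating (4.0) yet falls to a
    # negative adjusted score, below a far smaller program rated 3.2.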

TABLE 7.3 Correlations of the Survey Ratings of Scholarly Quality of Program Faculty (Measure 08) with Other Measures, by Discipline

                          Chemical   Civil      Electrical  Mechanical
                          Engin.     Engin.     Engin.      Engin.

Program Size
  01                        .51        .73         .73         --
  02                        .83        .72         .76         .67
  03                        .77        .57         .68         --

Program Graduates
  04                        .20        .18        -.03         .08
  05                        .43        .25         .21         .37
  06                        .27        .05         .13         .03
  07                        .25        .21         .12         .19

Survey Results
  09                        .99        .98         .98         .97
  10                        .31        .35         .23         .14
  11                        .96        .94         .94         .95

University Library
  12                        .41        .54         .56         .52

Research Support
  13                        .62        .62         .56         .52
  14                        .42        .57         .59         .52

Publication Records
  15                        .65        .62         .78         .70
  16                        .65        .57         .80         .57

The correlations of measure 08 with measure 09, rated effectiveness of doctoral education, are uniformly very high, at or above .97 in every discipline. This finding is consistent with results from the Cartter and Roose-Andersen studies.6 The coefficients describing the relationship between measure 08 and measure 11, familiarity with the work of program faculty, are also very high, ranging from .94 to .96. In general, evaluators were more likely to have high regard for the quality of faculty in those programs with which they were most familiar. That the correlation coefficients are as large as observed may simply reflect the fact that "known" programs tend to be those that have earned strong reputations.

6Roose and Andersen, p. 19.

Correlations of ratings of faculty quality with measure 10, ratings of perceived improvement in program quality, range from .14 in mechanical engineering to .35 in civil engineering. One might have expected that a program judged to have improved in quality would have been somewhat more likely to receive high ratings on measure 08 than would a program judged to have declined--thereby imposing a small positive correlation between these two variables.

Moderate to high correlations are observed in all four disciplines between measure 08 and university library size (measure 12), support for research (measures 13 and 14), and publication records (measures 15 and 16). With few exceptions these coefficients are .50 or greater. Of particular note are the strong correlations with the two publication measures for electrical engineering programs. It is interesting to note that the correlations with measure 16 are generally no higher than those with measure 15--i.e., the "weighted influence" of the journals in which articles are published yields an index that tends to relate no more closely to faculty reputation than does an unadjusted count of the number of articles published. This finding is inconsistent with the findings of Anderson et al.7 and with the committee's findings in the mathematical and physical sciences.

7Anderson et al., p. 95.

Correlations with Measure 14. Correlations of measure 14, reported dollars of support for research and development, with other measures are shown in Table 7.4. The reader is reminded that this measure reflects total university expenditures in engineering and not expenditures in the four separate engineering disciplines. The pattern of relations is quite similar for programs in all four engineering disciplines: moderately high correlations with measures of program size and reputational survey results (except measure 10), and slightly lower correlations with publication measures. For programs in electrical engineering some of these relations are stronger than in the other engineering disciplines. Of particular note is the strong correlation in electrical engineering between measure 14 and each of the publication measures (15 and 16). In interpreting these relationships one must keep in mind the fact that the research expenditure data have not been adjusted for the number of faculty and other staff members involved in research in a program.

TABLE 7.4 Correlations of the University Research Expenditures in a Discipline (Measure 14) with Other Measures, by Discipline

                          Chemical   Civil      Electrical  Mechanical
                          Engin.     Engin.     Engin.      Engin.

Program Size
  01                        .48        .56         .63         .60
  02                        .42        .51         .58         .58
  03                        .50        .38         .62         .46

Program Graduates
  04                       -.02        .16         .20         .04
  05                        .00       -.02         .12         .19
  06                        .07       -.02         .08         .03
  07                        .02        .21         .30         .13

Survey Results
  08                        .42        .57         .59         .52
  09                        .41        .55         .57         .52
  10                        .18        .04         .09         .07
  11                        .41        .58         .62         .61

University Library
  12                        .21        .26         .21         .20

Research Support
  13                        .09        .29         .21         .16

Publication Records
  15                        .39        .44         .61         .49
  16                        .35        .36         .65         .42

Correlations with Measure 16. Measure 16 is the number of published articles attributed to a program, adjusted for the "average influence" of the journals in which the articles appear. The correlations of this measure with all others appear in Table 7.5. Of particular interest are the moderately high correlations with all three measures of program size and with the reputational survey results (excluding measure 10). Most of those coefficients exceed .60 and are generally somewhat larger for programs in electrical engineering. In this discipline moderately high correlations are also observed between measure 16 and measures 12, 13, and 14. It should be pointed out that the exceptionally large coefficients reported for measure 15 result from the fact that the two publication measures are logically as well as empirically interdependent.

Despite the appreciable correlations between reputational ratings of quality and program size measures, the functional relations between the two probably are complex. If there is a minimum size for a high-quality program, this size is likely to vary from discipline to discipline. Increases in size beyond the minimum may represent more high-quality faculty, or a greater proportion of inactive faculty, or faculty with heavy teaching responsibilities. In attempting to select among these alternative interpretations, a single correlation coefficient provides insufficient guidance. Nonetheless, certain similarities across disciplines may be seen in correlations among the measures. High correlations consistently appear among measures 08, 09, and 11 from the reputational survey, and these measures also are prominently related to program size (measures 01, 02, and 03), to publication productivity (measures 15 and 16), to R&D expenditures (measure 14), and to library size (measure 12). These results show that for all disciplines the reputational rating measures (08, 09, and 11) tend to be associated with program size and with other correlates of size: publication volume, R&D expenditures, and library size. Also, the reputational measures 08, 09, and 11 tend to be positively related to shortness of time-to-Ph.D. (measure 05) and to the fraction of faculty holding research grants (measure 13).

ANALYSIS OF THE SURVEY RESPONSE

Measures 08-11, derived from the reputational survey, may be of particular interest to many readers since measures of this type have been the most widely used (and frequently criticized) indices of quality of graduate education. In designing the survey instrument for this assessment the committee made several changes in the form that had been used in the Roose-Andersen study. The modifications served two purposes: to provide the evaluators with a clearer understanding of the programs that they were asked to judge and to provide the committee with supplemental information for the analysis of the survey response. One change was to restrict to 50 the number of programs that any individual evaluator was asked to judge.

TABLE 7.14 Mean Ratings of Scholarly Quality of Program Faculty, by Evaluator's Institution of Highest Degree

                          MEAN RATINGS              NUMBER OF PROGRAMS
                          Alumni     Nonalumni      WITH ALUMNI RATINGS

Chemical Engin.            3.77        3.35                 32
Civil Engin.               3.96        3.23                 27
Electrical Engin.          3.76        3.33                 29
Mechanical Engin.          3.99        3.38                 27

NOTE: The pairs of means reported in each discipline are computed for a subset of programs with a rating from at least one alumnus and are substantially greater than the mean ratings for the full set of programs in each discipline.

. . . (measure 08) of alumni and nonalumni ranging from .42 to .73 in the disciplines. Given the appreciable differences between the ratings furnished by program alumni and other evaluators, one might ask how much effect this has had on the overall results of the survey. The answer is "very little." As shown in the table, only about one program in every three received ratings from any alumnus.14 Moreover, the fraction of alumni providing ratings of a program is always quite small and should have had minimal impact on the overall mean rating of any program. To be certain that this was the case, mean ratings of the scholarly quality of faculty were recalculated for every engineering program--with the evaluations provided by alumni excluded. The results were compared with the mean scores based on the full set of evaluations. Out of the 324 engineering programs evaluated in the survey, only 1 program (in civil engineering) had an observed difference as large as 0.2, and for 306 programs (94 percent) the mean ratings remain unchanged (to the nearest tenth of a unit). On the basis of these findings the committee saw no reason to exclude alumni ratings in the calculation of program means.

14Because of the small number of alumni ratings in every discipline, the mean ratings for this group are unstable, and therefore the correlations between alumni and nonalumni mean ratings are not reported.
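The recalculation described above can be sketched for a single hypothetical program with seven raters, one of whom is an alumnus; the ratings and the alumni flag are invented for illustration.

    # Each pair: (rating on measure 08, evaluator is an alumnus of the program?).
    ratings = [(4.0, True), (3.2, False), (3.5, False), (3.8, False),
               (3.4, False), (3.6, False), (3.3, False)]

    full_mean = sum(r for r, _ in ratings) / len(ratings)
    nonalumni = [r for r, alum in ratings if not alum]
    nonalumni_mean = sum(nonalumni) / len(nonalumni)

    # Rounded to the nearest tenth of a unit -- the precision used in the
    # report -- the single (higher) alumni rating leaves the mean unchanged.
    print(round(full_mean, 1), round(nonalumni_mean, 1))  # 3.5 3.5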

Another concern that some critics have is that a survey evaluation may be affected by the interaction of the research interests of the evaluator and the area(s) of focus of the research-doctorate program to be rated. It is said, for example, that some narrowly focused programs may be strong in a particular area of research but that this strength may not be recognized by a large fraction of evaluators who happen to be unknowledgeable in this area. This is a concern more difficult to address than those discussed in the preceding pages since little or no information is available about the areas of focus of the programs being evaluated (although in certain disciplines the title of a department or academic unit may provide a clue). To obtain a better understanding of the extent to which an evaluator's field of specialty may have influenced the ratings he or she has provided, an analysis was made of ratings provided by evaluators in physics and statistics/biostatistics. In each discipline the survey participants were divided into two groups according to specialty field (as reported on the survey questionnaire). The results of the analysis, which are presented in the mathematical and physical sciences volume of the committee's report, indicate that there is a high degree of correlation in the mean ratings provided by those in differing specialty fields within these two disciplines. Although one cannot conclude from these findings that an evaluator's specialty field has no bearing on how he or she rates a program, these findings do suggest that the relative standings of programs in physics and statistics/biostatistics would not be greatly altered if the ratings by either group were discarded.

INTERPRETATION OF REPUTATIONAL SURVEY RATINGS

It is not hard to foresee that results from this survey will receive considerable attention through enthusiastic and uncritical reporting in some quarters and sharp castigation in others. The study committee understands the grounds for both sides of this polarized response but finds that both tend to be excessive. It is important to make clear how we view these ratings as fitting into the larger study of which they are a part.

The reputational results are likely to receive a disproportionate degree of attention for several reasons, including the fact that they reflect the opinions of a large group of faculty colleagues and that they form a bridge with earlier studies of graduate programs. But the results will also receive emphasis because they alone, among all of the measures, seem to address quality in an overall or global fashion. While most recognize that "objective" program characteristics (e.g., publication productivity, research funding, or library size) have some bearing on program quality, probably no one would contend that any single one of these measures encompasses all that need be known about the quality of research-doctorate programs. Each is obviously no more than an indicator of some aspect of program quality. In contrast, the reputational ratings are global from the start because the respondents are asked to take into account many objective characteristics and to arrive at a general assessment of the quality of the faculty and the effectiveness of the program. This generality has self-evident appeal.

On the other hand, it is wise to keep in mind that these reputational ratings are measures of perceived program quality rather than of "quality" in some ideal or absolute sense. What this means is that, just as for all of the more objective measures, the reputational ratings represent only a partial view of what most of us would consider quality to be; hence, they must be kept in careful perspective.

Some critics may argue that such ratings are positively misleading because of a variety of methodological artifacts or because they are supplied by "judges" who often know very little about the programs they are rating. The committee has conducted the survey in a way that permits the empirical examination of a number of the alleged artifacts and, although our analysis is by no means exhaustive, the general conclusion is that their effects are slight.

At the same time, criticisms of reputational ratings from prior studies represent a perspective that may be misguided. This perspective assumes that one asks for ratings in order to find out what "quality" really is and that, to the degree that the ratings miss the mark of "quintessential quality," they are unreal, although the quality that they attempt to measure is real. What this perspective misses is that impressions of quality, if widely shared, have an imposing reality of their own and therefore are worth knowing about in their own right. After all, these perceptions govern a large-scale system of traffic around the nation's graduate institutions--for example, when undergraduate students seek the advice of their professors concerning graduate programs that they might attend. It is possible that some professors put in this position disqualify themselves on grounds that they are not well informed about the relative merits of the programs being considered. Most faculty members, however, surely attempt to be helpful on the basis of impressions gleaned from their professional experience, and these assessments are likely to have a major impact on student decision-making. In short, the impressions are real and have very real effects not only on students shopping for graduate schools but also on other flows, such as job-seeking young faculty and the distribution of research resources. At the very least, the survey results provide a snapshot of these impressions from discipline to discipline. Although these impressions may be far from ideally informed, they certainly show a strong degree of consensus within each discipline, and it seems safe to assume that they are more than passingly related to what a majority of keen observers might agree program quality is all about.

COMPARISON WITH RESULTS OF THE ROOSE-ANDERSEN STUDY

An analysis of the response to the committee's survey would not be complete without comparing the results with those obtained in the survey by Roose and Andersen 12 years earlier. Although there are obvious similarities in the two surveys, there are also some important differences that should be kept in mind in examining individual program ratings of the scholarly quality of faculty. Already mentioned in this chapter is the inclusion, on the form sent to 90 percent of the sample members in the committee's survey, of the names and academic ranks of faculty and the numbers of doctoral graduates in the previous five years.

Other significant changes in the committee's form are the identification of the university department or academic unit in which each program may be found, the restriction of requesting evaluators to make judgments about no more than 50 research-doctorate programs in their discipline, and the presentation of these programs in random sequence on the survey form. The sampling frames used in the two surveys also differ. The sample selected in the earlier study included only individuals who had been nominated by the participating universities, while more than one-fourth of the sample in the committee's survey were chosen at random from full faculty lists. (Except for this difference the samples were quite similar--i.e., in terms of the number of evaluators in each discipline and the fraction of senior scholars.15)

15For a description of the sample group used in the earlier study, see Roose and Andersen, pp. 28-31.

Several dissimilarities in the coverage of the Roose-Andersen and this committee's reputational assessments should be mentioned. The former included a total of 130 institutions that had awarded at least 100 doctoral degrees in two or more disciplines during the FY1958-67 period. The institutional coverage in the committee's assessment was based on the number of doctorates awarded in each discipline (as described in Chapter I) and covered a total population of 228 universities. Most of the universities represented in the later study but not the earlier one are institutions that offered research-doctorate programs in a limited set of disciplines. Finally, in the Roose-Andersen study, ratings were compiled on only one program from each institution represented in a discipline, whereas in the committee's survey separate ratings were requested if a university offered more than one research-doctorate program in a given discipline. The consequences of these differences in survey coverage are quite apparent: in the committee's survey, evaluations were requested for a total of 326 research-doctorate programs in chemical, civil, electrical, and mechanical engineering, compared with 287 programs in the Roose-Andersen study.

Figures 7.1-7.4 plot the mean ratings of the scholarly quality of faculty in programs included in both surveys; sets of ratings are graphed for 61 programs in chemical engineering, 57 in civil engineering, 66 in electrical engineering, and 61 in mechanical engineering. Since in the Roose-Andersen study programs were identified by institution and discipline (but not by department), the matching of results from that survey with those from the committee's survey is not precise. For universities represented in the latter survey by more than one program in a particular discipline, the mean rating for the program with the largest number of graduates (measure 02) is the only one plotted here. Although the results of both surveys are reported on identical scales, some caution must be taken in interpreting differences in the mean ratings a program received in the two evaluations. It is impossible to estimate what effect all of the differences described above may have had on the results of the two surveys.
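Because the earlier survey rated at most one program per institution in a discipline, pairing its ratings with the committee's requires the selection rule just described: when a university has several rated programs in a discipline, the program with the most FY1975-79 graduates (measure 02) is the one matched. A minimal sketch with hypothetical records:

    # Committee survey records: (institution, program, measure 02, measure 08).
    committee = [
        ("Univ. X", "Electrical Engineering", 45, 3.9),
        ("Univ. X", "Electrical Engineering (Systems)", 12, 3.1),
        ("Univ. Y", "Electrical Engineering", 30, 3.4),
    ]
    # Roose-Andersen ratings, identified by institution only.
    roose_andersen = {"Univ. X": 4.1, "Univ. Y": 3.2}

    # Keep the program with the largest measure 02 at each institution.
    largest = {}
    for inst, prog, grads, rating in committee:
        if inst not in largest or grads > largest[inst][0]:
            largest[inst] = (grads, rating)

    # Paired ratings used for the scatter plots and correlations.
    pairs = [(largest[inst][1], ra) for inst, ra in roose_andersen.items()]
    print(pairs)  # [(3.9, 4.1), (3.4, 3.2)]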

[Scatter plot: measure 08 versus Roose-Andersen rating (1970), both axes from 0.0 to 5.0; r = .89.]

FIGURE 7.1 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--61 programs in chemical engineering.

[Scatter plot: measure 08 versus Roose-Andersen rating (1970), both axes from 0.0 to 5.0; r = .91.]

FIGURE 7.2 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--57 programs in civil engineering.

[Scatter plot: measure 08 versus Roose-Andersen rating (1970), both axes from 0.0 to 5.0; r = .92.]

FIGURE 7.3 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--66 programs in electrical engineering.

[Scatter plot: measure 08 versus Roose-Andersen rating (1970), both axes from 0.0 to 5.0; r = .93.]

FIGURE 7.4 Mean rating of scholarly quality of faculty (measure 08) versus mean rating of faculty in the Roose-Andersen study--61 programs in mechanical engineering.

Furthermore, one must remember that the reported scores are based on the opinions of different groups of faculty members and were provided at different time periods. In 1969, when the Roose-Andersen survey was conducted, graduate departments in most universities were still expanding and not facing the enrollment and budget reductions that many departments have had to deal with in recent years. Consequently, a comparison of the overall findings from the two surveys tells us nothing about how much graduate education has improved (or declined) in the past decade. Nor should the reader place much stock in any small differences in the mean ratings that a particular program may have received in the two surveys. On the other hand, it is of particular interest to note the high correlations between the results of the two evaluations. For programs in the four engineering disciplines the correlation coefficients range between .89 (chemical) and .93 (mechanical). The extraordinarily high correlations found in all four disciplines may suggest to some readers that the reputational standings of programs in these disciplines have changed very little in the last decade. However, one must keep in mind that the correlations are based on the reputational ratings of only three-fourths of the programs evaluated in this assessment in these disciplines and do not take into account the emergence of many new programs that did not exist or were too small to be rated in the Roose-Andersen study.

FUTURE STUDIES

One of the most important objectives in undertaking this assessment was to test new measures not used extensively in past evaluations of graduate programs. Although the committee believes that it has been successful in this effort, much more needs to be done. First and foremost, studies of this kind should be extended to cover other types of programs and other disciplines not included in this effort. As a consequence of budgetary limitations, the committee had to restrict its study to 32 disciplines, selected on the basis of the number of doctorates awarded in each. A multidimensional assessment of research-doctorate programs in many important disciplines not included among these 32 should be of great value to the academic community. Consideration should also be given to embarking on evaluations of programs offering other types of graduate and professional degrees. As a matter of fact, plans for including master's-degree programs in this assessment were originally contemplated, but because of a lack of available information about the resources and graduates of programs at the master's level, it was decided to focus on programs leading to the research doctorate.

Perhaps the most debated issue the committee has had to address concerned which measures should be reported in this assessment. In fact, there is still disagreement among some of its members about the relative merits of certain measures, and the committee fully recognizes a need for more reliable and valid indices of the quality of graduate programs. First on a list of needs is more precise and meaningful information about the product of research-doctorate programs--the graduates.

For example, what fraction of the program graduates have gone on to be productive investigators--either in the academic setting or in government and industrial laboratories? What fraction have gone on to become outstanding investigators--as measured by receipt of major prizes, membership in academies, and other such distinctions? How do program graduates compare with regard to their publication records? Also desired might be measures of the quality of the students applying for admittance to a graduate program (e.g., Graduate Record Examination scores, undergraduate grade point averages). If reliable data of this sort were made available, they might provide a useful index of the standing of programs from the perspective of graduate students.

A number of alternative measures relevant to the quality of program faculty were considered by the committee but not included in the assessment because of the associated difficulties and costs of compiling the necessary data. For example, what fraction of the program faculty were invited to present papers at national meetings? What fraction had been elected to prestigious organizations/groups in their field? What fraction had received senior fellowships and other awards of distinction? In addition, it would be highly desirable to supplement the data presented on NSF, NIH, and ADAMHA research grant awards (measure 13) with data on awards from other federal agencies (e.g., Department of Defense, Department of Energy, National Aeronautics and Space Administration) as well as from major private foundations.

As described in the preceding pages, the committee was able to make several changes in the survey design and procedures, but further improvements could be made. Of highest priority in this regard is the expansion of the survey sample to include evaluators from outside the academic setting (in particular, those in government and industrial laboratories who regularly employ graduates of the programs to be evaluated). To add evaluators from these sectors would require a major effort in identifying the survey population from which a sample could be selected. Although such an effort is likely to involve considerable costs in both time and financial resources, the committee believes that the addition of evaluators from the government and industrial settings would be of value in providing a different perspective to the reputational assessment and that comparisons between the ratings supplied by academic and nonacademic evaluators would be of particular interest.
