
II. Methodology

Quality . . . you know what it is, yet you don't know what it is. But that's self-contradictory. But some things are better than others, that is, they have more quality. But when you try to say what the quality is, apart from the things that have it, it all goes poof! There's nothing to talk about. But if you can't say what Quality is, how do you know what it is, or how do you know that it even exists? If no one knows what it is, then for all practical purposes it doesn't exist at all. But for all practical purposes it really does exist. What else are the grades based on? Why else would people pay fortunes for some things and throw others in the trash pile? Obviously some things are better than others . . . but what's the "betterness"? . . . So round and round you go, spinning mental wheels and nowhere finding anyplace to get traction. What the hell is Quality? What is it?

        Robert M. Pirsig
        Zen and the Art of Motorcycle Maintenance

Both the planning committee and our own study committee have given careful consideration to the types of measures to be employed in the assessment of research-doctorate programs.1 The committees recognized that any of the measures that might be used is open to criticism and that no single measure could be expected to provide an entirely satisfactory index of the quality of graduate education. With respect to the use of multiple criteria in educational assessment, one critic has commented:

1 A description of the measures considered may be found in the third chapter of the planning committee's report, along with a discussion of the relative merits of each measure.

At best each is a partial measure encompassing a fraction of the large concept. On occasion its link to the real [world] is problematic and tenuous. Moreover, each measure [may contain] a load of irrelevant superfluities, "extra baggage" unrelated to the outcomes under study. By the use of a number of such measures, each contributing a different facet of information, we can limit the effect of irrelevancies and develop a more rounded and truer picture of program outcomes.2

Although the use of multiple measures alleviates the criticisms directed at a single dimension or measure, it certainly will not satisfy those who believe that the quality of graduate programs cannot be represented by quantitative estimates no matter how many dimensions they may be intended to represent. Furthermore, the usefulness of the assessment is dependent on the validity and reliability of the criteria on which programs are evaluated. The decision concerning which measures to adopt in the study was made primarily on the basis of two factors: (1) the extent to which a measure was judged to be related to the quality of research-doctorate programs and (2) the feasibility of compiling reliable data for making national comparisons of programs in particular disciplines. Only measures that were applicable to a majority of the disciplines to be covered were considered. In reaching a final decision the study committee found the ETS study,3 in which 27 separate variables were examined, especially helpful, even though it was recognized that many of the measures feasible in institutional self-studies would not be available in a national study. The committee was aided by the many suggestions received from university administrators and others within the academic community.

Although the initial design called for an assessment based on approximately six measures, the committee concluded that it would be highly desirable to expand this effort. A total of 16 measures (listed in Table 2.1) have been utilized in the assessment of research-doctorate programs in chemical engineering, civil engineering, electrical engineering, and mechanical engineering. For nine of the measures data are available describing most, if not all, of the engineering programs included in the assessment. For seven measures the coverage is less complete but encompasses at least a majority of the programs in every discipline. The actual number of programs evaluated on every measure is reported in the second table in each of the next four chapters.

2 C. H. Weiss, Evaluation Research: Methods of Assessing Program Effectiveness, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1972, p. 56.

3 See M. J. Clark et al. (1976) for a description of these variables.

TABLE 2.1 Measures Compiled on Individual Research-Doctorate Programs in Engineering

Program Size1
  01  Reported number of faculty members in the program, December 1980.
  02  Reported number of program graduates in last five years (July 1975 through June 1980).
  03  Reported total number of full-time and part-time graduate students enrolled in the program who intend to earn doctorates, December 1980.

Characteristics of Graduates2
  04  Fraction of FY1975-79 program graduates who had received some national fellowship or training grant support during their graduate education.
  05  Median number of years from first enrollment in graduate school to receipt of the doctorate--FY1975-79 program graduates.3
  06  Fraction of FY1975-79 program graduates who at the time they completed requirements for the doctorate reported that they had made definite commitments for postgraduation employment.
  07  Fraction of FY1975-79 program graduates who at the time they completed requirements for the doctorate reported that they had made definite commitments for postgraduation employment in Ph.D.-granting universities.

Reputational Survey Results4
  08  Mean rating of the scholarly quality of program faculty.
  09  Mean rating of the effectiveness of the program in educating research scholars/scientists.
  10  Mean rating of the improvement in program quality in the last five years.
  11  Mean rating of the evaluators' familiarity with the work of the program's faculty.

University Library Size5
  12  Composite index describing the library size in the university in which the program is located, 1979-80.

Research Support
  13  Fraction of program faculty members holding research grants from the National Science Foundation, National Institutes of Health, or the Alcohol, Drug Abuse, and Mental Health Administration at any time during the FY1978-80 period.6
  14  Total expenditures (in thousands of dollars) reported by the university for research and development activities in a specified field, FY1979.7

Publication Records8
  15  Number of published articles attributed to the program, 1978-79.
  16  Estimated "overall influence" of published articles attributed to the program, 1978-79.

1 Based on information provided to the committee by the participating universities.
2 Based on data compiled in the NRC's Survey of Earned Doctorates.
3 In reporting standardized scores and correlations with other variables, a shorter time-to-Ph.D. is assigned a higher score.
4 Based on responses to the committee's survey conducted in April 1981.
5 Based on data compiled by the Association of Research Libraries.
6 Based on matching faculty names provided by institutional coordinators with the names of research grant awardees from the three federal agencies.
7 Based on data provided to the National Science Foundation by universities.
8 Based on data compiled by the Institute for Scientific Information and developed by Computer Horizons, Inc.

The 16 measures describe a variety of aspects important to the operation and function of research-doctorate programs--and thus are relevant to the quality and effectiveness of programs in educating engineers for careers in research. However, not all of the measures may be viewed as "global indices of quality." Some, such as those relating to program size, are best characterized as "program descriptors" that, although not dimensions of quality per se, are thought to have a significant influence on the effectiveness of programs. Other measures, such as those relating to university library size and support for research and training, describe some of the resources generally recognized as being important in maintaining a vibrant program in graduate education. Measures derived from surveys of faculty peers or from the publication records of faculty members, on the other hand, have traditionally been regarded as indices of the overall quality of graduate programs. Yet these too are not true measures of quality.

We often settle for an easy-to-gather statistic, perfectly legitimate for its own limited purposes, and then forget that we haven't measured what we want to talk about. Consider, for instance, the reputation approach of ranking graduate departments: We ask a sample of physics professors (say) which the best physics departments are and then tabulate and report the results. The "best" departments are those that our respondents say are the best. Clearly it's useful to know which are the highly regarded departments in a given field, but prestige (which is what we are measuring here) isn't exactly the same as quality.4

To be sure, each of the 16 measures reported in this assessment has its own set of limitations. In the sections that follow an explanation is provided of how each measure has been derived and its particular limitations as a descriptor of research-doctorate programs.

PROGRAM SIZE

Information was collected from the study coordinators at each university on the names and ranks of program faculty, doctoral student enrollment, and number of Ph.D. graduates in each of the past five years (FY1976-80). Each coordinator was instructed to include on the faculty list those individuals who, as of December 1, 1980, held academic appointments (typically at the rank of assistant, associate, and full professor) and who participated significantly in doctoral education.

4 John Shelton Reed, "How Not to Measure What a University Does," The Chronicle of Higher Education, Vol. 22, No. 12, May 11, 1981, p. 56.

Emeritus and adjunct members generally were not to be included. Measure 01 represents the number of faculty identified in a program. Measure 02 is the reported number of graduates who earned Ph.D. or equivalent research doctorates in a program during the period from July 1, 1975, through June 30, 1980. Measure 03 represents the total number of full-time and part-time students reported to be enrolled in a program in the fall of 1980 who intended to earn research doctorates. All three of these measures describe different aspects of program size. In previous studies program size has been shown to be highly correlated with the reputational ratings of a program, and this relationship is examined in detail in this report.

It should be noted that since the information was provided by the institutions participating in the study, the data may be influenced by the subjective decisions made by the individuals completing the forms. For example, some institutional coordinators may be far less restrictive than others in deciding who should be included on the list of program faculty. To minimize variation in interpretation, detailed instructions were provided to those filling out the forms.5 Measure 03 is of particular concern in this regard since the coordinators at some institutions may not have known how many of the students currently enrolled in graduate study intended to earn doctoral degrees.

CHARACTERISTICS OF GRADUATES

One of the most meaningful measures of the success of a research-doctorate program is the performance of its graduates. How many go on to lead productive careers in research and/or other activity for which the Ph.D. provides entry? Unfortunately, reliable information on the subsequent employment and career achievements of the graduates of individual programs is not available. In the absence of this directly relevant information, the committee has relied on four indirect measures derived from data compiled in the NRC's Survey of Earned Doctorates.6 Although each measure has serious limitations (described below), the committee believes it more desirable to include this information than not to include data about program graduates.

In identifying program graduates who had received their doctorates in the previous five years (FY1975-79),7 the faculty lists furnished by the study coordinators at universities were compared with the names of dissertation advisers (available from the NRC survey). The latter source contains records for virtually all individuals who have earned research doctorates from U.S. universities since 1920. The institution, year, and specialty field of Ph.D. recipients were also used in determining the identity of program graduates.

5 A copy of the survey form and instructions sent to study coordinators is included in Appendix A.

6 A copy of the questionnaire used in this survey is found in Appendix B.

7 Survey data for the FY1980 Ph.D. recipients had not yet been compiled at the time this assessment was undertaken.

It is estimated that this matching process provided information on the graduate training and employment plans of more than 90 percent of the FY1975-79 graduates from the engineering programs. In the calculation of each of the four measures derived from the NRC survey, program data are reported only if the survey information is available on at least 10 graduates. Consequently, in the discipline with the fewest graduates per program--civil engineering--only slightly more than half the programs are included in these measures, whereas almost 90 percent of the electrical engineering programs are included.

Measure 04 constitutes the fraction of FY1975-79 graduates of a program who had received at least some national fellowship support, including National Institutes of Health fellowships or traineeships, National Science Foundation fellowships, other federal fellowships, Woodrow Wilson fellowships, or fellowships/traineeships from other U.S. national organizations. One might expect the more selective programs to have a greater proportion of students with national fellowship support--especially "portable fellowships." Although the committee considered alternative measures of student ability (e.g., Graduate Record Examination scores, undergraduate grade point averages), reliable information of this sort was unavailable for a national assessment. It should be noted that the relevance of the fellowship measure varies considerably among disciplines. In the biomedical sciences a substantial fraction of the graduate students are supported by training grants and fellowships; in engineering the majority are supported by research assistantships and teaching assistantships.

Measure 05 is the median number of years elapsed from the time program graduates first enrolled in graduate school to the time they received their doctoral degrees. For purposes of analysis the committee has adopted the conventional wisdom that the most talented students are likely to earn their doctoral degrees in the shortest periods of time--hence, the shorter the median time-to-Ph.D., the higher the standardized score that is assigned. Although this measure has frequently been employed in social science research as a proxy for student ability, one must regard its use here with some skepticism. It is quite possible that the length of time it takes a student to complete requirements for a doctorate may be significantly affected by the explicit or implicit policies of a university or department. For example, in certain cases a short time-to-Ph.D. may be indicative of less stringent requirements for the degree. Furthermore, previous studies have demonstrated that women and members of minority groups, for reasons having nothing to do with their abilities, are more likely than male Caucasians to interrupt their graduate education or to be enrolled on a part-time basis.8 As a consequence, the median time-to-Ph.D. may be longer for programs with larger fractions of women and minority students.

8 For a detailed analysis of this subject, see Dorothy M. Gilford and Joan Snyder, Women and Minority Ph.D.'s in the 1970's: A Data Book, National Academy of Sciences, Washington, D.C., 1977.
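As a brief illustration of how measure 05 might be tabulated from individual graduate records, the Python sketch below computes the median time-to-Ph.D. for a single program and negates the raw value before standardization so that shorter times receive higher standardized scores. The record layout and the sample values are hypothetical; they are not drawn from the NRC survey files.

```python
from statistics import median

# Hypothetical records for one program's FY1975-79 graduates:
# (year first enrolled in graduate school, year the doctorate was received)
graduates = [(1968, 1975), (1970, 1976), (1972, 1977), (1971, 1978), (1973, 1979)]

# Measure 05: median number of years from first graduate enrollment to the doctorate.
years_to_phd = [phd_year - entry_year for entry_year, phd_year in graduates]
measure_05 = median(years_to_phd)

# Per the committee's convention, a shorter time-to-Ph.D. is assigned a higher
# standardized score, so the raw value is negated before standardization.
value_for_standardization = -measure_05

print(measure_05, value_for_standardization)
```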

Measure 06 represents the fraction of FY1975-79 program graduates who reported at the time they had completed requirements for the doctorate that they had signed contracts or made firm commitments for postgraduation employment (including postdoctoral appointments as well as other positions in the academic or nonacademic sectors) and who provided the names of their prospective employers. Although this measure is likely to vary discipline by discipline according to the availability of employment opportunities, a program's standing relative to other programs in the same discipline should not be affected by this variation. In theory, the graduates with the greatest promise should have the easiest time in finding jobs. However, the measure is also influenced by a variety of other factors, such as personal job preferences and restrictions in geographic mobility, that are unrelated to the ability of the individual. It also should be noted parenthetically that unemployment rates for doctoral recipients are quite low and that nearly all of the graduates seeking jobs find positions soon after completing their doctoral programs.9 Furthermore, first employment after graduation is by no means a measure of career achievement, which is what one would like to have if reliable data were available.

Measure 07, a variant of measure 06, constitutes the fraction of FY1975-79 program graduates who indicated that they had made firm commitments for employment in Ph.D.-granting universities and who provided the names of their prospective employers. This measure may be presumed to be an indication of the fraction of graduates likely to pursue careers in academic research, although there is no evidence concerning how many of them remain in academic research in the long term. In some disciplines the path from Ph.D. to postdoctoral apprenticeship to junior faculty has traditionally been regarded as the road of success for the growth and development of research talent. The committee is well aware, of course, that other paths, such as employment in the major laboratories of industry and government, provide equally attractive opportunities for growth. Indeed, in recent years increasing numbers of graduates are entering the nonacademic sectors. Unfortunately, the data compiled from the NRC's Survey of Earned Doctorates do not enable one to distinguish between employment in the top-flight laboratories of industry and government and employment in other areas of the nonacademic sectors. In each of the four engineering disciplines, more than half of the doctoral graduates accept first employment outside the academic sector (see Table 2.2), and many of the best qualified graduates in these and other disciplines undoubtedly are employed, as a matter of choice, in industrial or government laboratories.

9 For new Ph.D. recipients in science and engineering the unemployment rate has been less than 2 percent (see National Research Council, Postdoctoral Appointments and Disappointments, National Academy Press, Washington, D.C., 1981, p. 313).

TABLE 2.2 Percentage of FY1975-79 Doctoral Recipients with Definite Commitments for Employment Outside the Academic Sector*

  Chemical Engineering      74
  Civil Engineering         51
  Electrical Engineering    66
  Mechanical Engineering    65

*Percentages are based on respondents to the NRC's Survey of Earned Doctorates who indicated that they had made firm commitments for postgraduation employment and who provided the names of their prospective employers. These percentages may be considered to be lower-bound estimates of the actual percentages of doctoral recipients employed outside the academic sector.

Measure 07 reflects only academic employment; it is a program characteristic rather than a dimension of program quality.

The inclusion of measure 07 in this report has been an issue of great concern, much debated by the committee. The majority of the committee considers the measure to be of sufficient interest to warrant its inclusion. High values on measure 07 mark programs from which relatively large proportions of graduates accept first employment at academic institutions that award the Ph.D. degree. Having assembled data for measure 07 in all 32 disciplines covered in the assessment, the majority of the committee prefers that these data be reported, recognizing that readers will attend to them or not depending on their interest in this measure. Three members of the committee have objected to the majority position and object also to the inclusion of measure 06. Their views are presented in the Minority Statement, which follows Chapter VII in this report.

REPUTATIONAL SURVEY RESULTS

In April 1981, survey forms were mailed to a total of 975 faculty members in chemical engineering, civil engineering, electrical engineering, and mechanical engineering. The evaluators were selected from the faculty lists furnished by the study coordinators at the 228 universities covered in the assessment. These evaluators constituted approximately 16 percent of the total faculty population--6,196 faculty members--in the engineering programs being evaluated (see Table 2.3). The survey sample was chosen on the basis of the number of faculty in a particular program and the number of doctorates awarded in the previous five years (FY1976-80)--with the stipulation that at least one evaluator was selected from every program covered in the assessment. In selecting the sample each faculty rank was represented in proportion to the total number of individuals holding that rank, and preference was given to those faculty members whom the study coordinators had nominated to serve as evaluators. As shown in Table 2.3, 822 individuals, 84 percent of the survey sample in engineering, had been recommended by study coordinators.10

Each evaluator was asked to consider a stratified random sample of 50 research-doctorate programs in his or her discipline--with programs stratified by the number of faculty members associated with each program. Every program was included on 150 survey forms. The 50 programs to be evaluated appeared on a survey form in random sequence, preceded by an alphabetized list of all programs in that discipline that were being included in the study. No evaluator was asked to consider a program at his or her own institution. Ninety percent of the survey sample group were provided the names of faculty members in each of the 50 programs to be evaluated, along with data on the total number of doctorates awarded in the last five years.11 The inclusion of this information represents a significant departure from the procedures used in earlier reputational assessments. For purposes of comparison with previous studies, 10 percent (randomly selected in each discipline) were not furnished any information other than the names of the programs.

The survey items were adapted from the form used in the Roose-Andersen study. Prior to mailing, the instrument was pretested using a small sample of faculty members in chemistry and psychology. As a result, two significant improvements were made in the original survey design. A question was added on the extent to which the evaluator was familiar with the work of the faculty in each program. Responses to this question, reported as measure 11, provide some insight into the relationship between faculty recognition and the reputational standing of a program.12 Also added was a question on the evaluator's field of specialization--thereby making it possible to compare program evaluations in different specialty areas within a particular discipline.

A total of 579 faculty members in engineering--59 percent of those asked to participate--completed and returned survey forms (see Table 2.3). Two factors probably have contributed to this response rate being approximately 20 percentage points below the rates reported in the Cartter and Roose-Andersen studies.13 First, because of the considerable expense of printing individualized survey forms (each 25-30 pages), second copies were not sent to sample members not responding to the first mailing14--as was done in the Cartter and Roose-Andersen efforts.

10 A detailed analysis of the survey participants in each discipline is given in subsequent chapters.

11 This information was furnished to the committee by the study coordinators at the universities participating in the study.

12 Evidence of the strength of the relationship is provided by correlations presented in Chapters III-VI, and an analysis of the relationship is provided in Chapter VII.

13 To compare the response rates obtained in the earlier surveys, see Roose and Andersen, Table 28, p. 29.

14 A follow-up letter was sent to those not responding to the first mailing, and a second copy was distributed to those few evaluators who specifically requested another form.

TABLE 2.3 Survey Response by Discipline and Characteristics of Evaluator

                                 Total Program   Survey       Respondents
                                 Faculty (N)     Sample (N)     N      %

Discipline of Evaluator
  Chemical Engineering                979           237         164    69
  Civil Engineering                 1,461           222         129    58
  Electrical Engineering            2,134           273         142    52
  Mechanical Engineering            1,622           243         144    59

Faculty Rank
  Professor                         3,698           597         377    63
  Associate Professor               1,401           244         123    50
  Assistant Professor               1,008           132          79    60
  Other                                89             2           0     0

Evaluator Selection
  Nominated by Institution          1,901           822         518    63
  Other                             4,295           153          61    40

Survey Form
  With Faculty Names                 N/A*           876         525    60
  Without Names                      N/A*            99          54    55

Total All Fields                    6,196           975         579    59

*Not applicable.

Second, it is quite apparent that within the academic community there has been a growing dissatisfaction in recent years with educational assessments based on reputational measures. Indeed, this dissatisfaction was an important factor in the Conference Board's decision to undertake a multidimensional assessment, and some faculty members included in the sample made known to the committee their strong objections to the reputational survey.

As can be seen in Table 2.3, there is some variation in the response rates in the four engineering disciplines. Of particular interest is the relatively high rate of response from chemical engineers and the low rate from those in electrical engineering--the latter may be related to the difficulties encountered in distinguishing between electrical engineering and computer science program faculty members.

It is not surprising to find that the evaluators nominated by study coordinators responded more often than did those who had been selected at random. Each program was considered by an average of approximately 90 survey respondents from other programs in the same discipline.

The evaluators were asked to judge programs in terms of scholarly quality of program faculty, effectiveness of program in educating research scholars/scientists, and change in program quality in the last five years. The mean ratings of a program on these three survey items constitute measures 08, 09, and 10. Evaluators were also asked to indicate the extent to which they were familiar with the work of the program faculty. The average of responses to this item constitutes measure 11.

In making judgments about the quality of faculty, evaluators were instructed to consider the scholarly competence and achievements of the individuals.15 The ratings were furnished on the following scale:

  5  Distinguished
  4  Strong
  3  Good
  2  Adequate
  1  Marginal
  0  Not sufficient for doctoral education
  X  Don't know well enough to evaluate

In assessing the effectiveness of a program, evaluators were asked to consider the accessibility of faculty, the curricula, the instructional and research facilities, the quality of the graduate students, the performance of graduates, and other factors that contribute to a program's effectiveness. This measure was rated accordingly:

  3  Extremely effective
  2  Reasonably effective
  1  Minimally effective
  0  Not effective
  X  Don't know well enough to evaluate

Evaluators were instructed to assess change in program quality on the basis of whether there has been improvement in the last five years in both the scholarly quality of faculty and the effectiveness in educating research scholars/scientists. The following alternatives were provided:

  2  Better than five years ago
  1  Little or no change in last five years
  0  Poorer than five years ago
  X  Don't know well enough to evaluate

15 A copy of the survey instrument and accompanying instructions is included in Appendix C.

Evaluators were asked to indicate their familiarity with the work of the program faculty according to the following scale:

  2  Considerable familiarity
  1  Some familiarity
  0  Little or no familiarity

In the computation of mean ratings on measures 08, 09, and 10, the "don't know" responses were ignored. An average program rating based on fewer than 15 responses (excluding the "don't know" responses) is not reported.

Measures 08, 09, and 10 are subject to many of the same criticisms that have been directed at previous reputational surveys. Although care has been taken to improve the sampling design and to provide evaluators with some essential information about each program, the survey results merely reflect a consensus of faculty opinions. As discussed in Chapter I, these opinions may well be based on out-of-date information or be influenced by a variety of factors unrelated to the quality of the program. In Chapter VII a number of factors that may possibly affect the survey results are examined. In addition to these limitations, it should be pointed out that evaluators, on the average, were unfamiliar with almost one-third of the programs they were asked to consider.16 As might be expected, the smaller and less prestigious programs were not as well known, and for this reason one might have less confidence in the average ratings of these programs. For all four survey measures, standard errors of the mean ratings are reported; they tend to be larger for the lesser known programs. The frequency of response to each of the survey items is discussed in Chapter VII.

16 See Table 7.6 in Chapter VII.
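A minimal Python sketch of the tabulation rule just described may make it concrete: "don't know" responses are dropped, and a mean rating is reported only when at least 15 usable responses remain. The response list is invented for illustration; it is not taken from the survey files.

```python
from statistics import mean

def mean_rating(responses, minimum=15):
    """Mean survey rating for one program on one item (e.g., measure 08).

    'X' ("don't know well enough to evaluate") responses are ignored.
    If fewer than `minimum` usable responses remain, no rating is reported.
    """
    usable = [r for r in responses if r != "X"]
    if len(usable) < minimum:
        return None                      # rating not reported
    return mean(usable)

# Hypothetical responses from evaluators on the 0-5 faculty-quality scale:
responses = [5, 4, 4, "X", 3, 4, 5, 2, 4, 3, "X", 4, 5, 3, 4, 4, 3, 5]
print(mean_rating(responses))            # 3.875, from the 16 usable ratings
```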

Two additional comments should be made regarding the survey activity. First, it should be emphasized that the ratings derived from the survey reflect a program's standing relative to other programs in the same discipline and provide no basis for making cross-disciplinary comparisons. For example, the fact that a larger number of chemical engineering programs received "distinguished" ratings on measure 08 than did electrical engineering programs indicates nothing about the relative quality of faculty in these two disciplines. Nor is it advisable to compare the rating of a program in one discipline with that of a program in another discipline because the ratings are based on the opinions of different groups of evaluators who were asked to judge entirely different sets of programs.

Second, early in the committee's deliberations a decision was made to supplement the ratings obtained from faculty members with ratings from evaluators who hold research-oriented positions in institutions outside the academic sector. These institutions include industrial research laboratories, government research laboratories, and a variety of other research establishments. Over the past 10 years increasing numbers of doctoral recipients have taken positions outside the academic setting. The extensive involvement of these graduates in nonacademic employment is reflected in the percentages reported in Table 2.2: An average of as many as 65 percent of the recent graduates in engineering disciplines indicated that they planned to take positions in nonacademic settings. Data from another NRC survey suggest that the actual fraction employed outside academia may be significantly higher. The committee recognized that the inclusion of nonacademic evaluators would furnish information valuable for assessing nontraditional dimensions of doctoral education and would provide an important new measure not assessed in earlier studies. Results from a survey of this group would provide an interesting comparison with the results obtained from the survey of faculty members. A concentrated effort was made to obtain supplemental funding for adding nonacademic evaluators in selected disciplines to the survey sample, but this effort was unsuccessful. The committee nevertheless remains convinced of the importance of including evaluators from nonacademic research institutions. These institutions are likely to employ increasing fractions of graduates in many disciplines, and it is urged that this group not be overlooked in future assessments of graduate programs.

UNIVERSITY LIBRARY SIZE

The university library holdings are generally regarded as an important resource for students in graduate (and undergraduate) education. The Association of Research Libraries (ARL) has compiled data from its academic member institutions and developed a composite measure of a university library's size relative to those of other ARL members. The ARL Library Index, as it is called, is based on 10 characteristics: volumes held, volumes added (gross), microform units held, current serials received, expenditures for library materials, expenditures for binding, total salary and wage expenditures, other operating expenditures, number of professional staff, and number of nonprofessional staff.17 The 1979-80 index, which constitutes measure 12, is available for 89 of the 228 universities included in the assessment. (These 89 tend to be among the largest institutions.) The limited coverage of this measure is a major shortcoming. It should be noted that the ARL index is a composite description of library size and not a qualitative evaluation of the collections, services, or operations of the library. Also, it is a measure of aggregate size and does not take into account the library holdings in a particular department or discipline. Finally, although universities with more than one campus were instructed to include figures for the main campus only, some in fact may have reported library size for the entire system. Whether this misreporting occurred is not known.

17 See Appendix D for a description of the calculation of this index.

RESEARCH SUPPORT

Using computerized data files provided by the National Science Foundation (NSF) and the National Institutes of Health (NIH), it was possible to identify which faculty members in each program had been awarded research grants during the FY1978-80 period by either of these agencies or by the Alcohol, Drug Abuse, and Mental Health Administration (ADAMHA).18 The fraction of faculty members in a program who had received any research grants from these agencies during this three-year period constitutes measure 13. Since these awards have been made on the basis of peer judgment, this measure is considered to reflect the perceived research competence of program faculty. However, it should be noted that significant amounts of support for research in engineering come from other federal agencies as well, but it was not feasible to compile data from these other sources. It is estimated that 35 percent of the university faculty members in these disciplines who received federal R&D funding obtained their support from NSF and another 10 percent from NIH.19 The remaining 55 percent received support from the Department of Energy, Department of Defense, National Aeronautics and Space Administration, and other federal agencies. It also should be pointed out that only those faculty members who served as principal investigators or coinvestigators are counted in the computation of this measure.

18 A description of these files is provided in Appendix E.

19 Based on special tabulations of data from the NRC's Survey of Doctorate Recipients, 1979.
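To make the construction of measure 13 concrete, the following Python sketch compares a program's faculty list against a set of principal investigators and co-investigators on NSF, NIH, or ADAMHA awards and reports the matched fraction. The names and the simple exact-match rule are illustrative assumptions; the committee's actual matching against the agency files was more involved.

```python
# Hypothetical faculty list for one program (as supplied by the institutional coordinator).
program_faculty = ["A. Baker", "C. Diaz", "E. Fox", "G. Hall", "J. Kim"]

# Hypothetical set of PIs/co-investigators on NSF, NIH, or ADAMHA research
# grants active at any time during FY1978-80 (merged from the agency files).
grant_awardees = {"C. Diaz", "J. Kim", "R. Lopez", "S. Chen"}

# Measure 13: fraction of program faculty holding at least one such grant.
funded = [name for name in program_faculty if name in grant_awardees]
measure_13 = len(funded) / len(program_faculty)

print(f"{len(funded)} of {len(program_faculty)} faculty funded: measure 13 = {measure_13:.2f}")
```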

Measure 14 describes the total FY1979 expenditures by a university for R&D in all fields of engineering. These data have been furnished to the NSF20 by universities and include expenditures of funds from both federal and nonfederal sources. If an institution has more than one program being evaluated in the same discipline, the aggregate university expenditures for research in that discipline are reported for each of the programs. In each discipline data are recorded for the 100 universities with the largest R&D expenditures. Unfortunately, these data are available only for aggregate expenditures in engineering and not for expenditures in the individual engineering disciplines; thus, the value reported for an individual program represents the total university expenditures in engineering. This measure has several limitations related to the procedures by which the data have been collected. The committee notes that there is evidence within the source document21 that universities employ varying practices for categorizing and reporting expenditures. Apparently, institutional support of research, industrial support of research, and expenditure of indirect costs are reported by different institutions in different categories (or not reported at all). Since measure 14 is based on total expenditures from all sources, the data used here are perturbed only when these types of expenditures are not subsumed under any reporting category. In contrast with measure 13, measure 14 is not reported on a scale relative to the number of faculty members and thus reflects the overall level of research activity at an institution in a particular discipline. Although research grants in the sciences and engineering provide some support for graduate students as well, these measures should not be confused with measure 04, which pertains to fellowships and training grants.

PUBLICATION RECORDS

Data from the 1978 and the 1979 Science Citation Index have been compiled22 on published articles associated with research-doctorate programs. Publication counts were associated with programs on the basis of the discipline of the journal in which an article appeared and the institution with which the author was affiliated. Coauthored articles were proportionately attributed to the institutions of the individual authors. Articles appearing in multidisciplinary journals (e.g., Science, Nature) were apportioned according to the characteristic mix of subject matter in those journals. For the purposes of assigning publication counts, this mix can be estimated with reasonable accuracy.23 Two measures have been derived from the publication records: measure 15--the total number of articles published in the 1978-79 period that have been associated with a research-doctorate program--and measure 16--an estimation of the "influence" of these articles. The latter is a product of the number of articles attributed to a program and the estimated influence of the journals in which these articles appeared. The influence of a journal is determined from the weighted number of times, on the average, an article in that journal is cited--with references from frequently cited journals counting more heavily. A more detailed explanation of the derivation of these measures is given in Appendix F.

20 A copy of the survey instrument used to collect these data appears in Appendix E.

21 National Science Foundation, Academic Science: R and D Funds, Fiscal Year 1979, U.S. Government Printing Office, Washington, D.C., NSF 81-301, 1981.

22 The publication data have been generated for the committee's use by Computer Horizons, Inc., using source files provided by the Institute for Scientific Information.

23 Francis Narin, Evaluative Bibliometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity, Report to the National Science Foundation, March 1976, p. 203.
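The following Python sketch illustrates, under simplifying assumptions, how counts of the kind used for measures 15 and 16 can be built from article records: a coauthored article is split equally among the authors' institutions, and the "influence" figure weights each attributed article by an influence value for its journal. The article records and the journal weights are invented; the actual Computer Horizons journal-influence values and attribution rules are described in Appendix F.

```python
from collections import defaultdict

# Hypothetical 1978-79 article records: each lists the institutions of the
# authors and the journal in which the article appeared.
articles = [
    {"institutions": ["Univ A", "Univ B"], "journal": "J. Fluid Mech."},
    {"institutions": ["Univ A"], "journal": "IEEE Trans."},
    {"institutions": ["Univ B", "Univ B", "Univ C"], "journal": "J. Appl. Mech."},
]

# Hypothetical journal influence weights (in the actual study, derived from
# weighted citation rates; these numbers are made up).
journal_influence = {"J. Fluid Mech.": 1.8, "IEEE Trans.": 1.2, "J. Appl. Mech.": 0.9}

article_counts = defaultdict(float)    # measure 15: attributed article counts
influence = defaultdict(float)         # measure 16: influence-weighted counts

for art in articles:
    share = 1.0 / len(art["institutions"])     # proportional attribution of coauthored papers
    weight = journal_influence[art["journal"]]
    for inst in art["institutions"]:
        article_counts[inst] += share
        influence[inst] += share * weight

print(dict(article_counts))   # e.g., Univ A receives 1.5 attributed articles
print(dict(influence))
```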

Neither measure 15 nor measure 16 is based on actual counts of articles written only by program faculty. However, extensive analysis of the "influence" index in the fields of physics, chemistry, and biochemistry has demonstrated the stability of this index and the reliability associated with its use.24 Of course, this does not imply that the measure captures subtle aspects of publication "influence." It is of interest to note that indices similar to measures 15 and 16 have been shown to be highly correlated with the peer ratings of graduate departments compiled in the Roose-Andersen study.25

It must be emphasized that these measures encompass articles (published in selected journals) by all authors affiliated with a given university. Included therefore are articles by program faculty members, students and research personnel, and even members of other departments in that university who publish in those journals. Moreover, these measures do not take into account the differing sizes of programs, and the measures clearly do depend on faculty size. Although consideration was given to reporting the number of published articles per faculty member, the committee concluded that since the measure included articles by other individuals besides program faculty members, the aggregate number of articles would be a more reliable measure of overall program quality. It should be noted that if a university had more than one program being evaluated in the same discipline, it is not possible to distinguish the relative contribution of each program. In such cases the aggregate university data in that discipline were assigned to each program.

Since the data are confined to 1978-79, they do not take into account institutional mobility of authors after that period. Thus, articles by authors who have moved from one institution to another since 1979 are credited to the former institution. Also, the publication counts fail to include the contributions of faculty members' publications in journals outside their primary discipline. This point may be especially important for those programs with faculty members whose research is at the intersection of several different disciplines.

The reader should be aware of two additional caveats with regard to the interpretation of measures 15 and 16. First, both measures are based on counts of published articles and do not include books. Since in engineering most scholarly contributions are published as journal articles, this may not be a serious limitation. Second, the "influence" measure should not be interpreted as an indicator of the impact of articles by individual authors. Rather it is a measure of the impact of the journals in which articles associated with a particular program have been published. Citation counts, with all their difficulties, would have been preferable since they are attributable to individual authors and they register the impact of books as well as journal articles. However, the difficulty and cost of assembling reliable counts of articles by individual faculty members made their use infeasible.

24 Narin, pp. 283-307.

25 Richard C. Anderson, Francis Narin, and Paul McAllister, "Publication Ratings Versus Peer Ratings of Universities," Journal of the American Society for Information Science, March 1978, pp. 91-103; and Lyle V. Jones, "The Assessment of Scholarship," New Directions for Program Evaluation, No. 6, 1980, pp. 1-20.

ANALYSIS AND PRESENTATION OF THE DATA

The next four chapters present all of the information that has been compiled on individual research-doctorate programs in chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Each chapter follows a similar format, designed to assist the reader in the interpretation of program data. The first table in each chapter provides a list of the programs evaluated in a discipline--including the names of the universities and departments or academic units in which programs reside--along with the full set of data compiled for individual programs. Programs are listed alphabetically according to name of institution, and both raw and standardized values are given for all but one measure.26 For the reader's convenience an insert of information from Table 2.1 is provided that identifies each of the 16 measures reported in the table and indicates the raw scale used in reporting values for a particular measure.

Standardized values, converted from raw values to have a mean of 50 and a standard deviation of 10,27 are computed for every measure so that comparisons can easily be made of a program's relative standing on different measures. Thus, a standardized value of 30 corresponds with a raw value that is two standard deviations below the mean for that measure, and a standardized value of 70 represents a raw value two standard deviations above the mean. While the reporting of values in standardized form is convenient for comparing a particular program's standing on different measures, it may be misleading in interpreting actual differences in the values reported for two or more programs--especially when the distribution of the measure being examined is highly skewed. For example, the numbers of published articles (measure 15) associated with four electrical engineering programs are reported in Table 5.1 as follows:

  Program    Raw Value    Standardized Value
     A            1               41
     B            2               42
     C           11               45
     D           16               47

Although programs C and D have many times the number of articles as have programs A and B, the differences reported on a standardized scale appear to be small. Thus, the reader is urged to take note of the raw values before attempting to interpret differences in the standardized values given for two or more programs.

26 Since the scale used to compute measure 16--the estimated "influence" of published articles--is entirely arbitrary, only standardized values are reported for this measure.

27 The conversion was made from the precise raw value rather than from the rounded value reported for each program. Thus, two programs may have the same reported raw value for a particular measure but different standardized values.
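A minimal Python sketch of the standardization described above converts raw values for one measure to standardized values with mean 50 and standard deviation 10. Whether the study used the population or the sample standard deviation is not stated here, so the use of the population form below is an assumption.

```python
from statistics import mean, pstdev

def standardize(raw_values):
    """Convert raw values for one measure across programs to standardized
    values with mean 50 and standard deviation 10."""
    m = mean(raw_values)
    s = pstdev(raw_values)          # population standard deviation (an assumption)
    return [50 + 10 * (x - m) / s for x in raw_values]

# Hypothetical raw values of a highly skewed measure across programs in one discipline.
raw = [1, 2, 11, 16, 40, 85, 120]
for r, z in zip(raw, standardize(raw)):
    print(f"raw {r:5.1f} -> standardized {z:5.1f}")
```

As the output of such a sketch illustrates, a strongly skewed measure can leave very different raw values only a few standardized points apart, which is the caution given in the text.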

The initial table in each chapter also presents estimated standard errors of mean ratings derived from the four survey items (measures 08-11). A standard error is an estimated standard deviation of the sample mean rating and may be used to assess the stability of a mean rating reported for a particular program.28 For example, one may assert (with .95 confidence) that the population mean rating would lie within two standard errors of the sample mean rating reported in this assessment. No attempt has been made to establish a composite ranking of programs in a discipline. Indeed, the committee is convinced that no single measure adequately reflects the quality of a research-doctorate program and wishes to emphasize the importance of viewing individual programs from the perspective of multiple indices or dimensions.

The second table in each chapter presents summary statistics (i.e., number of programs evaluated, mean, standard deviation, and decile values) for each of the program measures.29 The reader should find these statistics helpful in interpreting the data reported on individual programs. Next is a table of the intercorrelations among the various measures for that discipline. This table should be of particular interest to those desiring information about the interrelations of the various measures.

28 The standard error estimate has been computed by dividing the standard deviation of a program's ratings by the square root of the number of ratings. For a more extensive discussion of this topic, see Fred N. Kerlinger, Foundations of Behavioral Research, Holt, Rinehart and Winston, Inc., New York, 1973, Chapter 12. Readers should note that the estimate is a measure of the variation in response and by no means includes all possible sources of error.

29 Standardized scores have been computed from precise values of the mean and standard deviation of each measure and not the rounded values reported in the second table of each chapter.
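A short Python sketch of the computation noted in footnote 28: the standard deviation of a program's ratings divided by the square root of the number of ratings, together with the two-standard-error interval mentioned in the text. The ratings below are invented for illustration.

```python
from math import sqrt
from statistics import mean, stdev

def mean_and_standard_error(ratings):
    """Sample mean rating and its estimated standard error
    (standard deviation of the ratings divided by sqrt(n))."""
    m = mean(ratings)
    se = stdev(ratings) / sqrt(len(ratings))
    return m, se

# Hypothetical measure-08 ratings received by one program.
ratings = [4, 5, 3, 4, 4, 5, 3, 4, 2, 4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 3]
m, se = mean_and_standard_error(ratings)
print(f"mean = {m:.2f}, standard error = {se:.2f}")
print(f"approximate 95% interval: {m - 2*se:.2f} to {m + 2*se:.2f}")
```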

The remainder of each chapter is devoted to an examination of results from the reputational survey. Included are an analysis of the characteristics of survey participants and graphical portrayals of the relationship of the mean rating of scholarly quality of faculty (measure 08) with the number of faculty (measure 01) and the relationship of the mean rating of program effectiveness (measure 09) with the number of graduates (measure 02). A frequently mentioned criticism of the Roose-Andersen and Cartter studies is that small but distinguished programs have been penalized in the reputational ratings because they are not as highly visible as larger programs of comparable quality. The comparisons of survey ratings with measures of program size are presented as the first two figures in each chapter and provide evidence about the number of small programs in each discipline that have received high reputational ratings. Since in each case the reputational rating is more highly correlated with the square root of program size than with the size measure itself, measures 01 and 02 are plotted on a square root scale.30

To assist the reader in interpreting results of the survey evaluations, each chapter concludes with a graphical presentation of the mean rating of the scholarly quality of faculty (measure 08) for every program and an associated "confidence interval" of 1.5 standard errors. In comparing the mean ratings of two programs, if their reported confidence intervals of 1.5 standard errors do not overlap, one may safely conclude that the program ratings are significantly different (at the .05 level of significance)--i.e., the observed difference in mean ratings is too large to be plausibly attributable to sampling error.31

The final chapter of this report gives an overview of the evaluation process in the four engineering disciplines and includes a summary of general findings. Particular attention is given to some of the extraneous factors that may influence program ratings of individual evaluators and thereby distort the survey results. The chapter concludes with a number of specific suggestions for improving future assessments of research-doctorate programs.

30 For a general discussion of transforming variables to achieve linear fits, see John W. Tukey, Exploratory Data Analysis, Addison-Wesley, Reading, Massachusetts, 1977.

31 This rule for comparing nonoverlapping intervals is valid as long as the ratio of the two estimated standard errors does not exceed 2.41. (The exact statistical significance of this criterion then lies between .050 and .034.) Inspection of the standard errors reported in each discipline shows that for programs with mean ratings differing by less than 1.0 (on measure 08), the standard error of one mean very rarely exceeds twice the standard error of another.
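The interval-comparison rule described above, including the side condition from footnote 31, can be written out directly. The Python sketch below uses invented mean ratings and standard errors; it is an illustration of the rule of thumb, not of the committee's own tabulations.

```python
def ratings_differ(mean_a, se_a, mean_b, se_b, half_width=1.5, max_se_ratio=2.41):
    """Apply the report's rule of thumb: if the two programs' intervals of
    +/- 1.5 standard errors do not overlap (and the ratio of the standard
    errors does not exceed 2.41), treat the mean ratings as significantly
    different at roughly the .05 level."""
    if max(se_a, se_b) / min(se_a, se_b) > max_se_ratio:
        raise ValueError("standard errors too unequal for this rule of thumb")
    lo_a, hi_a = mean_a - half_width * se_a, mean_a + half_width * se_a
    lo_b, hi_b = mean_b - half_width * se_b, mean_b + half_width * se_b
    return hi_a < lo_b or hi_b < lo_a     # True when the intervals do not overlap

# Hypothetical mean ratings (measure 08) and standard errors for pairs of programs.
print(ratings_differ(4.2, 0.10, 3.7, 0.12))   # True: intervals do not overlap
print(ratings_differ(4.2, 0.10, 4.0, 0.12))   # False: intervals overlap
```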
