

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Origins of Study and Selection of Programs

Each year more than 22,000 candidates are awarded doctorates in engineering, the humanities, and the sciences from approximately 250 U.S. universities. They have spent, on the average, five-and-a-half years in intensive education in preparation for research careers either in universities or in settings outside the academic sector, and many will make significant contributions to research. Yet we are poorly informed concerning the quality of the programs producing these graduates. This study is intended to provide information pertinent to this complex and controversial subject.

The charge to the study committee directed it to build upon the planning that preceded it. The planning stages included a detailed review of the methodologies and the results of past studies that had focused on the assessment of doctoral-level programs. The committee has taken into consideration the reactions of various groups and individuals to those studies. The present assessment draws upon previous experience with program evaluation, with the aim of improving what was useful and avoiding some of the difficulties encountered in past studies.

The present study, nevertheless, is not purely reactive: it has its own distinctive features. First, it focuses only on programs awarding research doctorates and their effectiveness in preparing students for careers in research. Although other purposes of graduate education are acknowledged to be important, they are outside the scope of this assessment. Second, the study examines a variety of different indices that may be relevant to program quality. This multidimensional approach represents an explicit recognition of the limitations of studies that rely entirely on peer ratings of perceived quality--the so-called reputational ratings.
Finally, in the compilation of reputational ratings in this study, evaluators were provided the names of faculty members involved with each program to be rated and the number of research doctorates awarded in the last five years. In previous reputational studies evaluators were not supplied such information.

During the past two decades increasing attention has been given to describing and measuring the quality of programs in graduate education. It is evident that the assessment of graduate programs is highly important for university administrators and faculty, for employers in industrial and government laboratories, for graduate students and prospective graduate students, for policymakers in state and national organizations, and for private and public funding agencies. Past experience, however, has demonstrated the difficulties with such assessments and their potentially controversial nature. As one critic has asserted:

    . . . the overall effect of these reports seems quite clear. They tend, first, to make the rich richer and the poor poorer; second, the example of the highly ranked clearly imposes constraints on those institutions lower down the scale (the "Hertz-Avis" effect). And the effect of such constraints is to reduce diversity, to reward conformity or respectability, to penalize genuine experiment or risk. There is, also, I believe, an obvious tendency to promote the prevalence of disciplinary dogma and orthodoxy. All of this might be tolerable if the reports were tolerably accurate and judicious, if they were less prescriptive and more descriptive; if they did not pretend to "objectivity" and if the very fact of ranking were not pernicious and invidious; if they genuinely promoted a meaningful "meritocracy" (instead of simply perpetuating the status quo ante and an establishment mentality). But this is precisely what they cannot claim to be or do.1

The widespread criticisms of ratings in graduate education were carefully considered in the planning of this study. At the outset consideration was given to whether a national assessment of graduate programs should be undertaken at this time and, if so, what methods should be employed. The next two sections in this chapter examine the background and rationale for the decision by the Conference Board of Associated Research Councils2 to embark on such a study. The remainder of the chapter describes the selection of disciplines and programs to be covered in the assessment. The overall study encompasses a total of 2,699 graduate programs in 32 disciplines.
In this report--the fourth of five reports issuing from the study--we examine 616 programs in six disciplines in the biological sciences: biochemistry, botany, cellular/molecular biology, microbiology, physiology, and zoology. These programs account for more than 90 percent of the research doctorates awarded in these six disciplines. It should be emphasized that the selection of disciplines to be covered was determined on the basis of total doctoral awards during the FY1976-78 period (as described later in this chapter), and the exclusion of a particular discipline was in no way based on a judgment of the importance of graduate education or research in that discipline. Also, although the assessment is limited to programs leading to the research-doctorate (Ph.D. or equivalent) degree, the Conference Board and study committee recognize that graduate schools provide many other forms of valuable and needed education.

1William A. Arrowsmith, "Preface," in The Ranking Game: The Power of the Academic Elite, by W. Patrick Dolan, University of Nebraska Printing and Duplicating Service, Lincoln, Nebraska, 1976, p. ix.
2The Conference Board includes representatives of the American Council of Learned Societies, American Council on Education, National Research Council, and Social Science Research Council.

PRIOR ATTEMPTS TO ASSESS QUALITY IN GRADUATE EDUCATION

Universities and affiliated organizations have taken the lead in the review of programs in graduate education. At most institutions program reviews are carried out on a regular basis and include a comprehensive examination of the curriculum and educational resources as well as the qualifications of faculty and students. One special form of evaluation is that associated with institutional accreditation:

    The process begins with the institutional or programmatic self-study, a comprehensive effort to measure progress according to previously accepted objectives. The self-study considers the interest of a broad cross-section of constituencies--students, faculty, administrators, alumni, trustees, and in some circumstances the local community. The resulting report is reviewed by the appropriate accrediting commission and serves as the basis for evaluation by a site-visit team from the accrediting group. . . . Public as well as educational needs must be served simultaneously in determining and fostering standards of quality and integrity in the institutions and such specialized programs as they offer.
    Accreditation, conducted through nongovernmental institutional and specialized agencies, provides a major means for meeting those needs.3

Although formal accreditation procedures play an important role in higher education, many university administrators do not view such procedures as an adequate means of assessing program quality. Other efforts are being made by universities to evaluate their programs in graduate education. The Educational Testing Service, with the sponsorship of the Council of Graduate Schools in the United States and the Graduate Record Examinations Board, has recently developed a set of procedures to assist institutions in evaluating their own graduate programs.4

3Council on Postsecondary Accreditation, The Balance Wheel for Accreditation, Washington, D.C., July 1981, pp. 2-3.

While reviews at the institutional (or state) level have proven useful in assessing the relative strengths and weaknesses of individual programs, they have not provided the information required for making national comparisons of graduate programs. Several attempts have been made at such comparisons. The most widely used of these have been the studies by Keniston (1959), Cartter (1966), and Roose and Andersen (1970). All three studies covered a broad range of disciplines in engineering, the humanities, and the sciences and were based on the opinions of knowledgeable individuals in the program areas covered. Keniston5 surveyed the department chairmen at 25 leading institutions. The Cartter6 and Roose-Andersen7 studies compiled ratings from much larger groups of faculty peers. The stated motivation for these studies was to increase knowledge concerning the quality of graduate education:

    A number of reasons can be advanced for undertaking such a study. The diversity of the American system of higher education has properly been regarded by both the professional educator and the layman as a great source of strength, since it permits flexibility and adaptability and encourages experimentation and competing solutions to common problems. Yet diversity also poses problems. . . . Diversity can be a costly luxury if it is accompanied by ignorance. . . . Just as consumer knowledge and honest advertising are requisite if a competitive economy is to work satisfactorily, so an improved knowledge of opportunities and of quality is desirable if a diverse educational system is to work effectively.8

Although the program ratings from the Cartter and Roose-Andersen studies are highly correlated, some substantial differences in successive ratings can be detected for a small number of programs--suggesting changes in the programs or in the perception of the programs.
For the past decade the Roose-Andersen ratings have generally been regarded as the best available source of information on the quality of doctoral programs. Although the ratings are now more than 10 years out of date and have been criticized on a variety of grounds, they are still used extensively by individuals within the academic community and by those in federal and state agencies.

4For a description of these procedures, see M. J. Clark, Graduate Program Self-Assessment Service: Handbook for Users, Educational Testing Service, Princeton, New Jersey, 1980.
5H. Keniston, Graduate Study and Research in the Arts and Sciences at the University of Pennsylvania, University of Pennsylvania Press, Philadelphia, 1959.
6A. M. Cartter, An Assessment of Quality in Graduate Education, American Council on Education, Washington, D.C., 1966.
7K. D. Roose and C. J. Andersen, A Rating of Graduate Programs, American Council on Education, Washington, D.C., 1970.
8Cartter, p. 3.

A frequently cited criticism of the Cartter and Roose-Andersen studies is their exclusive reliance upon reputational measurement:

    The ACE rankings are but a small part of all the evaluative processes, but they are also the most public, and they are clearly based on the narrow assumptions and elitist structures that so dominate the present direction of higher education in the United States. As long as our most prestigious source of information about postsecondary education is a vague popularity contest, the resultant ignorance will continue to provide a cover for the repetitious aping of a single model. . . . All the attempts to change higher education will ultimately be strangled by the "legitimate" evaluative processes that have already programmed a single set of responses from the start.9

A number of other criticisms have been leveled at reputational rankings of graduate programs.10 First, such studies inherently reflect perceptions that may be several years out of date and do not take into account recent changes in a program. Second, the ratings of individual programs are likely to be influenced by the overall reputation of the university--i.e., an institutional "halo effect." Also, a disproportionately large fraction of the evaluators are graduates of and/or faculty members in the largest programs, which may bias the survey results. Finally, on the basis of such studies it may not be possible to differentiate among many of the lesser known programs in which relatively few faculty members have established national reputations in research.
Despite such criticisms several studies based on methodologies similar to those employed by Cartter and Roose and Andersen have been carried out during the past 10 years. Some of these studies evaluated post-baccalaureate programs in areas not covered in the two earlier reports--including business, religion, educational administration, and medicine. Others have focused exclusively on programs in particular disciplines within the sciences and humanities. A few attempts have been made to assess graduate programs in a broad range of disciplines, many of which were covered in the Roose-Andersen and Cartter ratings, but in the opinion of many each has serious deficiencies in the methods and procedures employed. In addition to such studies, a myriad of articles have been written on the assessment of graduate programs since the release of the Roose-Andersen report. With the heightening interest in these evaluations, many in the academic community have recognized the need to assess graduate programs, using other criteria in addition to peer judgment:

    Though carefully done and useful in a number of ways, these ratings (Cartter and Roose-Andersen) have been criticized for their failure to reflect the complexity of graduate programs, their tendency to emphasize the traditional values that are highly related to program size and wealth, and their lack of timeliness or currency. Rather than repeat such ratings, many members of the graduate community have voiced a preference for developing ways to assess the quality of graduate programs that would be more comprehensive, sensitive to the different program purposes, and appropriate for use at any time by individual departments or universities.11

Several attempts have been made to go beyond the reputational assessment. Clark, Hartnett, and Baird, in a pilot study12 of graduate programs in chemistry, history, and psychology, identified as many as 30 possible measures significant for assessing the quality of graduate education. Glower13 has ranked engineering schools according to the total amount of research spending and the number of graduates listed in Who's Who in Engineering. House and Yeager14 rated economics departments on the basis of the total number of pages published by full professors in 45 leading journals in this discipline. Other ratings based on faculty publication records have been compiled for graduate programs in a variety of disciplines, including political science, psychology, and sociology. These and other studies demonstrate the feasibility of a national assessment of graduate programs that is founded on more than reputational standing among faculty peers.

9Dolan, p. 81.
10For a discussion of these criticisms, see David S. Webster, "Methods of Assessing Quality," Change, October 1981, pp. 20-24.
11Clark, p. 1.
12M. J. Clark, R. T. Hartnett, and L. L. Baird, Assessing Dimensions of Quality in Doctoral Education: A Technical Report of a National Study in Three Fields, Educational Testing Service, Princeton, New Jersey, 1976.
13Donald D. Glower, "A Rational Method for Ranking Engineering Programs," Engineering Education, May 1980.
14Donald R. House and James H. Yeager, Jr., "The Distribution of Publication Success Within and Among Top Economics Departments: A Disaggregate View of Recent Evidence," Economic Inquiry, Vol. 16, No. 4, October 1978, pp. 593-598.

DEVELOPMENT OF STUDY PLANS

In September 1976 the Conference Board, with support from the Carnegie Corporation of New York and the Andrew W. Mellon Foundation, convened a three-day meeting to consider whether a study of programs in graduate education should be undertaken. The 40 invited participants15 in this meeting included academic administrators, faculty members, and agency and foundation officials and represented a variety of institutions, disciplines, and convictions. In these discussions there was considerable debate concerning whether the potential benefits of such a study outweighed the possible misrepresentations of the results. On the one hand, "a substantial majority of the Conference [participants believed] that the earlier assessments of graduate education have received wide and important use: by students and their advisors, by the institutions of higher education as aids to planning and the allocation of educational functions, as a check on unwarranted claims of excellence, and in social science research."16 On the other hand, the conference participants recognized that a new study assessing the quality of graduate education "would be conducted and received in a very different atmosphere than were the earlier Cartter and Roose-Andersen reports. . . . Where ratings were previously used in deciding where to increase funds and how to balance expanding programs, they might now be used in deciding where to cut off funds and programs."

After an extended debate of these issues, it was the recommendation of this conference that a study with particular emphasis on the effectiveness of doctoral programs in educating research personnel be undertaken.
The recommendation was based principally on four considerations: (1) the importance of the study results to national and state bodies, (2) the desire to stimulate continuing emphasis on quality in graduate education, (3) the need for current evaluations that take into account the many changes that have occurred in programs since the Roose-Andersen study, and (4) the value of extending the range of measures used in evaluative studies of graduate programs. Although many participants expressed interest in an assessment of master's degree and professional degree programs, insurmountable problems prohibited the inclusion of these types of programs in this study.

Following this meeting a 13-member committee,17 co-chaired by Gardner Lindzey and Harriet A. Zuckerman, was formed to develop a detailed plan for a study limited to research-doctorate programs and designed to improve upon the methodologies utilized in earlier studies. In its deliberations the planning committee carefully considered the criticisms of the Roose-Andersen study and other national assessments. Particular attention was paid to the feasibility of compiling a variety of specific measures (e.g., faculty publication records, quality of students, program resources) that were judged to be related to the quality of research-doctorate programs. Attention was also given to making improvements in the survey instrument and procedures used in the Cartter and Roose-Andersen studies. In September 1978 the planning group submitted a comprehensive report describing alternative strategies for an evaluation of the quality and effectiveness of research-doctorate programs:

    The proposed study has its own distinctive features. It is characterized by a sharp focus and a multidimensional approach. (1) It will focus only on programs awarding research doctorates; other purposes of doctoral training are acknowledged to be important, but they are outside the scope of the work contemplated. (2) The multidimensional approach represents an explicit recognition of the limitations of studies that make assessments solely in terms of ratings of perceived quality provided by peers--the so-called reputational ratings. Consequently, a variety of quality-related measures will be employed in the proposed study and will be incorporated in the presentation of the results of the study.18

This report formed the basis for the decision by the Conference Board to embark on a national assessment of doctorate-level programs in the sciences, engineering, and the humanities. In June 1980 an 18-member committee was appointed to oversee the study.

15See Appendix G for a list of the participants in this conference.
16From a summary of the Woods Hole Conference (see Appendix G).
17See Appendix H for a list of members of the planning committee.
The committee,19 made up of individuals from a diverse set of disciplines within the sciences, engineering, and the humanities, includes seven members who had been involved in the planning phase and several members who presently serve or have served as graduate deans in either public or private universities. During the first eight months the committee met three times to review plans for the study activities, make decisions on the selection of disciplines and programs to be covered, and design the survey instruments to be used. Early in the study an effort was made to solicit the views of presidents and graduate deans at more than 250 universities. Their suggestions were most helpful to the committee in drawing up final plans for the assessment. With the assistance of the Council of Graduate Schools in the United States, the committee and its staff have tried to keep the graduate deans informed about the progress being made in this study. The final section of this chapter describes the procedures followed in determining which research-doctorate programs were to be included in the assessment.

18National Research Council, A Plan to Study the Quality and Effectiveness of Research-Doctorate Programs, 1978 (unpublished report).
19See p. vii of this volume for a list of members of the study committee.

SELECTION OF DISCIPLINES AND PROGRAMS TO BE EVALUATED

One of the most difficult decisions made by the study committee was the selection of disciplines to be covered in the assessment. Early in the planning stage it was recognized that some important areas of graduate education would have to be left out of the study. Limited financial resources required that efforts be concentrated on a total of no more than about 30 disciplines in the biological sciences, engineering, humanities, mathematical and physical sciences, and social sciences. At its initial meeting the committee decided that the selection of disciplines within each of these five areas should be made primarily on the basis of the total number of doctorates awarded nationally in recent years.

At the time the study was undertaken, aggregate counts of doctoral degrees earned during the FY1976-78 period were available from two independent sources--the Educational Testing Service (ETS) and the National Research Council (NRC). Table 1.1 presents doctoral awards data for 19 disciplines within the life sciences (including biological and agricultural sciences). As alluded to in footnote 1 of the table, discrepancies between the ETS and NRC counts may be explained, in part, by differences in the data collection procedures. The ETS counts, derived from information provided by universities, have been categorized according to the discipline of the department/academic unit in which the degree was earned. The NRC counts were tabulated from the survey responses of FY1976-78 Ph.D. recipients, who had been asked to identify their fields of specialty.
Since separate totals for research doctorates in anatomy, biophysics, cellular/molecular biology, ecology, genetics, pathology, pharmacology, and physiology were not available from the ETS manual, the committee made its selection of six disciplines primarily on the basis of the NRC data. In the case of cellular/molecular biology, consideration was given to the fact that the NRC count excludes doctoral awards in genetics, anatomy, developmental biology, and other related fields and thus substantially underestimates the total number of doctorates in cellular/molecular biology.20

20Evidence for this may be found from the data provided by institutional coordinators, who reported that a total of 1,871 doctoral recipients graduated from 89 cellular/molecular biology programs during the FY1976-80 period. See Table 1.2, p. 13.

TABLE 1.1  Number of Research Doctorates Awarded in Biological Science Disciplines, FY1976-78

                                              Source of Data1
                                              ETS        NRC
Disciplines Included in the Assessment
  Biochemistry                              1,428      1,833
  Microbiology                              1,094      1,358
  Zoology                                   1,045        743
  Botany                                      869        890
  Physiology                                  N/A        921
  Cellular/Molecular Biology2                 N/A        567
    Total                                              6,312
Disciplines Not Included in the Assessment
  Public Health                               647        372
  Forestry & Natural Resources Mgmt           620        416
  Agronomy & Soil Sciences                    616        642
  Animal Sciences                             521        321
  Entomology                                  482        443
  Agricultural Economics                      477        464
  Horticulture                                197        176
  Pharmacology                                N/A        614
  Ecology                                     N/A        473
  Genetics                                    N/A        409
  Anatomy                                     N/A        392
  Biophysics                                  N/A        375
  Pathology                                   N/A        282
  Other Biological Sciences                   N/A      2,671
    TOTAL                                             14,362

1Data on FY1976-78 doctoral awards were derived from two independent sources: Educational Testing Service (ETS), Graduate Programs and Admissions Manual, 1979-81, and the NRC's Survey of Earned Doctorates, 1976-78. Differences in field definitions account for discrepancies between the ETS and NRC data.
2NRC data exclude doctoral awards in genetics, anatomy, developmental biology, and other related fields and thus substantially underestimate the number of doctorates in cellular/molecular biology.

The selection of biological science disciplines to be covered in the assessment was especially difficult since there are differing opinions within the scientific community concerning the most appropriate taxonomy of biological fields in graduate education and research. Several knowledgeable individuals were consulted regarding this matter. The taxonomy the committee decided to use in this assessment, although considered by some to be out of date, reflects the departmental structure commonly found in graduate institutions. Some readers may be surprised not to find biology among the six disciplines selected. Since biology encompasses many different biological science disciplines, members of the committee were concerned that a university coordinator, when asked to identify research-doctorate programs to be included in the assessment, might have considerable difficulty in deciding whether a particular biological science program belonged under "biology" or one of the other disciplinary categories. It should be noted that many programs found in departments of biology have been included in the assessment of programs in cellular/molecular biology. In addition, programs from departments of anatomy, biochemistry, biophysics, cell biology, developmental biology, and genetics have been included in the assessment in cellular and molecular biology (see Table 5.1 in Chapter V).

The selection of the research-doctorate programs to be evaluated in each discipline was made in two stages. Programs meeting either of the following criteria21 were initially nominated for inclusion in the study: (1) more than a specified number (see below) of research doctorates awarded during the FY1976-78 period or (2) more than one-third of that specified number of doctorates awarded in FY1979. In each discipline the specified number of doctorates required for inclusion in the study was determined in such a way that the programs meeting this criterion accounted for at least 90 percent of the doctorates awarded in that discipline during the FY1976-78 period.
In the biological science disciplines the following numbers of FY1976-78 doctoral awards were required to satisfy the first criterion (above):

    Biochemistry--5 or more doctorates
    Botany--7 or more doctorates
    Cellular/Molecular Biology--10 or more doctorates
    Microbiology--4 or more doctorates
    Physiology--11 or more doctorates
    Zoology--8 or more doctorates

A list of the nominated programs at each institution was then sent to a designated individual (usually the graduate dean) who had been appointed by the university president to serve as study coordinator for the institution. The coordinator was asked to review the list and eliminate any programs no longer offering research doctorates or not belonging in the designated discipline. The coordinator also was given an opportunity to nominate additional programs that he or she believed should be included in the study.22 Coordinators were asked to restrict their nominations to programs that they considered to be "of uncommon distinction" and that had awarded no fewer than two research doctorates during the past two years. In order to be eligible for inclusion, of course, programs had to belong in one of the disciplines covered in the study. If the university offered more than one research-doctorate program in a discipline, the coordinator was instructed to provide information on each of them so that these programs could be evaluated separately. In each of the six biological science disciplines it was not unusual for a university to have separate programs from the graduate school of arts and sciences and from the medical school, school of agriculture, or the school of public health. In such cases the separate programs have been identified according to the schools in which they reside within the university. In many institutions research-doctorate programs that have been identified as being located in academic units other than arts and sciences nonetheless are considered within the academic structure of the graduate school.

The committee received excellent cooperation from the study coordinators at universities. Of the 243 institutions that were identified as having one or more research-doctorate programs satisfying the criteria (listed earlier) for inclusion in the study, only 7 declined to participate in the study and another 8 failed to provide the program information requested within the three-month period allotted (despite several reminders).

21In the first three volumes of the committee's study, which pertain to the mathematical and physical sciences, humanities, and engineering, it is mistakenly reported that a third criterion based on results from the Roose-Andersen study was used in the nomination of programs to be included in the assessment. This third criterion, while at one time considered by the committee, was not adopted.
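The two-stage nomination procedure described above can be restated compactly. The following is a minimal, hypothetical sketch: the per-discipline thresholds are the published FY1976-78 cutoffs quoted in the text, but the function name and the example award counts are illustrative inventions, not material from the report.

```python
# Hypothetical sketch of the screening rule used to nominate programs.
# THRESHOLDS holds the published FY1976-78 cutoffs; everything else
# (function name, example counts) is invented for illustration.

THRESHOLDS = {  # minimum FY1976-78 doctorates for criterion (1)
    "Biochemistry": 5,
    "Botany": 7,
    "Cellular/Molecular Biology": 10,
    "Microbiology": 4,
    "Physiology": 11,
    "Zoology": 8,
}

def nominated(discipline: str, fy1976_78_awards: int, fy1979_awards: int) -> bool:
    """Criterion (1): at least the specified number of research doctorates
    awarded during FY1976-78; or criterion (2): more than one-third of that
    specified number awarded in FY1979 alone."""
    n = THRESHOLDS[discipline]
    return fy1976_78_awards >= n or fy1979_awards > n / 3

# A zoology program with only 3 doctorates over FY1976-78 still qualifies
# if it awarded 3 in FY1979 alone, since 3 > 8/3:
print(nominated("Zoology", 3, 3))  # → True
```

Criterion (2) presumably serves to catch newer or rapidly growing programs whose FY1979 output alone suggests they would meet the three-year cutoff going forward.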
None of these 15 institutions had doctoral programs that had received strong or distinguished reputational ratings in prior national studies. Since the information requested had not been provided, the committee decided not to include programs from these institutions in any aspect of the assessment. In each of the six chapters that follow, a list is given of the universities that met the criteria for inclusion in a particular discipline but that are not represented in the study.

As a result of nominations by institutional coordinators, some programs were added to the original list and others dropped. Table 1.2 reports the final coverage in each of the six biological science disciplines. The number of programs evaluated varies considerably by discipline. A total of 139 biochemistry and 134 microbiology programs have been included in the study; in zoology only about half this number have been included. Although the final determination of whether a program should be considered in the assessment was left in the hands of the institutional coordinator, it is entirely possible that a few programs meeting the criteria for inclusion in the assessment were overlooked by the coordinators. Of particular concern in this regard is the selection of cellular/molecular biology programs. Because of the diversity of departmental structures within universities, one is likely to find inconsistencies in the identification of research-doctorate programs in this discipline. For example, some coordinators decided to include programs in departments of genetics and anatomy, while others chose to exclude such programs.

22See Appendix A for the specific instructions given to the coordinators.

TABLE 1.2  Number of Programs Evaluated in Each Discipline and the Total FY1976-80 Doctoral Awards from These Programs

Discipline                      Programs    FY1976-80 Doctorates*
Biochemistry                         139                   2,753
Botany                                83                   1,574
Cellular/Molecular Biology            89                   1,871
Microbiology                         134                   2,058
Physiology                           101                   1,369
Zoology                               70                   1,753
TOTAL                                616                  11,378

*The data on doctoral awards were provided by the study coordinator at each of the universities covered in the assessment.

In the chapter that follows, a detailed description is given of each of the measures used in the evaluation of research-doctorate programs in the biological sciences. The description includes a discussion of the rationale for using the measure, the source from which data for that measure were derived, and any known limitations that would affect the interpretation of the data reported. The committee wishes to emphasize that there are limitations associated with each of the measures and that none of the measures should be regarded as a precise indicator of the quality of a program in educating scientists for careers in research. The reader is strongly urged to consider the descriptive material presented in Chapter II before attempting to interpret the program evaluations reported in subsequent chapters. In presenting a frank discussion of any shortcomings of each measure, the committee's intent is to reduce the possibility of misuse of the results from this assessment of research-doctorate programs.
