The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
6

Reputation and Data Presentation

INTRODUCTION

Since the first study of research-doctorate programs in 1925, users have focused on the reputational rating of programs as the primary measure for assessing the quality of doctoral programs. Even with the introduction of many quantitative measures in the 1982 and 1995 Studies, ratings of the scholarly quality of faculty by other scholars in the same field, and the resulting rankings of programs, have remained the primary object of attention. Recognizing this fact, the Committee and its Panel on Reputational Measures and Data Presentation set as their task the development of procedures that would:

- Identify useful reputational measures,
- Select raters who have knowledge of the programs that they are asked to rate,
- Provide raters with information about the programs they are rating, and
- Describe clearly the variation in ratings that results from a sample survey and present program ratings in a manner that meaningfully reflects this variation.

A useful reputational measure is one that reflects peer assessment of the scholarly quality of program faculty. Ideally, such a measure would be based only on the knowledge and familiarity of the raters with the scholarly quality of the faculty of the programs they are asked to rate and would not be directly influenced by other factors, such as the overall reputation of the program's institution (a "halo effect") or the size or age of the program. Both the 1982 and the 1995 Studies presented correlations of reputation with a number of other quantitative measures. The next assessment should expand on these correlational analyses and consider including and interpreting multivariate analyses. An example of an expanded analysis that would be of considerable interest is one that explores the relation between scholarly reputation and program size. The 1982 Study found a linear relation between the scholarly reputation of program faculty and the square root of program size.
Ehrenberg and Hurst (1998) also found a positive effect of program size. Both these analyses suggest that there is a point beyond which an increase in program size ceases to be associated with a higher reputational rating, but it is also clear that small programs are not rated as highly as middle-sized and large programs. Further analyses along these lines would be useful.

The Committee believes that the reputational measure of the scholarly quality of faculty is important and consequential. A highly reputed program may have an easier time attracting excellent students, good faculty, and research resources than a program that is less highly rated. At the same time, reputation is not everything. Students, faculty, and funders need to examine detailed program directions and offerings to be able to assess the quality of a program for their particular objectives.

THE MEASUREMENT OF SCHOLARLY QUALITY OF PROGRAM FACULTY: PRACTICES AND CRITICISMS

The Reputational Measure of Scholarly Quality of Program Faculty

To obtain the reputational measure of scholarly quality, raters have been presented with lists of faculty and the number of doctorates awarded in a program over the previous 5 years. They were then asked to rate the programs:

1. On a 3-point scale, their familiarity with the work of the program faculty, and
2. On a 6-point scale, their view of the scholarly quality of program faculty (a seventh category, "Do not know well enough to evaluate," was also included).

For years, the use of a reputational survey to assess the scholarly quality of program faculty and the effectiveness of a doctoral program has attracted criticism. Critics cite program size as a factor that correlates with quality; the "halo effect," which raises the perceived quality of all programs in an institution that is considered to have a good reputation; the national visibility of a department or institution; and the "star effect," in which a few well-known faculty members can raise the ratings of a program. There are nonreputational measures by which individuals can assess programs, such as educational or research facilities and the quality of graduate-level teaching and advising, but these are often not widely known outside the doctoral program, and raters would have limited information on which to make a judgment unless they were closely associated with it. In fact, the strong correlation between the reputational measure of scholarly quality of the program faculty ("Q") and the effectiveness of the doctoral program in training scholars ("E") present in past studies suggests that raters have little knowledge of educational programs independent from faculty lists.

Rater Selection

For the 1995 Study, a large enough number of raters was selected to provide 200 ratings for each program in nonbiological fields and 300 ratings for each program in the biological sciences. For example, if there were 150 programs in a nonbiological field, then 600 raters would be needed to provide the 200 ratings, since each rater was asked to rate 50 programs. In the biological sciences the number of raters needed to rate 150 programs was 750, since 60 programs appeared on each questionnaire and 300 ratings was the desired goal.
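The rater-pool arithmetic above can be sketched in a few lines (a minimal illustration; the function name is ours, not the study's):

```python
# Each rater scores a fixed number of programs per questionnaire, so the
# raters needed to give every program a target number of ratings is:
#   raters = (programs * target_ratings) / programs_per_questionnaire
import math

def raters_needed(n_programs: int, target_ratings: int, per_questionnaire: int) -> int:
    """Raters required so every program receives `target_ratings` ratings."""
    total_ratings = n_programs * target_ratings
    return math.ceil(total_ratings / per_questionnaire)

print(raters_needed(150, 200, 50))  # nonbiological example from the text: 600
print(raters_needed(150, 300, 60))  # biological example from the text: 750
```

This reproduces the 600- and 750-rater figures given for the 1995 Study examples.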
The reason for this increase in raters and ratings stems from the realization by the last study committee that their taxonomy did not accurately describe fields in the biological sciences and, therefore, the field of some raters often did not match that of the programs they were asked to rate.

Raters in the 1995 Study were selected in an almost random manner with the following restrictions: at least one rater was selected from each program; the number of raters from a particular program was proportional to the size of the program; and if there were more than three, raters were selected on the basis of faculty rank, with the first chosen from among a pool of full professors, the second from among associate professors, and so on. The response rate for this sample was about 50 percent across the 41 fields in the study, and in many cases the more visible national programs received most of the responses, with about 100 ratings. Programs at regional universities received fewer ratings, and in some cases scores could not be averaged after trimming. It was also noted that, by using the question that asked for a rater's "familiarity" with the program faculty and by weighting the response to the question concerning program quality by familiarity, ratings increased for the higher-rated programs and decreased for the lower-rated programs. It appears that more reliable and usable ratings would result if rater familiarity were considered.

Program Information

The last two assessments provided raters with a limited amount of program information. Faculty names by rank were listed on the questionnaire, and for some fields the number of program graduates over a 5-year period was also included.
This information was provided to assist raters in associating researchers with their institutions, but based on a sample of raters who were asked to indicate the number of names they recognized, most raters recognized at most one or two faculty members in most programs. Thus it may be that only the most visible scholars and scientists determine the reputational rating, and faculty lists may have been of little assistance in providing information to help raters. Additional program information or cues might assist raters in assessing program quality.

Variability of Reputational Measures

Since the National Surveys of Graduate Faculty for past studies were sample surveys, there is a certain amount of variability in the results. If a different sample of raters had been selected, the ratings would, in general, have been different.¹ This possible variability was described for past studies by estimating the confidence intervals for the scores of each program and displaying the results graphically to show the overlaps. However, this analysis was generally ignored by users, and the rank order of the programs remained the focus of attention. An important remaining issue is the communication of the uncertainty or variability of the ratings to users and the presentation of data that reflect this variability. Doing so can help to dispel a spurious impression of accuracy in ratings.

IS SCHOLARLY REPUTATION WORTH MEASURING?

While the 1995 Study has been criticized for many of the measures it reported, the major objection was its ranking of programs on the basis of the scholarly reputation of program faculty.
In particular, critics argued that few scholars know enough about more than a handful of programs in their discipline, that programs change more rapidly than the reputations that follow them, that response bias presents a false sense of program ratings, that reputation is dependent on program size, and that weak programs at well-known institutions benefit from a "halo effect." On the other hand, reputations definitely exist, for individual programs as well as for universities. Reputational standing is real in its consequences and has a strong correlation with other indicators of quality. Perceptions of program quality held by knowledgeable outsiders are important to deans, department chairs, and other administrators in designing and promoting their programs; to governing boards in allocating resources across programs; and to prospective students in choosing among programs. More importantly, reputational measures provide a benchmark against which other quantitative measures can be calibrated.

The Panel on Reputational Measures and Data Presentation took the criticisms of the reputational measure as a challenge, recognizing that the techniques used in earlier studies to generate reputational ratings were developed in an era when there were fewer doctoral programs, program faculty were less specialized, and the mission of most doctoral programs was the training of students for academic positions. Although many doctorate holders were taking nonacademic jobs at the time of the 1995 Study, the desire to maintain continuity with earlier studies dictated a continuation of the earlier methodology. These changes in the doctoral education environment made the task of developing a meaningful reputational measure more difficult, but at the same time the technological developments of the past decade make possible the use of online questionnaires to enhance and expand the scope of a survey. Modern database analysis methods also provide users with techniques to analyze the results of reputational surveys, as well as the quantitative measures from the study, to address their program, institutional, and research needs.

¹Cole and Cole (1973).
ADDRESSING ISSUES RELATED TO REPUTATIONAL MEASURES

The issues to be addressed fell into two major categories: (1) the development of procedures that would improve the quality of a reputational survey, and (2) the presentation of data from the reputational survey in a way that would minimize spurious inferences of precision in program ratings by users.

Efforts to improve the quality of reputational surveys focused on having a more informed rater pool, by either providing raters with additional information about the programs they were rating or matching the characteristics of raters with those of the programs. Matching raters to programs appears to be a good idea, but it introduces many complications, since the variety of missions and subfields present in any one of the fields in the taxonomy would rapidly create a multidimensional stratification of the rater pool and introduce unknown biases. Developing a large rater pool with few constraints would provide ratings that could be analyzed on the basis of program and rater characteristics. This would enable a better understanding of the process that generates reputational ratings. It would also provide a sufficient number of ratings so that institutions could evaluate the study findings based on a sample of ratings they judge to be meaningful. For example, a program could analyze only those program ratings from raters at peer institutions. This would also allow institutions to compare their programs with particular subfield specializations against other similarly specialized programs to gain a more accurate assessment. This could be done through the use of an online data-extraction program, where there is a quantitative database for each program, and certain data, such as the list of program faculty, could be linked to the database to provide information on faculty productivity and scholarship.
Beyond the issue of survey methodology is the issue of data presentation for all the measures, reputational and quantitative, from the study. For the 1995 Study the data were collated into a large publication consisting primarily of statistical results; tables for each field displayed data for various measures. This will no longer be possible, considering the increase in the number of measures, programs, and fields. For the 1995 Study a CD-ROM was also produced that contained the raw data from different data sources, which was intended to serve as a research tool for specialized analyses. While this basic data set will be available for the next study in electronic form, there will also be a public-use file for general users to access, retrieve, and analyze any program included in the study. The printed study would provide examples of analyses that could be conducted using the data.

MODELS OF REPUTATION

Another criticism of the reputational measure of scholarly quality is that it ages between studies and, since the study is conducted only every 10 years or so, users must rely on an obsolete measure of reputation during the interim period. In fact, reputational ratings change very slowly over time, but users might find it helpful to be able to approximate the effects of program changes on their reputational status. One approach would be to construct a statistical model of reputation, dependent on quantitative variables. Using that model, it would then be possible to predict how the range of ratings would change when a quantitative variable changed, assuming the other variables remained constant. The parameters of such a model would measure the statistical effect of both the intrinsic and standardized quantitative variables on the mean of the reputational variable for all programs in a field.
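A model of this kind might be sketched as an ordinary least-squares regression of mean rating on program characteristics. The variable names and toy data below are our own illustration, not the Appendix G model:

```python
# Sketch only: predict mean reputational rating from quantitative program
# characteristics by least squares, then inspect residuals; a program whose
# rating sits well above its prediction is a "halo effect" candidate.
import numpy as np

# Hypothetical data: columns = sqrt(program size), fraction of faculty funded.
X = np.array([
    [3.0, 0.40],
    [5.0, 0.55],
    [7.0, 0.70],
    [9.0, 0.80],
])
y = np.array([2.1, 3.0, 3.9, 4.6])  # mean reputational ratings (toy values)

# Add an intercept column and fit by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = A @ coef
residuals = y - predicted  # positive residual: rated better than the model predicts
print(residuals.round(3))
```

With an intercept in the fit, the residuals sum to zero by construction, so the sign of each residual directly flags over- and underpredicted programs.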
This would permit a program to estimate the effect on reputation of, for example, shortening time to degree or increasing the percentage of faculty with research funding. Examination of outliers in this estimation would permit the identification of those programs for which such a model "underpredicts" or "overpredicts" reputation. Programs experiencing a "halo effect" would have a better reputation than that predicted by the quantitative variables in the model alone. A technical description of such a model, and examples of it using data from the 1995 Study, are given in Appendix G. Such a model could be used to estimate ratings during the period between studies, if programs updated their quantitative information regularly on a study Website.

However, there is a cautionary note for this type of analysis. It assumes that the relationships (the parameters) of the model are invariant over time; only the values of the program characteristics change. If there is sufficient change in program characteristics for a field during the period between assessments, the assumption will not be valid. At this time it is not possible to judge the effects of time on the model or the soundness of this analysis, but when data are collected for the next assessment it will be possible to compare the model parameters in Appendix G with those estimated using new data on the same characteristics. The current analysis is also limited by the number of characteristics for which data were collected for the 1995 Study; since the next assessment will collect data on more characteristics, the model might be improved with an expanded data set and further refinement through subsequent assessments.

FINDINGS AND RECOMMENDATIONS

Why Measure Scholarly Reputation at All?

The large amount of data collected during previous assessments of research-doctorate programs has been widely used and, in particular, scholarly reputation is a significant component of the evaluation of faculty and programs, with consequences for student choices, institutional investments, and resource acquisition. Reputation is one part of the "reality" of higher education that affects a tremendous number of decisions: where graduate students choose to study, where faculty choose to locate, and where resources may flow. It also has a strong correlation with honorific recognition of faculty.
Critics have given reasons for discounting the reputational rating, including many stated earlier, but it is the most widely quoted and used statistic from the earlier studies, and with better sampling methods and more accurate ways to present survey results it can become a more accurate and useful measure of the quality of research-doctorate programs. Institutions use the reputational measure to benchmark their programs against peer programs. If the measure were eliminated, institutions would no longer be able to map changes in their programs in this admittedly ill-defined, but important, respect. The reputational measure also provides a metric against which program resources and characteristics can be compared, as similar quantitative measures for similar programs are compared across a large list of institutions. While students were not considered to be potential users of past studies, they have, in fact, used the reputational ratings in conjunction with the other measures in the reports to select programs for graduate study. Future studies should encourage this use by students and provide both reputational and quantitative measures to assist them in their decisions.

The care taken by the NRC in conducting studies is another factor to consider with regard to the retention of the reputational measure. NRC studies are subjected to a rigorous review process, and the study committee would be composed primarily of academic faculty, university administrators, and others whose work involves the judgment of doctoral program quality. This may be the only reputational study of program quality that limits raters of programs to members of the discipline being rated. The proposed study will go even further to ensure that the ratings are made by people who know the programs they rate. Further, unlike studies conducted by the popular press, NRC ratings are not based on weighted averages of factors.
The reputational measure is an evaluation of the scholarly reputation of program faculty alone, and quantitative measures are presented unweighted. Thus users can apply the data from the study to reflect their own preferences, analyze the position of their own programs, and conduct their own comparisons. This cannot be accomplished with weighted measures.

Recommendation 6.1: The next NRC survey should include measures of scholarly reputation of programs based on the ratings by peer researchers in relevant fields of study.

Applying New Methods for Data Presentation

The presentation of average ratings in previous surveys has led to an emphasis on a single ordering of programs based on these average ratings and has given a spurious sense of precision to program rankings. Using a different set of raters would probably lead to a different set of average scores and a different rank ordering of programs. This is demonstrated by the confidence interval analyses that appeared in the last two NRC study reports. However, the variance in the ratings and rankings implied by the confidence interval analysis did not translate into the way the ratings (calculated to two decimal places) were used. To show the variance in a more direct way, modern statistical methods of data display, based on resampling, can be used to show that there is actually a range of plausible ratings and, consequently, a range of plausible rankings for programs. These methods show that it is not unusual for these ranges to overlap, thereby dispelling the notion that a program is ranked precisely number 3, for example; rather, it could have been ranked anywhere from first to fifth.

The question then arises: What is the best way to calculate statistically the range of uncertainty for a program? This presentation would go beyond presenting the mean and standard error.
The panel investigated two statistical methods, Random Halves and Bootstrap, for displaying the variability of results from a sample survey. These techniques are discussed technically in Appendix G.

The Random Halves method is a variation of the "Jackknife Method," in which only half of the ratings are used for each draw and there is no replacement. For the next draw, a different half of the whole sample is taken and a mean rating is calculated for that half. A mean rating is thus produced for each program after each draw, and a range of ratings results after a large number of samples. The interquartile range of the ratings would then be presented as the program rating.

The Bootstrap method would be applied by taking a random draw from the pool of raters equal to the number of responses to the survey, then computing the mean rating and ranking for each program. This would be done "with replacement"; that is, a rater and the corresponding rating could be selected more than once. If this process were continued for a large number of draws, a range of ratings would be generated, and a segment of that range for each program, such as the interquartile range, would be the range of possible ratings.

Both methods produce similar results if the number of samples taken is sufficiently large (greater than 50), since the variance of the average ratings for the two methods is nearly the same. It might be argued that neither method produces a true rating or ranking of a program by peers in the field, but unless the survey asked every person in the field to assess every program in the field and the response rate were 100 percent, the reputational rating would be subject to error. Presenting that error in a clear way would be helpful to users of the assessment.

An illustration of data presentation in which the rankings are de-emphasized can be found in Chart 6-1A. The Random Halves method was applied using reputational survey data from the 1995 Study for programs in English Language and Literature. The data were resampled 100 times, and the programs were ordered alphabetically. Chart 6-1B is an example of the Bootstrap method applied to the same programs. Charts 6-2A and 6-2B present the same calculations for programs in mathematics.
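The two resampling schemes can be sketched as follows for a single program's ratings (a toy implementation under our own naming; the study's actual computations are described in Appendix G):

```python
# Each scheme turns one program's ratings into a distribution of mean
# ratings; the interquartile range of that distribution is then reported
# instead of a single point estimate.
import random
import statistics

def random_halves(ratings, draws=100, rng=None):
    """Mean rating of a different random half of the sample, per draw (no replacement)."""
    rng = rng or random.Random(0)
    half = len(ratings) // 2
    return [statistics.mean(rng.sample(ratings, half)) for _ in range(draws)]

def bootstrap(ratings, draws=100, rng=None):
    """Mean rating of a full-size resample drawn with replacement, per draw."""
    rng = rng or random.Random(0)
    n = len(ratings)
    return [statistics.mean(rng.choices(ratings, k=n)) for _ in range(draws)]

def interquartile_range(means):
    """First and third quartiles of the resampled means, trimming outliers."""
    q = statistics.quantiles(means, n=4)
    return q[0], q[2]

ratings = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3]  # hypothetical peer ratings of one program
lo, hi = interquartile_range(random_halves(ratings))
print(f"plausible rating range: {lo:.2f}-{hi:.2f}")
```

Run over every program in a field, these ranges, rather than point estimates, are what the charts at the end of the chapter display.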
Tables 6-1 and 6-2, showing applications of the Random Halves and Bootstrap methods, can be found at the end of this chapter, following the charts.

The Committee favors the use of the Random Halves method over the Bootstrap method, since it corresponds to surveying half the individuals in a rater pool and may be more intuitive to the users of the data. However, either would be suitable. Both Random Halves, as a variation of the Jackknife Method, and Bootstrap are well known in the statistics literature. Regardless of which technique is used, the interquartile range is then calculated in order to eliminate outliers. The results of either analysis could be presented in tabular or graphic form for programs listed alphabetically. These charts and tables are shown at the end of the chapter.

The use of either of these methods has the advantage of displaying variability in a manner similar to the confidence interval computations in past reports, without the technical assumption of a normal distribution of the data that underlies the construction of a confidence interval. These methods provide ranges, rather than a single number, and differ from the presentation of survey results in the 1982 and 1995 Studies. The 1982 and 1995 Studies presented program rating as just one of the program characteristics in order to de-emphasize its importance. Tables in the 1982 Study presented the data in alphabetical order by institution, and in the 1995 Study programs were ordered by faculty quality ratings. However, in both cases ratings were quickly converted into rankings by both the press and academic administrators, and programs were compared on that basis. If used properly, there is value in the use of rankings over ratings, since raters apply subjective and differing distributions of programs across the scale, and this effect can only be eliminated by renormalization (or standardization).
Rankings have the advantage of all nonparametric statistical measures: they are independent of variable and shifting rater scales. Thus the Committee concluded that if methods such as Random Halves or Bootstrap were used to address the issue of spurious accuracy, some of the defects attributed to the misuse of rankings would be alleviated. The committee that will actually conduct the next assessment will have the option of presenting the data in alphabetical order, in rank order of a measure such as the average faculty quality rating, or by the ranking range obtained from either the Bootstrap or Random Halves method.

Recommendation 6.2: Resampling methods should be applied to ratings to give ranges of rankings for each program that reflect the variability of ratings by peer raters. The panel investigated two related methods, one based on Bootstrap resampling and another closely related method based on Random Halves, and found that either method would be appropriate.

The Use and Collection of Auxiliary Data

Previous reputational surveys have not helped our understanding of the causes and correlates of scholarly reputation. Raters were selected randomly and were asked to provide only a limited amount of personal data. For the 1982 Study, a simple analysis showed that raters rated programs higher if they had received their doctorate from that institution. Other information that could influence raters includes the number of national conferences they have attended in the last few years or their use of the Internet. These data might help to explain general questions of rater bias and the "halo effect." They may also be useful to programs and to university administrators in attempting to understand ratings and improve their programs. New technologies, such as Web-based surveys and matrix sampling, allow us to add significant information on programs and on peer raters to allow a better understanding of the causes and correlates of scholarly reputation.
For example, statistical analyses could be conducted to relate rater characteristics to ratings. Beyond that, matrix sampling could be used to explore how ratings vary when raters are given information beyond just lists of faculty names.²

Recommendation 6.3: The next study should have sufficient resources to collect and analyze auxiliary information from peer raters and the programs being rated to give meaning and context to the rating ranges that are obtained for the programs. Obtaining the resources to collect such data and to carry out such analyses should be a high priority.

Survey Questions and the Previous Survey

In the 1982 and 1995 assessments of research-doctorate programs, three qualitative questions were asked of peer reviewers. These addressed the quality of the program faculty (Q), the effectiveness of the graduate program (E), and the change in program quality in the past 5-year period (C). Only the question regarding the scholarly quality of the program faculty seemed to produce any significant results. The effectiveness question correlated highly with the quality question but did not appear to provide any other useful information. The results for the change question were also not significant, and the study committee in 1995 relied on a comparison of data and quality scores from the 1982 and 1995 Studies to analyze change in quality, in addition to change in program size and time to degree.

Recommendation 6.4: The proposed survey should not use the two reputational questions on educational effectiveness (E) and change in program quality over the past 5 years (C). Information about changes in program quality can be found from comparisons with the previous survey, analyzed in the manner we propose for the next survey.

The Selection of Peer Raters for Programs

Peer raters in a field were selected almost randomly, as described earlier, and only from the pool of faculty listed by the programs. Many Ph.D.s teach outside of research universities. While in some fields a large number of new Ph.D.s go into academic careers, this is far from universal.
In many fields, such as those in engineering, a large number of doctorates go into industrial or governmental positions. How well the programs serve the needs of employers in these other areas has been a long-standing question. The 1995 Study investigated the possibility of surveying supervisors of research teams or human resource officers to determine their opinions of academic programs, but the conclusion was that many companies hire regionally and there did not appear to be a way to integrate the information into a useful measure. The issue of expanding the rater pool has not been resolved, and various constituencies have asked that peer raters for programs be drawn from a wider pool than the academic programs being rated. This could be assisted, in part, if the next committee included members who could represent industrial and governmental research, as well as academic institutions that are not research universities. The pool of raters could be expanded to include: industrial researchers in engineering; government researchers in fields such as physics, biomedical sciences, and mathematics; and faculty at 4-year colleges. It might be possible to identify a pool of raters from these sectors through nominations by professional organizations whose membership extends beyond academics.

Recommendation 6.5: Expanding the pool of peer raters to include scholars and researchers employed outside of research universities should be investigated, with the understanding that it may be useful and feasible only for particular fields.

²Doing this would confuse "reputation" with more detailed knowledge of faculty productivity and other factors, but learning whether such information changes reputational ratings would be important to understanding what reputational measures actually tell us. This issue is discussed in greater detail below.
Consideration of Program Mission

Doctoral programs and institutions have varying missions, and they serve different student populations and employment sectors. While large institutions have the capacity for programs that span many subfields of a discipline, smaller institutions may be limited to developing excellence in only one or two subfields. Comparison of broad programs to such "niche" programs could be biased by the visibility of the broader programs. Similarly, programs may have as their mission the training of researchers for regional industries and would, therefore, not have the same national prestige as programs whose graduates go into academic positions. One main criticism of past assessments was that these factors were not taken into account.

Taking subfield differences and program mission into consideration in the selection of raters for the reputational survey appears to be an obvious way to obtain more meaningful results. However, fragmenting rater pools into many segments based, for example, on subfields would complicate the survey process by expanding the current 56 fields in the taxonomy to several hundred, and to many more if factors such as the employment sectors of the graduates were considered. A more manageable way to account for program mission and other factors would be to have a sufficiently diverse rater pool and to collect data on rater and program characteristics, so that individual programs could make comparisons with like programs on the basis of ratings from raters who have knowledge of those programs.

REPUTATION AND DATA PRESENTATION

Recommendation 6.6: Ratings should not be conditioned on the mission of programs, but data to conduct such analyses should be made available to those interested in using them.

Providing Peer Raters with Additional Information

It is clear from the familiarity and visibility measures used in past studies that raters generally have little or no knowledge on which to base their ratings of many programs. The limited amount of program information provided to raters in the last study may not have been of assistance, since many of the raters in the sample were unable to identify any faculty member in programs that were rated in the lower half of the rankings. It is therefore unclear on what basis many ratings were made. It is possible that information provided to raters could influence their ratings, especially for lower-rated programs, but this phenomenon is not well understood. Since the reputational survey for faculty will probably be Web-based, there is an opportunity to provide a large amount of quantitative data, such as the honors of individual faculty members or their publication records, directly in the questionnaire as links to a database. Exploring this approach for a sample of the programs and raters might provide insight into the use and value of reputational surveys.

Recommendation 6.7: Serious consideration should be given to the cues that are provided to peer raters. The possibility of embedding experiments using different sets of cues given to random subsets of peer raters should be seriously considered in order to increase understanding of the effects of cues.

THE EFFECTS OF THE FAMILIARITY OF PEER RATERS WITH PROGRAMS ON THEIR RATINGS

It is well known that raters who are more familiar with a program will rate it higher than raters who are less familiar.
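The embedded experiments contemplated in Recommendation 6.7 amount to randomly assigning cue conditions to subsets of raters and comparing the resulting ratings. A minimal sketch of the assignment step follows; the condition names, rater identifiers, and helper function are hypothetical illustrations, not part of the study design:

```python
import random

def assign_cue_conditions(raters, conditions, seed=0):
    """Randomly assign each rater to one cue condition (hypothetical helper)."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {rater: rng.choice(conditions) for rater in raters}

# Hypothetical cue sets: no extra information, faculty honors shown, or
# publication data shown alongside the questionnaire.
conditions = ["no_cues", "honors", "publications"]
raters = ["rater_%d" % i for i in range(300)]
assignment = assign_cue_conditions(raters, conditions)
```

Comparing mean ratings across the three groups after the survey would then estimate how each set of cues shifts reputational ratings.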
This fact was demonstrated by weighting the ratings with responses to the familiarity question for the 1995 Study; however, those results were not used in compiling the final ratings. In fact, the only familiarity measure used for that study was a visibility measure for each program: the percentage of raters who gave "Don't know well enough to evaluate" or "Little or no familiarity" as one or more of their responses to the five questions. Comparing this measure with the faculty quality measure makes clear that the more highly ranked programs were also more visible. While accounting for familiarity in compiling program ratings may not change the ranking of programs, it lends validity to the ratings by establishing some basis for them.

Recommendation 6.8: Raters should be asked how familiar they are with the programs they rate, and this information should be used both to measure the visibility of the programs and, possibly, to weight differentially the ratings of raters who are more familiar with the program.
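The familiarity weighting and visibility measure described above can be sketched as follows. The numeric coding (familiarity 0 for "don't know well enough to evaluate" up to 3 for very familiar) and the function names are assumptions for illustration, not the study's actual coding:

```python
def weighted_rating(responses):
    """Familiarity-weighted mean quality rating.

    responses: list of (rating, familiarity) pairs; a familiarity of 0
    ("don't know well enough to evaluate") contributes nothing.
    """
    num = sum(r * f for r, f in responses if f > 0)
    den = sum(f for _, f in responses if f > 0)
    return num / den if den else None  # None if no rater knew the program

def visibility(responses):
    """Share of raters reporting little or no familiarity (coded <= 1)."""
    low = sum(1 for _, f in responses if f <= 1)
    return low / len(responses)

# Four hypothetical responses: raters more familiar with the program
# pull the weighted mean toward their (higher) ratings.
responses = [(5, 3), (4, 2), (3, 1), (0, 0)]
print(weighted_rating(responses))  # (5*3 + 4*2 + 3*1) / 6 = 4.333...
print(visibility(responses))       # 2 of 4 raters barely know it: 0.5
```

A high visibility value flags programs whose ratings rest on few knowledgeable raters, which is exactly the validity concern Recommendation 6.8 addresses.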

[Chart 6-1A: Interquartile Range of Program Rankings in English Language and Literature (Random Halves). Programs are listed alphabetically on the vertical axis; rankings run from 0 to 120 on the horizontal axis. Data from 1995 Study.]

[Chart 6-1B: Interquartile Range of Program Rankings in English Language and Literature (Bootstrap). Programs are listed alphabetically on the vertical axis; rankings run from 0 to 120 on the horizontal axis. Data from 1995 Study.]
[Chart 6-2A: Interquartile Range of Program Rankings in Mathematics (Random Halves). Programs are listed alphabetically on the vertical axis; rankings run from 0 to 120 on the horizontal axis. Data from 1995 Study.]
[Chart 6-2B: Interquartile Range of Program Rankings in Mathematics (Bootstrap). Programs are listed alphabetically on the vertical axis; rankings run from 0 to 120 on the horizontal axis. Data from 1995 Study.]
TABLE 6-1A Interquartile Range of Program Rankings* in English Language and Literature - Random Halves
(Each entry gives the institution followed by the 1st and 3rd quartiles of its program ranking; the original's two table columns are shown side by side, separated by "|".)

Arizona State University 75 82 | U of North Carolina-Chapel Hill 23 29
Auburn University 85 93 | U of North Carolina-Greensboro 89 98
Ball State University 110 117 | Univ of Arkansas-Fayetteville 110 117
Baylor University 118 122 | Univ of California-Berkeley 1 3
Boston College 59 68 | Univ of California-Davis 45 50
Boston University 36 43 | Univ of California-Irvine 14 16
Bowling Green State University 98 107 | Univ of California-Los Angeles 12 13
Brandeis University 43 51 | Univ of California-Riverside 30 38
Brown University 13 15 | Univ of California-San Diego 36 43
Carnegie Mellon University 47 60 | Univ of California-Santa Barbara 31 38
Case Western Reserve Univ 87 94 | Univ of California-Santa Cruz 41 51
Catholic University of America 118 122 | Univ of Southern Mississippi 83 92
Claremont Graduate School 76 89 | Univ of Southwestern Louisiana 103 110
Columbia University 7 9 | University of Alabama 76 83
Cornell University 6 8 | University of Arizona 57 63
CUNY - Grad Sch & Univ Center 18 19 | University of Chicago 8 10
Drew University 122 125 | University of Cincinnati 103 111
Duke University 5 7 | University of Colorado 49 58
Emory University 27 33 | University of Connecticut 78 84
Florida State University 82 91 | University of Denver 102 112
Fordham University 102 112 | University of Florida 37 42
George Washington University 76 86 | University of Georgia 53 60
Harvard University 1 2 | University of Houston 86 93
Howard University 107 114 | University of Illinois at Chicago 60 69
Idaho State University 124 126 | University of Iowa 41 49
Illinois State University 100 109 | University of Kansas 63 70
Indiana Univ of Pennsylvania 122 125 | University of Kentucky 41 49
Indiana University 18 20 | University of Maryland College Park 36 41
Johns Hopkins University 10 11 | University of Miami 68 73
Kent State University 87 95 | University of Michigan 15 17
Lehigh University 108 115 | University of Minnesota 32 38
Louisiana State U & A&M College 55 64 | University of Mississippi 94 102
Loyola University of Chicago 85 94 | University of Missouri-Columbia 57 64
Miami University 72 78 | University of Nebraska-Lincoln 69 75
Michigan State University 54 62 | University of New Hampshire 70 77
Middle Tennessee State University 126 127 | University of North Dakota 117 121
New York University 18 20 | University of North Texas 86 94
Northern Illinois University 94 103 | University of Notre Dame 56 65
Northwestern University 26 33 | University of Oklahoma 81 87
Ohio State University 31 38 | University of Oregon 64 69
Ohio University 99 110 | University of Pennsylvania 7 10
Oklahoma State University 119 122 | University of Pittsburgh 25 31
Pennsylvania State University 38 45 | University of Rhode Island 98 107
Princeton University 13 14 | University of Rochester 44 50
Purdue University 53 61 | University of South Carolina 48 59
Rice University 49 60 | University of South Florida 110 117
Rutgers State Univ-New Brunswick 15 17 | University of Southern California 24 29
Saint Louis University 70 76 | University of Tennessee-Knoxville 59 70
Southern Illinois University 104 113 | University of Texas at Arlington 99 106
St. John's University 118 122 | University of Texas at Austin 21 24
Stanford University 5 7 | University of Texas at Dallas 98 105
State U of New York-Stony Brook 44 52 | University of Toledo 101 110
State Univ of New York-Binghamton 63 71 | University of Tulsa 90 98
State Univ of New York-Buffalo 25 29 | University of Virginia 4 6
Syracuse University 74 79 | University of Washington 22 26
Temple University 56 64 | University of Wisconsin-Madison 21 24
Texas A&M University 53 62 | University of Wisconsin-Milwaukee 27 36
Texas Christian University 84 97 | Vanderbilt University 28 35
Texas Tech University 101 110 | Washington State University 84 93
Texas Woman's University 122 125 | Washington University 47 53
Tufts University 66 74 | Wayne State University 77 84
Tulane University 81 89 | West Virginia University 107 115
U of Illinois at Urbana-Champaign 25 31 | Yale University 1 3
U of Massachusetts at Amherst 37 43

*Data from 1995 Study.
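The Random Halves and Bootstrap variants reported in Charts and Tables 6-1 and 6-2 both derive interquartile ranges of rankings by repeatedly re-ranking programs over resampled sets of raters. A minimal sketch under assumed data structures follows; the function names, the rating scale, and the number of draws are illustrative, and the study's actual computation may differ in detail:

```python
import random
import statistics

def rank_programs(ratings, sample):
    """Rank programs (1 = best) by mean rating over the sampled raters."""
    programs = list(next(iter(ratings.values())))
    means = {p: statistics.mean(ratings[r][p] for r in sample) for p in programs}
    ordered = sorted(programs, key=lambda p: means[p], reverse=True)
    return {p: i + 1 for i, p in enumerate(ordered)}

def rank_quartiles(ratings, method="bootstrap", draws=200, seed=0):
    """First and third quartiles of each program's simulated rankings.

    ratings: {rater_id: {program: rating}} (hypothetical structure).
    "bootstrap" resamples the full rater set with replacement; anything
    else is treated as "random halves" (half the raters, no replacement).
    """
    rng = random.Random(seed)
    raters = list(ratings)
    ranks = {p: [] for p in next(iter(ratings.values()))}
    for _ in range(draws):
        if method == "bootstrap":
            sample = [rng.choice(raters) for _ in raters]   # with replacement
        else:
            sample = rng.sample(raters, len(raters) // 2)   # a random half
        for p, rk in rank_programs(ratings, sample).items():
            ranks[p].append(rk)
    out = {}
    for p, values in ranks.items():
        q = statistics.quantiles(values, n=4)  # [Q1, median, Q3]
        out[p] = (q[0], q[2])
    return out
```

The width of the resulting (Q1, Q3) interval is what the horizontal bars in Charts 6-1 and 6-2 display: programs whose rank is stable across resamples get narrow bars, while programs in the crowded middle of the distribution get wide ones.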

TABLE 6-1B Interquartile Range of Program Rankings* in English Language and Literature - Bootstrap
(Each entry gives the institution followed by the 1st and 3rd quartiles of its program ranking; the original's two table columns are shown side by side, separated by "|".)

Arizona State University 75 82 | U of North Carolina-Chapel Hill 23 29
Auburn University 85 94 | U of North Carolina-Greensboro 88 98
Ball State University 110 117 | Univ of Arkansas-Fayetteville 111 117
Baylor University 119 122 | Univ of California-Berkeley 1 3
Boston College 61 68 | Univ of California-Davis 44 51
Boston University 36 45 | Univ of California-Irvine 14 16
Bowling Green State University 97 106 | Univ of California-Los Angeles 12 13
Brandeis University 43 51 | Univ of California-Riverside 30 37
Brown University 13 15 | Univ of California-San Diego 37 45
Carnegie Mellon University 46 57 | Univ of California-Santa Barbara 30 36
Case Western Reserve Univ 88 96 | Univ of California-Santa Cruz 41 49
Catholic University of America 118 122 | Univ of Southern Mississippi 81 93
Claremont Graduate School 78 89 | Univ of Southwestern Louisiana 103 112
Columbia University 6 9 | University of Alabama 78 85
Cornell University 6 8 | University of Arizona 56 63
CUNY - Grad Sch & Univ Center 17 19 | University of Chicago 9 10
Drew University 122 125 | University of Cincinnati 104 113
Duke University 5 7 | University of Colorado 50 56
Emory University 29 34 | University of Connecticut 79 85
Florida State University 83 91 | University of Denver 103 112
Fordham University 105 113 | University of Florida 36 43
George Washington University 77 84 | University of Georgia 52 60
Harvard University 1 2 | University of Houston 85 94
Howard University 103 114 | University of Illinois at Chicago 61 69
Idaho State University 124 126 | University of Iowa 41 49
Illinois State University 102 109 | University of Kansas 64 71
Indiana Univ of Pennsylvania 123 125 | University of Kentucky 44 50
Indiana University 18 20 | University of Maryland College Park 35 40
Johns Hopkins University 10 11 | University of Miami 68 75
Kent State University 88 96 | University of Michigan 15 17
Lehigh University 109 116 | University of Minnesota 32 38
Louisiana State U & A&M College 53 62 | University of Mississippi 93 101
Loyola University of Chicago 85 96 | University of Missouri-Columbia 55 64
Miami University 72 79 | University of Nebraska-Lincoln 69 75
Michigan State University 55 64 | University of New Hampshire 69 77
Middle Tennessee State University 126 127 | University of North Dakota 118 121
New York University 18 20 | University of North Texas 86 96
Northern Illinois University 94 101 | University of Notre Dame 56 65
Northwestern University 27 33 | University of Oklahoma 82 89
Ohio State University 30 39 | University of Oregon 64 71
Ohio University 99 108 | University of Pennsylvania 7 10
Oklahoma State University 118 122 | University of Pittsburgh 25 31
Pennsylvania State University 39 45 | University of Rhode Island 97 107
Princeton University 13 15 | University of Rochester 44 52
Purdue University 54 63 | University of South Carolina 50 59
Rice University 52 62 | University of South Florida 110 117
Rutgers State Univ-New Brunswick 15 18 | University of Southern California 23 28
Saint Louis University 68 76 | University of Tennessee-Knoxville 60 68
Southern Illinois University 104 112 | University of Texas at Arlington 98 107
St. John's University 116 122 | University of Texas at Austin 21 25
Stanford University 5 7 | University of Texas at Dallas 97 107
State U of New York-Stony Brook 45 51 | University of Toledo 101 110
State Univ of New York-Binghamton 63 72 | University of Tulsa 90 97
State Univ of New York-Buffalo 24 29 | University of Virginia 4 5
Syracuse University 74 79 | University of Washington 23 27
Temple University 56 64 | University of Wisconsin-Madison 21 24
Texas A&M University 52 62 | University of Wisconsin-Milwaukee 28 34
Texas Christian University 82 91 | Vanderbilt University 28 36
Texas Tech University 101 110 | Washington State University 88 95
Texas Woman's University 123 125 | Washington University 45 53
Tufts University 66 74 | Wayne State University 78 83
Tulane University 80 88 | West Virginia University 107 115
U of Illinois at Urbana-Champaign 25 32 | Yale University 1 3
U of Massachusetts at Amherst 35 41

*Data from 1995 Study.

TABLE 6-2A Interquartile Range of Program Rankings* in Mathematics - Random Halves
(Each entry gives the institution followed by the 1st and 3rd quartiles of its program ranking; the original's two table columns are shown side by side, separated by "|".)

Adelphi University 128 133 | Rice University-Applied Mathematics 35 39
Arizona State University 82 90 | Rice University-Computational & Applied Math 22 28
Auburn University 88 96
Boston University 48 53 | Rutgers State Univ-New Brunswick 18 21
Bowling Green State University 109 117 | Saint Louis University 117 127
Brandeis University 30 34 | Southern Illinois University 106 114
Brown University-Applied Mathematics 26 29 | Southern Methodist University 110 121
Brown University-Computational & Applied Math 14 18 | Stanford University 5 6
State U of New York-Stony Brook 17 21
California Institute Technology 10 11 | State Univ of New York-Albany 81 91
Carnegie Mellon University 37 41 | State Univ of New York-Binghamton 62 71
Case Western Reserve Univ 81 94 | State Univ of New York-Buffalo 63 72
Claremont Graduate School 72 88 | Stevens Inst of Technology 117 125
Clarkson University 110 123 | Syracuse University 72 82
Clemson University 86 95 | Temple University 70 76
Colorado School of Mines 128 133 | Texas A&M University 57 66
Colorado State University 89 101 | Texas Tech University 103 110
Columbia University 10 12 | Tulane University 73 80
Cornell University 14 17 | U of Illinois at Urbana-Champaign 18 22
CUNY - Grad Sch & Univ Center 28 31 | U of Maryland Baltimore County 115 123
Dartmouth College 50 60 | U of Massachusetts at Amherst 54 61
Drexel University 104 112 | U of North Carolina-Chapel Hill 42 44
Duke University 33 38 | Univ of California-Berkeley 1 2
Florida Institute of Technology 132 135 | Univ of California-Los Angeles 11 13
Florida State University 77 88 | Univ of California-Riverside 75 84
George Washington University 127 133 | Univ of California-San Diego 14 19
Georgia Institute of Technology 43 46 | Univ of California-Santa Barbara 48 54
Harvard University 2 4 | Univ of California-Santa Cruz 56 66
Howard University 113 121 | Univ of Southwestern Louisiana 132 134
Idaho State University 137 138 | University of Alabama 122 128
Illinois Institute of Technology 122 128 | University of Alabama-Huntsville 126 132
Illinois State University 139 139 | University of Arizona 52 58
Indiana University 33 37 | University of California-Davis 79 88
Iowa State University 73 81 | University of California-Irvine 56 63
Johns Hopkins University-Applied Math 28 33 | University of Chicago 5 6
Johns Hopkins University-Computational & Applied Math 47 62 | University of Cincinnati 102 108
University of Colorado 60 67
Kansas State University 83 93 | University of Connecticut 96 103
Kent State University 81 91 | University of Delaware 77 85
Lehigh University 92 103 | University of Florida 51 59
Louisiana State U & A&M College 66 72 | University of Georgia 55 62
Massachusetts Inst of Technology 2 4 | University of Hawaii at Manoa 91 101
Michigan State University 47 51 | University of Houston 63 71
New Mexico State University 110 116 | University of Illinois at Chicago 30 35
New York University 8 8 | University of Iowa 56 66
North Carolina State University 56 67 | University of Kentucky 65 74
Northeastern University 75 82 | University of Maryland College Park 17 20
Northern Illinois University 115 121 | University of Miami 95 107
Northwestern University 26 29 | University of Michigan 9 10
Ohio State University 29 33 | University of Minnesota 13 16
Ohio University 121 126 | University of Mississippi 135 136
Old Dominion University 124 132 | University of Missouri-Columbia 89 98
Oregon State University 89 98 | University of Missouri-Rolla 127 132
Pennsylvania State University 35 37 | University of Nebraska-Lincoln 84 95
Polytechnic University 94 104 | University of North Texas 100 108
Princeton University 1 3 | University of Notre Dame 45 49
Purdue University 22 25 | University of Oklahoma 98 106
Rensselaer Polytechnic Inst 48 54 | University of Oregon 49 55
University of Pennsylvania 22 25
University of Pittsburgh 57 65
University of Rhode Island 119 125
University of Rochester 53 61
University of South Carolina 72 80
University of South Florida 108 116
University of Southern California 42 44
University of Tennessee-Knoxville 73 82
University of Texas at Arlington 104 111
University of Texas at Austin 21 24
University of Texas at Dallas 136 138
University of Utah 34 39
University of Virginia 43 45
University of Washington-Applied Mathematics 25 28
University of Washington-Computational & Applied Math 38 41
University of Wisconsin-Madison 12 15
University of Wisconsin-Milwaukee 109 117
University of Wyoming 122 128
Vanderbilt University 81 92
Virginia Polytech Inst & State U 63 70
Washington State University 101 109
Washington University 36 39
Wayne State University 89 97
Wesleyan University 99 108
Western Michigan University 109 118
Yale University 7 7

*Data from 1995 Study.

TABLE 6-2B Interquartile Range of Program Rankings* in Mathematics - Bootstrap
(Each entry gives the institution followed by the 1st and 3rd quartiles of its program ranking; the original's two table columns are shown side by side, separated by "|".)

Adelphi University 127 133 | Rice University-Applied Mathematics 34 39
Arizona State University 82 91 | Rice University-Computational & Applied Math 23 29
Auburn University 88 97
Boston University 48 53 | Rutgers State Univ-New Brunswick 17 20
Bowling Green State University 107 118 | Saint Louis University 118 127
Brandeis University 29 35 | Southern Illinois University 106 115
Brown University-Applied Mathematics 26 29 | Southern Methodist University 113 123
Brown University-Computational & Applied Math 14 18 | Stanford University 5 6
State U of New York-Stony Brook 18 22
California Institute Technology 9 11 | State Univ of New York-Albany 81 91
Carnegie Mellon University 38 41 | State Univ of New York-Binghamton 62 74
Case Western Reserve Univ 80 93 | State Univ of New York-Buffalo 64 72
Claremont Graduate School 75 88 | Stevens Inst of Technology 116 126
Clarkson University 111 123 | Syracuse University 73 81
Clemson University 87 96 | Temple University 71 76
Colorado School of Mines 129 134 | Texas A&M University 57 65
Colorado State University 89 98 | Texas Tech University 102 111
Columbia University 10 12 | Tulane University 72 79
Cornell University 13 16 | U of Illinois at Urbana-Champaign 19 22
CUNY - Grad Sch & Univ Center 28 32 | U of Maryland Baltimore County 117 124
Dartmouth College 50 61 | U of Massachusetts at Amherst 54 60
Drexel University 107 112 | U of North Carolina-Chapel Hill 42 44
Duke University 33 37 | Univ of California-Berkeley 1 2
Florida Institute of Technology 133 135 | Univ of California-Los Angeles 11 13
Florida State University 80 89 | Univ of California-Riverside 74 83
George Washington University 127 133 | Univ of California-San Diego 15 18
Georgia Institute of Technology 44 46 | Univ of California-Santa Barbara 48 54
Harvard University 2 4 | Univ of California-Santa Cruz 56 66
Howard University 113 120 | Univ of Southwestern Louisiana 132 135
Idaho State University 137 138 | University of Alabama 122 128
Illinois Institute of Technology 120 129 | University of Alabama-Huntsville 128 132
Illinois State University 139 139 | University of Arizona 50 57
Indiana University 32 37 | University of California-Davis 80 88
Iowa State University 73 81 | University of California-Irvine 56 64
Johns Hopkins University-Applied Math 29 35 | University of Chicago 5 6
Johns Hopkins University-Computational & Applied Math 47 64 | University of Cincinnati 101 108
University of Colorado 60 66
Kansas State University 85 93 | University of Connecticut 97 102
Kent State University 81 91 | University of Delaware 77 84
Lehigh University 94 103 | University of Florida 53 60
Louisiana State U & A&M College 66 72 | University of Georgia 55 62
Massachusetts Inst of Technology 2 4 | University of Hawaii at Manoa 91 103
Michigan State University 47 51 | University of Houston 62 70
New Mexico State University 109 117 | University of Illinois at Chicago 30 35
New York University 8 8 | University of Iowa 57 65
North Carolina State University 56 67 | University of Kentucky 66 71
Northeastern University 75 83 | University of Maryland College Park 17 20
Northern Illinois University 114 121 | University of Miami 95 106
Northwestern University 27 29 | University of Michigan 9 10
Ohio State University 28 33 | University of Minnesota 14 17
Ohio University 120 125 | University of Mississippi 135 136
Old Dominion University 125 131 | University of Missouri-Columbia 90 101
Oregon State University 87 96 | University of Missouri-Rolla 127 131
Pennsylvania State University 35 38 | University of Nebraska-Lincoln 84 93
Polytechnic University 94 103 | University of North Texas 100 109
Princeton University 1 3 | University of Notre Dame 44 48
Purdue University 23 26 | University of Oklahoma 97 106
Rensselaer Polytechnic Inst 48 55 | University of Oregon 48 55
University of Pennsylvania 21 25
University of Pittsburgh 55 66
University of Rhode Island 120 126
University of Rochester 55 62
University of South Carolina 71 81
University of South Florida 108 115
University of Southern California 42 44
University of Tennessee-Knoxville 74 82
University of Texas at Arlington 103 112
University of Texas at Austin 21 24
University of Texas at Dallas 137 138
University of Utah 33 38
University of Virginia 43 45
University of Washington-Applied Mathematics 24 28
University of Washington-Computational & Applied Math 39 41
University of Wisconsin-Madison 12 15
University of Wisconsin-Milwaukee 109 116
University of Wyoming 123 128
Vanderbilt University 81 91
Virginia Polytech Inst & State U 62 69
Washington State University 101 108
Washington University 36 40
Wayne State University 90 97
Wesleyan University 100 109
Western Michigan University 110 118
Yale University 7 7

*Data from 1995 Study.
