2 Methodology
Pages 13-32

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 13...
... Pirsig, Zen and the Art of Motorcycle Maintenance

Both the planning committee and our own study committee have given careful consideration to the types of measures to be employed in the assessment of research-doctorate programs. The committees recognized that any of the measures that might be used is open to criticism and that no single measure could be expected to provide an entirely satisfactory index of the quality of graduate education.
From page 14...
... The committee was aided by the many suggestions received from university administrators and others within the academic community. Although the initial design called for an assessment based on approximately six measures, the committee concluded that it would be highly desirable to expand this effort.
From page 15...
... 07 Fraction of FY1975-79 program graduates who, at the time they completed requirements for the doctorate, reported that they had made definite commitments for postgraduation employment in Ph.D.-granting universities.

Reputational Survey Results4

08 Mean rating of the scholarly quality of program faculty.
From page 16...
... Other measures, such as those relating to university library size and support for research and training, describe some of the resources generally recognized as being important in maintaining a vibrant program in graduate education. Measures derived from surveys of faculty peers or from the publication records of faculty members, on the other hand, have traditionally been regarded as indices of the overall quality of graduate programs.
From page 17...
... Unfortunately, reliable information on the subsequent employment and career achievements of the graduates of individual programs is not available. In the absence of this directly relevant information, the committee has relied on four indirect measures derived from data compiled in the NRC's Survey of Earned Doctorates.6 Although each measure has serious limitations (described below)
From page 18...
... recipients were also used in determining the identity of program graduates. It is estimated that this matching process provided information on the graduate training and employment plans of more than 90 percent of the FY1975-79 graduates from the social and behavioral science programs.
From page 19...
... Measure 06 represents the fraction of FY1975-79 program graduates who reported at the time they had completed requirements for the doctorate that they had signed contracts or made firm commitments for postgraduation employment (including postdoctoral appointments as well as other positions in the academic or nonacademic sectors) and who provided the names of their prospective employers.
From page 20...
... The evaluators were selected from the faculty lists furnished by the study coordinators at the 228 universities covered in the assessment. These evaluators constituted approximately 13 percent of the total faculty population -- 14,898 faculty members -- in the social and behavioral science programs being evaluated (see Table 2.3).
From page 21...
... As shown in Table 2.3, 1,686 individuals -- 88 percent of the survey sample in the social and behavioral sciences -- had been recommended by study coordinators.10 Each evaluator was asked to consider a stratified random sample of no more than 50 research-doctorate programs in his or her discipline, with programs stratified by the number of faculty members associated with each program. Every program was included on 150 survey forms.
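The sampling scheme described above -- at most 50 programs per evaluator, stratified by program faculty size -- can be sketched as follows. The cap of 50 comes from the text; the number of strata, the equal-size stratum boundaries, and the equal allocation across strata are illustrative assumptions, not the committee's documented procedure.

```python
import random
from collections import defaultdict

def stratified_program_sample(programs, max_programs=50, n_strata=3, seed=0):
    """Draw one evaluator's stratified random sample of programs.

    `programs` is a list of (name, faculty_count) pairs.  Programs are
    ranked by faculty count and split into equal-size strata, so that
    small and large programs are both represented in the sample.
    Stratum count and allocation are assumptions for illustration.
    """
    rng = random.Random(seed)
    ranked = sorted(programs, key=lambda p: p[1])        # order by program size
    strata = defaultdict(list)
    for i, prog in enumerate(ranked):                    # equal-size strata
        strata[i * n_strata // len(ranked)].append(prog)
    per_stratum = max_programs // n_strata
    sample = []
    for members in strata.values():                      # sample within each stratum
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample
```

With, say, 120 hypothetical programs and three strata, each evaluator would see at most 48 programs, drawn evenly from the small, medium, and large thirds of the size distribution.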
From page 22...
...
                              Faculty      Survey    Respon-   Percent
                              Population   Sample    dents     Responding
Discipline
  Anthropology                1,181        210       125       60
  Economics                   2,163        279       185       66
  Geography                   640          150       106       71
  History                     2,820        306       166       54
  Political Science           1,880        249       152       61
  Psychology                  4,299        450       280       62
  Sociology                   1,915        276       181       66
Faculty Rank
  Professor                   7,629        1,000     628       63
  Associate Professor         4,014        611       383       63
  Assistant Professor         2,984        299       179       60
  Other                       271          10        5         50
Evaluator Selection
  Nominated by Institution    4,543        1,686     1,082     64
  Other                       10,355       234       113       48
Survey Form
  With Faculty Names          N/A*         1,728     1,072     62
  Without Names               N/A*
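The final column of Table 2.3 is simply the respondent count as a share of the survey sample, rounded to the nearest percent. A one-line check against the table's rows (the function name is mine, not the report's):

```python
def response_rate(respondents, sample):
    """Response rate as a whole percent: respondents returning the
    survey divided by the number of faculty members sampled."""
    return round(100 * respondents / sample)
```

For example, the Anthropology row gives `response_rate(125, 210)` = 60 and the History row gives `response_rate(166, 306)` = 54, matching the reported figures.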
From page 23...
... Indeed, this dissatisfaction was an important factor in the Conference Board's decision to undertake a multidimensional assessment, and some faculty members included in the sample made known to the committee their strong objections to the reputational survey. As can be seen in Table 2.3, there is some variation in the response rates in the seven social and behavioral science disciplines.
From page 24...
... The following alternatives were provided:

2  Better than five years ago
1  Little or no change in last five years
0  Poorer than five years ago
X  Don't know well enough to evaluate

Evaluators were asked to indicate their familiarity with the work of the program faculty according to the following scale:

2  Considerable familiarity
1  Some familiarity
0  Little or no familiarity

In the computation of mean ratings on measures 08, 09, and 10, the "don't know" responses were ignored. An average program rating based on fewer than 15 responses (excluding the "don't know" responses)
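The averaging rule described above -- drop the "don't know" responses, and report no mean at all when fewer than 15 usable ratings remain -- can be sketched directly (the function name and the `None` return convention are mine):

```python
def mean_program_rating(responses, min_responses=15):
    """Mean survey rating for one program.

    `responses` mixes numeric ratings with "X" for "don't know".
    Per the rule in the text, "X" entries are ignored, and if fewer
    than `min_responses` usable ratings remain, no mean is reported
    (returned here as None).
    """
    usable = [r for r in responses if r != "X"]   # drop "don't know"
    if len(usable) < min_responses:
        return None                               # too few to report
    return sum(usable) / len(usable)
```

So a program with ten numeric ratings and twenty "X" responses gets no reported mean, while one with fifteen numeric ratings does.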
From page 25...
... Data from another NRC survey suggest that the actual fraction employed outside academia may be significantly higher. The committee recognized that the inclusion of nonacademic evaluators would furnish information valuable for assessing nontraditional dimensions of doctoral education and would provide an important new measure not assessed in earlier studies.
From page 26...
... Since these awards have been made on the basis of peer judgment, this measure is considered to reflect the perceived research competence of program faculty. However, it should be noted that significant amounts of support for research in the social and behavioral sciences come from other federal agencies and from private foundations and other nonfederal sources as well, though it was not feasible to compile data from these other sources.
From page 27...
... PUBLICATION RECORDS

Data from the 1978, 1979, and 1980 Social Science Citation Index22 have been compiled on published articles by faculty members in anthropology, economics, geography, history, political science, psychology,

20A copy of the survey instrument used to collect these data appears in Appendix E.

21National Science Foundation, Academic Science: R and D Funds, Fiscal Year 1979, U.S.
From page 28...
... The significance of book publication in the social and behavioral sciences should not be overlooked. A recently published list25 of the most frequently cited

23The full names of individual authors are not available from the Social Science Citation Index.
From page 29...
... Although publication productivity and the impact of published articles tend to be correlated, previous investigation27 indicates that they are quite different variables. Citation counts, had it been feasible to compile them, would have complemented measures 17 and 18 and been a highly desirable measure in assessing the publication records of program faculty.
From page 30...
... The initial table in each chapter also presents estimated standard errors of mean ratings derived from the four survey items (measures 08-11). A standard error is an estimated standard deviation of the

28The conversion was made from the precise raw value rather than from the rounded value reported for each program. Thus, two programs may have the same reported raw value for a particular measure but different standardized values.
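Footnote 28's point is that standardization operates on the precise raw values, so two programs whose raw values round to the same reported figure can still receive different standardized values. This excerpt does not specify the exact transform, so the sketch below assumes an ordinary z-score standardization:

```python
from statistics import mean, pstdev

def standardize(raw_values):
    """Standardized scores for one measure across programs.

    A plain (x - mean) / sd transform is assumed here; the report's
    exact scaling is not given in this excerpt.  The conversion uses
    the precise raw values, not the rounded reported values.
    """
    mu, sd = mean(raw_values), pstdev(raw_values)
    return [(x - mu) / sd for x in raw_values]
```

Two raw values such as 0.614 and 0.6141 both round to a reported 0.61, yet `standardize` assigns them (slightly) different scores, which is exactly the behavior the footnote flags.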
From page 31...
... The comparisons of survey ratings with measures of program size are presented as the first two figures in each chapter and provide evidence about the number of small programs in each discipline that have received high reputational ratings. Since in each case the reputational rating is more highly correlated with the square root of program size than with the size measure itself, measures 01 and 02 are plotted on a square root scale.31 To assist the reader in interpreting results of the survey evaluations, each chapter concludes with a graphical

29The standard error estimate has been computed by dividing the standard deviation of a program's ratings by the square root of the number of ratings.
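Footnote 29 gives the standard-error formula explicitly: the standard deviation of a program's ratings divided by the square root of the number of ratings. A direct implementation (whether the committee used the sample or population standard deviation is not stated; the sample version is assumed here):

```python
from math import sqrt
from statistics import stdev

def rating_standard_error(ratings):
    """Estimated standard error of a program's mean rating, per
    footnote 29: sd of the ratings / sqrt(number of ratings).
    The sample standard deviation is an assumption."""
    return stdev(ratings) / sqrt(len(ratings))
```

For ratings `[1, 2, 3, 4, 5]` this gives a standard error of about 0.71; with more raters the same spread of ratings yields a proportionally smaller standard error.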
From page 32...
... In comparing the mean ratings of two programs, if their reported confidence intervals of 1.5 standard errors do not overlap, one may safely conclude that the program ratings are significantly different (at the .05 level of significance) -- i.e., the observed difference in mean ratings is too large to be plausibly attributable to sampling error.32 The final chapter of this report gives an overview of the evaluation process in the seven social and behavioral science disciplines and includes a summary of general findings.
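The report's rule of thumb for comparing two programs -- declare a significant difference when intervals of 1.5 standard errors around the two means do not overlap -- is a simple interval check (the function name and signature are mine):

```python
def ratings_differ(mean_a, se_a, mean_b, se_b, k=1.5):
    """Apply the report's overlap rule: two mean ratings are judged
    significantly different (at roughly the .05 level) when their
    intervals of k = 1.5 standard errors do not overlap."""
    lo_a, hi_a = mean_a - k * se_a, mean_a + k * se_a
    lo_b, hi_b = mean_b - k * se_b, mean_b + k * se_b
    return hi_a < lo_b or hi_b < lo_a
```

For example, means of 3.0 and 3.5 with standard errors of 0.1 give intervals [2.85, 3.15] and [3.35, 3.65], which do not overlap, so the difference is judged significant; with standard errors of 0.2 the intervals overlap and no conclusion is drawn.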


This material may be derived from roughly machine-read images, and so is provided only to facilitate research.