

3 What Are Indicators?
Pages 27-39

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 27...
... The next step is to define what indicators are and how they should be distinguished from such other data as simple descriptive statistics or various kinds of qualitative information. In its earlier report (Raizen and Jones, 1985:27-28)
From page 28...
... All social indicator research represents, therefore, some social theory or model, however simplistic. Much research to date laying claim to the term "social indicators research" consists either of descriptive social statistics, which some have argued are not social indicators at all, or of implicit postulations of causal linkages.
From page 29...
... supplementary indicators that are presently feasible or might be developed, and (3) research on hypothesized causal links among some important but poorly understood aspects of education in order to create and validate indicators related to these aspects.
From page 30...
... These problems of interpretation have to be faced before data collection can begin. Choice of Variables Even after the key domains to be monitored have been identified (for our purpose: student learning, general scientific and mathematical literacy, student behavior, teaching quality, curriculum quality, and financial and leadership support), the number of possible variables from which to choose in constructing indicators of science and mathematics education remains large; a partial list could well number over 100.
From page 31...
... Other differences are found when looking at different grade levels, or when indicators other than poverty are used to represent home background. For example, in Project TALENT, an indicator of socioeconomic environment based on home variables that were hypothesized to exert a more direct effect on achievement (mother's education, books in the home, child has own desk, etc.)
From page 32...
... and achievement at the student level is typically an oval-shaped swarm of points with few outliers. Given this fact, inferring from the within-district school-level correlation of .9 that most low-achieving students come from poor homes is an excellent example of what sociologists call the ecological fallacy: the error of using relationships at one level, such as school, to describe relationships at a lower level, such as student (Robinson, 1950).
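
The school-versus-student distinction in this excerpt lends itself to a small numerical illustration. The following Python sketch is entirely hypothetical; the distributions, coefficients, and variable names are assumptions for illustration, not data from the report. It simulates students nested in schools and shows how a modest student-level correlation between poverty and achievement can appear as a correlation near .9 once the data are averaged to the school level.

```python
# Hypothetical simulation (not from the report) of the ecological fallacy:
# correlations computed on school-level averages can greatly exceed the
# student-level correlation they are often mistaken for.
import numpy as np

rng = np.random.default_rng(0)

n_schools, students_per_school = 40, 100
# Each school has its own mean poverty level; individual students vary around it.
school_poverty = rng.uniform(0.0, 1.0, n_schools)
poverty = (np.repeat(school_poverty, students_per_school)
           + rng.normal(0.0, 0.25, n_schools * students_per_school))
# Achievement depends only modestly on individual poverty, with large
# student-to-student noise (all numbers are invented).
achievement = -10.0 * poverty + rng.normal(0.0, 15.0, poverty.size)

# School-level averages (each school's students are contiguous after repeat()).
school_mean_pov = poverty.reshape(n_schools, -1).mean(axis=1)
school_mean_ach = achievement.reshape(n_schools, -1).mean(axis=1)

r_student = np.corrcoef(poverty, achievement)[0, 1]
r_school = np.corrcoef(school_mean_pov, school_mean_ach)[0, 1]
print(f"student-level r = {r_student:.2f}")  # modest: the oval-shaped swarm of points
print(f"school-level  r = {r_school:.2f}")   # much stronger: averaging removes student noise
```

In this toy setup the student-level correlation is only about -.25 while the school-level correlation is close to -.9, which is exactly why reading a school-level figure as a statement about individual students is fallacious.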
From page 33...
... For various reasons, including self-selection, the smaller the percentage of students taking the SAT, the higher their mean SAT scores. Thus, inconsistent aggregation leads to false and misleading comparisons.
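
The self-selection effect described above can also be sketched numerically. The example below is purely illustrative (the score distribution and selection rule are assumptions, not report data): if every state had the same underlying distribution of potential scores, a state where only the strongest fraction of students takes the SAT would still report a higher mean among takers.

```python
# Hypothetical sketch of self-selection: lower participation rates yield higher
# mean scores among takers even when overall achievement is identical.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(500, 100, 200_000)  # assumed potential scores for all students

for take_rate in (0.8, 0.4, 0.1):
    # Crude selection model: mostly the stronger, college-bound students take the test.
    cutoff = np.quantile(scores, 1.0 - take_rate)
    takers = scores[scores >= cutoff]
    print(f"participation {take_rate:4.0%}: mean score of takers = {takers.mean():.0f}")
# Comparing jurisdictions on takers' means therefore confounds achievement with
# participation, which is the "inconsistent aggregation" problem in the excerpt.
```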
From page 34...
... Federal, state, and local education bureaucracies are awash in numbers. The challenge taken up by the committee in this report is to go beyond an endless parade of statistical tables and focus on the key questions and subsequent indicators that will be credible to policy makers in state and local education agencies, the major decision makers, since education in the United States is overwhelmingly a state and local legal and fiscal responsibility.
From page 35...
... For example, closed-ended questionnaires produce standardized information comparable across space and time and are particularly suitable for collecting information on such matters as salaries and defined fringe benefits, for which comparability is critical, and the nature of the desired information is relatively clear-cut. Closed-ended questionnaires are poorly suited, however, to the collection of information dealing with such topics as how teachers
From page 36...
... The choice of how often to collect data for a particular indicator should depend on the importance of the indicator for informing policy and on how rapidly changes are likely to occur in the distribution of the behavior, incentive, or outcome reflected in the indicator. Consequently, we argue for the assessment of student learning at given grade levels every four years, except for science achievement in elementary school, for which the current improvement efforts warrant assessment every two years.
From page 37...
... Because the use of experts is an often-used mechanism, we discuss the problems inherent in its application in some detail. Based in part on our experience with difficulties encountered in the experiment on reviewing the science content of science achievement tests (see Appendix B)
From page 38...
... whose assessments differ systematically, in either a positive or negative direction, from the true values is "biased." In experiments such as the science test review, the standards against which raters assign their scores are critical since they affect the accuracy of the scores as measures of the relative value of alternative tests. Depending on their biases, reviewers may give a poor test relatively high ratings and a good test relatively low ratings, so that two tests that differ widely in their true value are judged, on the basis of average ratings, to be equally effective.
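
A minimal sketch of this bias problem, with invented numbers only: if a panel of reviewers is systematically harsh toward one test and lenient toward another, the averaged ratings can come out equal even though the tests' true values differ widely.

```python
# Hypothetical illustration (assumed 0-10 scale and invented biases) of how
# systematic rater bias can erase a real quality difference when ratings are averaged.
import numpy as np

rng = np.random.default_rng(2)

true_quality = {"good_test": 8.0, "poor_test": 4.0}
# Each reviewer adds a systematic bias to every score; this hypothetical panel
# happens to be harsh on the good test and lenient on the poor one.
panel_bias = {"good_test": np.array([-2.5, -1.5, -2.0]),
              "poor_test": np.array([+1.5, +2.5, +2.0])}

for test, quality in true_quality.items():
    ratings = quality + panel_bias[test] + rng.normal(0.0, 0.3, 3)  # bias plus random error
    print(f"{test}: true value {quality:.1f}, mean rating {ratings.mean():.1f}")
# Both mean ratings land near 6, so the averaged scores suggest the two tests are
# equally effective despite their widely different true values.
```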
From page 39...
... In Appendix E we discuss issues of coordination, pulling together recommendations from throughout the report that imply surveys, referring to ongoing efforts, and outlining suggestions for how desirable new survey efforts might be implemented. More intensive survey design planning, including issues of sample size, should be left to agencies (national, state, or local) that assume or are assigned responsibility for the indicators.


This material may be derived from roughly machine-read images, and so is provided only to facilitate research.