
Indicators of Precollege Education in Science and Mathematics: A Preliminary Review (1985)

Chapter: 2. The Selection and Interpretation of Indicators

Suggested Citation:"2. The Selection and Interpretation of Indicators." National Research Council. 1985. Indicators of Precollege Education in Science and Mathematics: A Preliminary Review. Washington, DC: The National Academies Press. doi: 10.17226/238.


2 The Selection and Interpretation of Indicators

In order to develop effective policy for precollege education in science and mathematics, information is needed on its current condition and on the effects of efforts to improve it. Given, however, that there are limitations to the resources that can be devoted to data collection, what aspects of science and mathematics education is it most important to monitor? And what kind of information is most useful for the lay governing bodies and professionals involved in making decisions about these critical areas of education?

AVAILABLE DATA AND INFORMATION ON EDUCATION

A large supply of statistical data and research information is available on education in general. At the national level, the National Center for Education Statistics (NCES) of the Department of Education has as its main responsibility "to report full and complete statistics on the conditions of education in the United States . . ." (General Education Provisions Act, as amended (20 U.S.C. 1211e-1)). The Center publishes two major compilations annually: The Digest of Education Statistics, issued since 1962, which provides an abstract of statistical information on United States education from prekindergarten through graduate school, and The Condition of Education, issued since 1975, which presents the statistics in charts accompanied by discussion. The NCES and other components of the Department of Education also sponsor periodic surveys, for example, the High School and Beyond Study, a study of 1980 high school graduates and 1980 sophomores (National Center for Education Statistics, 1981a), which was extended to 1982 graduates,

and the earlier National Longitudinal Study of 1972 high school graduates (National Center for Education Statistics, 1981b). These studies provide information on student enrollment and achievement, although information specific to mathematics and science education is limited. The Department of Education also supports the National Assessment of Educational Progress (NAEP), which since 1969 has provided data on scholastic achievement and student attitudes, one of the few such sources that involve well-designed national samples.

Another source of information is the International Project for the Evaluation of Educational Achievement (IEA), in which the United States has participated. A comparison of mathematics achievement and schooling variables in 12 countries was carried out using data collected in 1964 (Husén, 1967); an assessment of science education involving 14 countries was done in 1970 (Wolf, 1977). New data on mathematics achievement in 24 countries were collected in 1981-1982 and their analysis is in progress; a summary report on findings in the United States is available (Travers, 1984). The science assessment is also being repeated, with 30 countries participating. Both the Department of Education and the National Science Foundation (NSF) as well as private foundations have provided support for these international assessments.

The NSF has special responsibility in the area of science and mathematics education, but most of its data collection activities focus on higher education and scientific and engineering personnel rather than on precollege education. However, NSF does support some of IEA's work and has sponsored special studies on science and mathematics in the schools, most recently a national science assessment using the NAEP framework (Hueftle et al., 1983). Three landmark studies were carried out in 1977-1978 with NSF support: a review of the literature on science and mathematics improvement efforts between 1955 and 1975 (Helgeson et al., 1978; Suydam and Osborne, 1978); a survey in 1977 of the current status of education in these fields (Weiss, 1978), which will be repeated in 1985; and a series of case studies of schools (Stake and Easley, 1978). Some of the information resulting from these NSF-supported studies and data from other sources have been compiled in a data book (also covering higher education and employment in science and engineering), which was first issued in 1980 and revised in 1982 (National Science Foundation, 1980, 1982a).

Every state also has its own data collection system, much of it devoted to fiscal, demographic, and managerial information, but also including data on enrollments, personnel, and student achievement. There is, however, considerable variation in the types of data collected by states and in the manner of collection, which is not surprising in view of the organizational diversity among the states (and, within each state, among school districts) with respect to their educational practices and institutions (see Tables A2 and A5 in the Appendix). Larger local education agencies also collect information that they find useful for their internal operation as well as data requested by the state agencies. The data from local education agencies exhibit an even greater diversity than do those of the state systems.

In addition to the governmental sources of information, some data are available from private organizations. Educational associations collect relevant data, usually on the supply and demand, pay, and characteristics of teachers (see, for example, Graybeal, 1983). Some scientific societies occasionally survey or study the substance of what is taught in their disciplines at the precollege level and publish their findings.

THE CONCEPT OF INDICATORS

The existence of potentially relevant information does not necessarily make it possible to formulate conclusions about the state of mathematics education or science education--or any other field. For one thing, the data often are not comparable; see, for example, the critique by Gray (1984) of the comparison of state data made by the Department of Education (Bell, 1984). For another, the quality of the data is sometimes too low to permit robust findings. Lastly, because of the massive amount of data, it is difficult to summarize the information or draw implications. The use of suspect data or selective interpretations of data may lead to inappropriate policy, as pointed out by Peterson (1983) and by Stedman and Smith (1983) in their articles on the recent reform proposals for education.

To provide focus to the problem of having to picture complex systems with massive amounts of diverse data, the concept of indicators has emerged. An indicator is a measure that conveys a general impression of the state or nature of the structure or system being examined. While

it is not necessarily a precise statement, it gives sufficient indication of a condition concerning the system of interest to be of use in formulating policy. Johnstone (1981) uses the analogy of the litmus test in chemistry, which gives an indication of the acidity or alkalinity of a liquid but does not provide a precise measure of pH (the concentration of hydrogen ions, the condition that determines acidity or alkalinity). Optimally, an indicator combines information on conceptually related variables, so that the number of indicators needed to describe the system of concern can be kept reasonably small.

Limiting the number of indicators is important for two reasons. First, individuals involved in making decisions about such a complex endeavor as education require information that is relevant and easily understood. To achieve the necessary clarity requires reduction and simplification of pertinent information, together with a discussion of the selected indicators that interprets their values and explains their meaning and limitations. Second, since the progress of any field, such as science or mathematics education, can be tracked only if measures are repeated periodically, the feasibility and cost of indicators become critical factors. There are advantages, then, in adopting a small number of indicators, carefully selected to highlight major aspects of education in the areas of interest, so as to encourage continuing data collection.

There are four stages in the development of indicators: identifying the central concepts relevant to the system in question; deciding what measurable variables best represent those concepts; analyzing and combining the data collected on the variables into informative indicators; and presenting the results in succinct and clear form. Regarding the first step, education systems have generally been modeled in terms of inputs, processes, and outputs. A conceptual framework that follows this model but more specifically maps the domain of science education has been proposed by Welch (1983) and is outlined in Table 1. While the use of such a framework highlights the major areas to be covered, it does not specify the combination of variables that will best portray prevalent conditions in each area. For this purpose, the most important outcomes desired from mathematics and science education must first be specified. Next, the schooling variables that are related to these outcomes and that can be affected by educational policy must be identified. Third, in order

TABLE 1  Domain of Science Education

Context or Antecedent      Transactions             Outcomes
Conditions (Inputs)        (Processes)              (Outputs)
-----------------------    ---------------------    -------------------
Student characteristics    Student behaviors        Student achievement
Teacher characteristics    Teacher behaviors        Student attitudes
Curriculum materials       Classroom environment    Career choices
External influences        Institutional effects    Teacher changes
Advances in science        . . . etc.               Public attitudes
School climate                                      Goals
Home environment                                    . . . etc.
. . . etc.

SOURCE: Adapted from Welch (1983).

to assess current conditions and monitor changes, appropriate measures for the identified variables must be selected (or developed). Using these measures and carrying out a variety of analyses will lead to results that can be displayed in the form of statistical indicators portraying the condition of mathematics and science education. The choice of analyses and indicators, like the selection of variables, should be guided by relevance to policy. The next three sections discuss the selection of variables; succeeding chapters discuss how the selected variables might be measured and analyzed.

SELECTING INDICATOR VARIABLES

Outcomes

The outcome most clearly expected of instruction in mathematics and science is the acquisition of knowledge, abilities, and skills in those fields. Some degree of proficiency is deemed essential for all high school graduates so that they can function effectively in society and manage their personal and family lives; additional preparation may be needed to take advantage of further education

or to participate successfully in the world of work. The importance given to student achievement as an outcome of education is documented by the many measures developed to assess it, ranging from quizzes constructed by individual teachers to standardized tests with norms based on nationally representative samples of students, from minimum-competency tests that are expected to be passed by all students to tests of material in advanced curricula. Most states have their own student assessment programs (see Table 5, in Chapter 3, and Table A5, in the Appendix), as do many of the larger school districts. As noted above, NAEP was established some 15 years ago to provide information on educational achievement for the country as a whole. Although not nationally representative, the scores made by students from year to year on college entrance tests are frequently interpreted by the media and the public as indicators of academic performance. Public interest has extended to international data on student achievement; the results of the tests administered through the IEA have been used to document the achievement of U.S. students compared with that of students in other countries. Since these various means of assessing student achievement do not always yield consistent results, syntheses and interpretations are necessary; see, for example, the one done for mathematics and science achievement by Jones (1981).

The emphasis and resources invested in assessing student achievement demonstrate the importance attached to this outcome--in fact, the acquisition of knowledge is the main reason for the existence of formal education. Hence, student achievement must be considered the primary indicator of the condition of science and mathematics education.

A second outcome often stated as a goal of science and mathematics education is the development of favorable attitudes of students toward these fields. Thus, for example, the most recent national science achievement assessment (Hueftle et al., 1983) included items on student attitudes toward science activities and science classes, science teachers, and science careers and about the usefulness of science. It is not clear, however, whether favorable attitudes are to be considered a desired outcome of schooling in and of themselves or whether they are considered important because they are believed to mediate such other desirable outcomes as increased involvement with mathematics and science activities and therefore increased achievement.

Research evidence on the relationship between psychological factors and achievement indicates that classroom morale and encouragement at home correlate rather highly with student achievement (Walberg, 1985), but the correlation between favorable attitudes toward a particular subject and success in learning that subject is fairly low (Welch, 1983; Horn and Walberg, 1984). In an analysis of research results from a number of studies on the relationship between science achievement and science attitude, Willson (1983) also found only a modest correlation of .16 across all grade levels, including college. In the same study, causal ordering results supported the hypothesis that achievement affects attitude rather than the other way around, at least for grades 3 to 8. One problem in the assessment of attitudes and interpretation of results is the lack of adequate theory: as a consequence, some of the instruments and test items that have been used to assess attitudes toward science have given inconsistent and ambiguous results, raising doubt as to what is really being measured (Munby, 1983). Given the uncertainties about the significance of favorable attitudes toward a particular field of study and about some of the measures used, the committee in this report has not treated them as a primary indicator of science and mathematics education.* The committee believes that the question of developing and using an indicator representing student attitudes toward science and mathematics deserves reconsideration in any further work on indicators.

*Wayne W. Welch dissents from this decision.

Other outcomes of education generally considered to be important include college attendance, choice of college majors, choice of careers, and later career paths, including life income and job satisfaction. Each of these has received the attention of researchers seeking to assess the benefits of education; each is important to individual and societal goals and to the development of human resources. However, each is mediated by many variables other than those associated with schooling. For example, it has been suggested that plans for college attendance and field of study might be taken as a proxy for student attitudes, but economic conditions and perceptions of future employability strongly affect such plans.

One school variable, additional years of schooling, has been found to be correlated with increases in overall

lifetime income and with job satisfaction, but neither of these outcomes has been tied to instructional variations within the precollege experience, given the same number of years of school completed. (Student achievement, however, does predict years in school.) Despite the lack of strong correlations between school achievement and work performance, employers continue to resort to secondary indicators such as academic degrees achieved and schooling records for applicants without prior experience (Spence, 1973), because degrees and schooling records can be more readily assessed than nonschool variables that might be related to job performance. This use of school variables to select new employees does not imply that career outcomes should be used as an indicator of schooling quality. In general, the more distant an outcome from the immediate purpose of instruction, the more tenuous the link and the more likely that nonschool variables will affect that outcome. Pending research findings that more clearly link schooling variables to career achievement and other life outcomes, the committee has not chosen to include in this preliminary review indicators representing such outcomes.

Schooling Inputs and Processes

The selection of student achievement as the outcome variable of greatest interest determines to a considerable extent what schooling input and process variables need to be selected, namely, those that seem to have some causal relationship to student achievement. The landmark study by Coleman et al. (1966) and several succeeding studies appeared to throw into question the intuitively obvious connection between differences in schooling and student performance. More recent work, however, has consistently shown significant positive associations between certain schooling variables and cognitive achievement by students. The most robust effects are correlated with "opportunity to learn": that is, whether and for how long students are exposed to particular subject matter. Opportunity to learn in school consists of the instructional time spent on a subject together with the content of that instruction. To a considerable extent, both time and content are controlled by the teacher, although in secondary school students themselves decide at least in part how many units of a subject to study.
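As a concrete gloss on this definition, the sketch below shows one way an "opportunity to learn" measure might be assembled from the two components just named, instructional time and content coverage. The data structure, field names, ceiling of 300 minutes per week, and equal weighting are illustrative assumptions, not a construction proposed by the committee.

# Illustrative sketch: a composite "opportunity to learn" (OTL) index
# built from instructional time and content coverage. Records, field
# names, and the equal weighting are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class CourseExposure:
    subject: str
    minutes_per_week: float   # instructional time reported for the subject
    topics_covered: int       # test-relevant topics the teacher reports covering
    topics_on_test: int       # total test-relevant topics

def otl_index(exposure: CourseExposure, max_minutes: float = 300.0) -> float:
    """Combine time and content into a single 0-1 index (equal weights)."""
    time_component = min(exposure.minutes_per_week / max_minutes, 1.0)
    content_component = exposure.topics_covered / exposure.topics_on_test
    return 0.5 * time_component + 0.5 * content_component

# Example: a class receiving 200 minutes/week that covers 30 of 40 topics.
print(otl_index(CourseExposure("mathematics", 200, 30, 40)))  # ~0.71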

School Processes: Instructional Time

Educational practice assumes that exposure to a subject will lead to students' acquiring knowledge and skills pertaining to that subject. Recent evidence supporting this assumption comes from major cross-sectional studies and assessment efforts. One such assessment, an extensive study of elementary school teachers in California, found increases in academic learning time strongly associated with increases in student learning (Fisher et al., 1980).

Similar results have also been found for mathematics and science. Using data from the 1977-1978 NAEP study of student performance in mathematics, Welch et al. (1982) found that, while background variables (such antecedent conditions as home and community environment and previous mathematics learning) accounted for 25 percent of the variance in mathematics achievement, exposure to mathematics courses explained an additional 34 percent. The study was replicated by the authors on three different national samples with similar results. Using another NAEP sample, Horn and Walberg (1984) also obtained a sizable correlation (.62) between the number of mathematics courses taken and student achievement for 17-year-olds. In a somewhat different analysis, using data from a special 1975-1976 NAEP study on mathematics achievement, Jones (1984) found that the average mathematics score of 17-year-olds varied from 47 percent correct for those having taken no algebra or geometry courses in high school to 82 percent correct for those having taken at least 3 years of such courses. While some of the difference may be accounted for by the fact that more proficient students tend to take more mathematics courses, part of the difference remains even after adjusting for initial proficiency (see Wisconsin Center for Education Research, 1984).

The relation between amount of schooling and science achievement is also positive. Welch (1983) has shown a correlation of .35 between achievement and semesters of science. Similarly, Wolf (1977) found a correlation of .28 between science test scores and course exposure.

Based on educational practice and experience and the available research evidence, the committee believes that time given to a subject in elementary school and course enrollment in secondary school ought to be considered key process variables in developing indicators of mathematics and science education. This is not to say that instructional time is the only factor affecting learning or that

increases in instructional time will yield equivalent increases in student achievement. Clearly, the quality of instruction as exemplified by such process variables as teacher behaviors, student behaviors, and classroom environment also influences student achievement to a considerable degree. However, given the limited knowledge available about these variables and the constraints inherent in this preliminary review, the committee does not recommend their use as indicators at this time. The process variable of instructional time or course enrollment can be considered a proxy for process variables in general until others can be documented and measured with greater certainty.

Input Variables

Content

The content of instruction is obviously another dimension of opportunity to learn. The research that has been done confirms what common sense would predict: emphasis on specific subject matter increases student performance on tests of that subject. Thus, both Husén (1967) and Wolf (1977), summarizing the IEA mathematics and science assessments, report that student test scores in all participating countries are correlated with the teachers' ratings on whether the topics on the tests had been covered in instruction. The correlation of student achievement with number of mathematics courses taken becomes even stronger when the content of the mathematics courses is taken into account: with the variables controlled for one another, Horn and Walberg (1984) found that an index of the number of advanced mathematics courses taken correlated somewhat more highly with mathematics achievement than did just the number of all mathematics courses taken. The common-sense idea that subject matter content, not only amount of time, is important to student learning has been further documented in an analysis of 105 studies on the effects of alternative curricula: Shymansky et al. (1983:387) found that students exposed to new science curricula (i.e., those developed during the school science and mathematics reforms that followed the launching of Sputnik in 1957) "performed better than students in traditional courses in general achievement, analytic skills, and process skills [i.e., the skills stressed in the materials]. . . . On a composite basis, the average student in new science curricula exceeded the performance of 63 percent of the students in traditional science courses."
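The "63 percent" figure has a standard meta-analytic reading: if the average student in the new curricula outscores 63 percent of the comparison group and scores are roughly normally distributed, the implied mean effect size is about one-third of a standard deviation. The conversion below is our gloss on that convention, not a calculation reported by Shymansky et al.

% Under a normal model, a mean effect size d places the average treated
% student at the Phi(d) quantile of the comparison-group distribution.
% Solving for the effect size implied by the 63rd percentile:
\[
  \Phi(d) = 0.63
  \quad\Longrightarrow\quad
  d = \Phi^{-1}(0.63) \approx 0.33 ,
\]
% i.e., roughly one-third of a standard deviation above the mean of
% students in traditional courses.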

Teachers

The second schooling input deemed critical by the committee is the number and qualifications of teachers with instructional responsibilities in science and mathematics. Classroom teachers are the single most costly resource component in schooling. Although the teacher share of the school dollar has dropped in the last decade--in part because teacher salaries have not kept pace with inflation--those salaries still represented 38 to 44 percent of total direct operating costs for public schools during 1982-1983, even without counting pension payments or fringe benefits (Feistritzer, 1983; Educational Research Service, 1984, personal communication). Moreover, even though the extent of their control over instructional time and content may vary, teachers do determine the nature of classroom instruction.

At the elementary level, the number of teachers is not now an issue, but it may become one as student enrollments increase again in the mid-1980s. Even now, however, the competence of elementary school teachers with respect to mathematics and science is of major concern. Assessing the competence of teachers for grades 7 and 8 poses a special problem. In several states, teachers certified for elementary school are automatically certified to teach those grades as well, without the subject-matter preparation usually required of secondary school teachers; yet those are the grades when differentiation of the curriculum into disciplinary courses begins, and one would expect the need for greater subject-matter knowledge by teachers than for grades 1 to 6. At the secondary school level, both the quantity and the qualifications of the teachers responsible for teaching mathematics and sciences determine what courses are offered and how well they are taught.

Expenditures and Other Cost Factors

In addition to content and the number and qualifications of teachers, other input variables were considered by the committee. One input variable often used to try to explain educational outcomes is the amount of money invested in schools. An effort has been made to determine dollar costs of "adequate" education, state by state (Miner, 1983), that shows wide variability over the states. Differences among communities within states also are large, and are bound to be less tractable from a national perspective.

Some cost factors, especially per-pupil expenditures, teacher salaries, expenditures on books and materials, and acquisition of computers and laboratory equipment, have been separately tracked as important inputs. Attempts to relate such expenditures to student achievement have yielded mixed results. In a review of quantitative studies of school effectiveness, Murnane (1980:14) concluded that the primary school resources are teachers and students and that such other inputs as physical facilities and class size "can be seen as secondary resources that affect student learning through their influence on the behavior of teachers and students." Little is known, however, about the ways in which teacher and student behaviors are related to alternative investments, say, in teacher salaries, materials and equipment, school plant, specialist teachers, and the like.

A major cost factor is class size, yet the evidence indicates that marginal (if costly) decreases in class size of two or three students (e.g., from 33 to 30) hardly affect achievement (Glass et al., 1982). In a study of achievement gains in grades 3 to 6, Summers and Wolfe (1977) found that large classes (more than 28 pupils) were detrimental for low-achieving students but were beneficial for high achievers, a finding that might explain the inconsistency of results of research on class size that fails to consider the achievement levels of students.

Another major cost factor is that associated with teacher salaries. While salary level might be a good indicator of public attitudes about education, it has not consistently been found to be related to student achievement. Salary levels are related both to the seniority of teachers and to the extent of teachers' education beyond the B.A. level. But neither teacher seniority nor post-baccalaureate education seems to show a simple positive relationship to student learning. Indeed, under some circumstances, a negative relation between student achievement and post-baccalaureate education is reported (e.g., Summers and Wolfe, 1977; Hilton et al., 1984). Since teachers with advanced degrees command higher salaries than those without such degrees, this finding would lead to the expectation that teacher salaries would also relate negatively to student achievement.

In a review of 130 studies that analyzed the relationship between student performance and school expenditures, Hanushek (1981:30) concluded that "higher school expenditures per pupil bear no visible relationship to higher

student performance." Walberg and Rasher (1979) conjecture that it may not be total educational expenditure that makes a difference, but highly targeted and selective investments. Yet school budgets, whether local or state, are neither constructed nor reported to provide the kind of detail needed to track expenditures for specific subject areas such as science or mathematics. Even if it were feasible--probably at considerable cost--to disaggregate budgets in this manner, the expenditures would still need to be related to student achievement before they could be accepted as a useful indicator. So far, adequate evidence is lacking.

Another approach might be to track federal support. There is evidence that the post-Sputnik federal investment in science and mathematics education helped increase both enrollment and performance in those subjects. But while the programs supporting science and mathematics education within the National Science Foundation and the Department of Education are generally identifiable, some others of considerable magnitude--for example, those sponsored by the Department of Defense and by the National Aeronautics and Space Administration--are not.

In the absence of relevant budgetary information and without further evidence on the relationship between educational spending and student performance, the committee, in this preliminary review, decided not to recommend use of expenditure data as an indicator. Given interest in the funding of education, however, financial data and research on the economics of education should be reexamined in any future consideration of indicators.

Public Attitudes

One other indicator of input was considered by the committee: public attitudes toward science and mathematics education. Perception of these fields appears to have a discernible effect on the emphasis they receive in school, as witness the current wave of increases in requirements for high school graduation (see Table 5, in Chapter 3). Federal funding may be another indication of public attitudes; for example, the share of the total NSF budget allocated for science education rose to nearly 50 percent in the late 1950s, decreased to about 30 percent in the 1960s, has been 10 percent or less over the last decade, and is now on the rise again (Klein, 1982). But these fluctuations are not mirrored in measures of public opinion. The results of 15 years of polling by the Gallup Organization on attitudes toward

education do not show parallel swings: mathematics has ranked high in importance as a school subject throughout this period; science generally has ranked near the average of school subjects (see, e.g., Gallup, 1981, 1983). Given little change in public attitudes over the last 15 years, at least as demonstrated by this measure, and the uncertainty of the relationship between public attitudes and schooling outcomes, the committee did not use this variable and is not recommending its development as an indicator.

Conclusion

In sum, the committee has identified a minimal set of key schooling variables that should be monitored, shown in Figure 1. Assessing the condition of each of these variables will set the stage for the development of indicators. For example, counting the number of certified mathematics teachers actively teaching in a particular school year provides a datum that could be displayed against other pieces of information: total secondary school enrollment, enrollment in mathematics courses, total number of secondary school teachers, expected demand for mathematics teachers, numbers of mathematics teachers in some previous year, or--if there are separate counts for different geographic entities--comparisons of the density of mathematics teachers related to student population.

FIGURE 1  Areas of science and mathematics education to be monitored. [Inputs: teachers (quantity, quality) and curriculum content. Process: instructional time/course enrollment. Outcome: student achievement.]
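A minimal sketch of the example in the paragraph above follows: one datum, the count of certified mathematics teachers, displayed against several of the other quantities listed. All numbers and field names are invented placeholders, not data from the report.

# Illustrative sketch: displaying one datum (certified mathematics
# teachers actively teaching) against other quantities, as the text
# suggests. All figures below are invented placeholders.

state = {
    "certified_math_teachers": 1_200,
    "certified_math_teachers_prev_year": 1_150,
    "secondary_enrollment": 400_000,
    "secondary_teachers_total": 18_000,
}

# Share of the secondary teaching force certified in mathematics.
share_of_force = (state["certified_math_teachers"]
                  / state["secondary_teachers_total"])

# Density: certified mathematics teachers per 1,000 secondary students,
# which permits comparison across geographic entities of different size.
density = 1_000 * state["certified_math_teachers"] / state["secondary_enrollment"]

# Change relative to a previous year.
change = (state["certified_math_teachers"]
          - state["certified_math_teachers_prev_year"])

print(f"share of teaching force: {share_of_force:.1%}")     # 6.7%
print(f"teachers per 1,000 students: {density:.2f}")        # 3.00
print(f"year-over-year change: {change:+d}")                # +50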

COLLECTING INFORMATION

Most of the information available on the variables selected by the committee in the first phase of its work has been collected through surveys and student tests, although occasionally case studies have been employed to describe classroom processes in greater detail (e.g., Stake and Easley, 1978). Some surveys and tests use whole populations; others are based on national (or state) samples; still others are characterized by self-selection of participants, as in the case of the College Board's Scholastic Aptitude Tests (SATs). Some surveys are planned to document conditions at a single point in time (e.g., Weiss, 1978); some, such as several of the NCES data collections, are repeated annually; others--IEA, for example--are repeated at irregular intervals; still others are designed as longitudinal studies that follow a cohort population over a number of years.

Methods for collecting information pertinent to the selected variables depend on the nature of a particular variable and on the types of analyses appropriate for portraying values associated with it. For example, data on the time allocated to each subject in elementary school can be collected through questionnaires to school personnel, but the use of instructional time in the classroom can best be documented by observation. Since this entails time-consuming research procedures, only a limited number of cases can be studied in detail. Case studies are also useful for uncovering problems with data collected through surveys. Thus, data on enrollments in high school courses can be collected from student transcripts, self-reports by students on questionnaires, or reports by school personnel--likely with significant discrepancies among these three sources. Examination of individual course syllabi and observation of the subject matter actually taught under given course titles can clarify such discrepancies. In general, a mix of sample surveys, full population censuses, and case studies seems optimal, with studies linked over time by a consistent set of defined indicators.

Periodic replication of studies is necessary if temporal trends are to be identified, but this does not necessarily mean annual surveys. Careful thought must be given to reducing the response burden entailed in surveys and the disruption that sometimes accompanies case studies. For some purposes, especially for preparing

budgets, annual data may be necessary, but for the purpose of documenting changes over time in the state of science and mathematics education, periods between surveys can be 2 or 3 years, or even 10 years, as in the case of the complex IEA studies. One way of limiting both the expense and the disruption and response burden of periodic surveys and case studies may be to set up a carefully selected panel of schools, with systematic rotation of schools into and out of the panel, to provide a consistent data base.

DISAGGREGATING DATA

Collecting Data at the State and Local Levels

Much of the data used to document the several recent reports on education that have given impetus to various reform efforts come from national surveys or nationally administered tests. Such information may be useful for developing federal education policy and for following general national trends. However, education in the United States is decentralized and, despite some tendencies toward conformity, quite diverse in inputs, processes, and outcomes. Each state education system represents a unique combination of factors; so does each local system. The richness and sometimes even the meaning of information is obscured by reporting only national averages. Indeed, nationally aggregated statistics are of limited use in formulating state and local policy: it is states--and localities--that carry the authority for education. Therefore, if the condition of science and mathematics education is to be portrayed so as to inform all the people and policy makers involved in education, indicators must be selected to be useful at the state and local level as well as at the national level. Moreover, the appropriateness of the indicators must be tested against the burden of collecting the requisite information at each level.

For these reasons, this report presents data relevant to the selected indicators for several states as well as nationally aggregated data. Each of the states cooperating with the committee already has good data systems in place; the inclusion of information from these states is intended to demonstrate both the feasibility of the committee's suggested indicators and some of the problems to be overcome in obtaining the pertinent data. In addition, even though the included

states were not selected on the basis of being representative or exhibiting particular contrasts, the data show considerable variation from the national data as well as from state to state. Analogous data on the same indicators that come from different reporting groups greatly add to the value of the information available, because the variations among them can be analyzed.

Disaggregating Data by Demographic Descriptors

To serve the national goal of equal educational opportunity, it is important to collect certain data by gender and minority status. The reason for this type of disaggregation is to obtain information on critical distributional issues; for example, different enrollment rates by members of different minority groups in advanced mathematics and science courses may provide at least a partial explanation for different achievement levels. Data for a whole school population (or any age cohort) cannot be used to identify such distributional differences. The underrepresentation in the sciences and mathematics of individuals from some minority groups and of females makes it important to collect data pertinent to input and process indicators in such a way as to illuminate existing differences.

Other demographic descriptors may be important for a given indicator. Within a state, for example, the density of population may affect, say, the number of science teachers per number of students in different parts of the state, as may the economic characteristics of different communities.

Separating Data by Educational Level

Since the teaching of science and mathematics in elementary school is not generally provided by specialist teachers and enrollment is not recorded by specific courses, some indicators may have to be represented by different measures at different levels of education. Exposure to science instruction, for example, may be represented in minutes per week in elementary school and by student enrollment in physics, chemistry, biology, and other specific courses in secondary school. Similarly, measures of achievement will need to be different for elementary and for secondary education. A special problem in this regard is the middle or junior high school,

which may comprise any 2, 3, or 4 years between grades 5 and 9 and may be considered part of either the elementary or the secondary school.

INTERPRETING INDICATORS

An indicator acquires meaning according to the interpretation given to its measured value. There are several bases for interpretation, all using comparisons of some sort. Most commonly, the value of an indicator at a given time is compared with its value at some earlier time. For example, changes over time may be observed in the indicator "the percentage of students graduating from high school who have taken three or more years of science," or in the indicator "the percentage of students who achieve within a given range of scores on comparable tests." Another basis of comparison is among groups or geographic entities: this basis is appropriate to address distributional issues. Thus, it is illuminating to examine the supply in various states of certified teachers of science or mathematics as a proportion of the total number of teachers in each of these states assigned to science or mathematics classes, or the proportion of female students enrolled in high school physics classes compared with the proportion of male students. Changes in observed differences among geographic entities or population groups can, of course, also be related to changes over time.

A third basis for comparison is to establish an ideal value for an indicator and record the difference between it and the observed value; for instance, the number of qualified mathematics teachers available might be compared with the supply needed. The problem with this method is that determining the ideal value is usually difficult. For example, a higher demand for teachers might be estimated if it is assumed that higher teacher/pupil ratios are desirable because they yield higher student achievement than if the estimate is based on current teacher/pupil ratios. Establishing ideal values often involves judgments about goals and priorities; it is therefore best left to those making policy about education rather than to those providing information.

For indicators for which ideal values cannot be established, international comparisons (a variation of comparing geographic regions) are sometimes used, as in the case of student achievement. Such comparisons are subject to major methodological criticism because of

social, cultural, economic, and political dissimilarities in the purposes and practices of education in different countries. Yet, in the absence of ideal values, student achievement in science and mathematics in other industrialized nations continues to be used as a benchmark against which to assess student achievement in this country. The most responsible of the international studies, including those carried out under the IEA auspices, have collected information on differences in cultural traditions, family variables, forms of educational organization, and schooling processes, so that the ways in which these differences affect student achievement might be examined. Also, the tests used to assess achievement in science and mathematics (as well as in other fields) are carefully standardized. They are based as much as possible on a common core of the various curricula in use in the different countries and thus represent agreement on what students ought to know, even though much of the content of advanced courses may not be included in the tests. Hence, international comparisons of the performance scores on these tests are relatively free of the kinds of cultural bias that would vitiate comparability in other studies less carefully designed and controlled, and the wealth of accompanying information has served to explain some of the differences in results.

All three methods of interpreting indicator values--comparisons over time, comparison among groups or geographic entities, and comparison to an ideal value--are used in this report. These interpretations are accompanied by commentary on their appropriateness and associated difficulties in given instances.
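To restate the three bases of interpretation in operational terms, a minimal sketch follows. The indicator chosen (the first example quoted above), the years, and all values are hypothetical.

# Illustrative sketch of the three bases for interpreting an indicator:
# (1) comparison over time, (2) comparison among groups or geographic
# entities, and (3) comparison with an ideal value. Values are invented.

# Indicator: percentage of graduates with three or more years of science.
by_year = {1978: 22.0, 1983: 27.0}                 # basis 1: over time
by_group = {"female": 24.0, "male": 30.0}          # basis 2: among groups
ideal = 50.0                                       # basis 3: a posited goal

trend = by_year[1983] - by_year[1978]
gap = by_group["male"] - by_group["female"]
shortfall = ideal - by_year[1983]

print(f"change over time: {trend:+.1f} percentage points")         # +5.0
print(f"male-female enrollment gap: {gap:.1f} percentage points")  # 6.0
print(f"distance from ideal value: {shortfall:.1f} percentage points")  # 23.0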
