Suggested Citation:"5. Student Outcomes." National Research Council. 1985. Indicators of Precollege Education in Science and Mathematics: A Preliminary Review. Washington, DC: The National Academies Press. doi: 10.17226/238.

5 Student Outcomes

The main reason for investing in formal education is to enable individuals to acquire knowledge, abilities, and skills needed for their working and personal lives and for functioning effectively in society. Proficiency in mathematics and science is deemed essential for both these objectives. Therefore, measures of student achievement in those fields should be used as primary indicators of the condition of mathematics and science education. Most of this chapter is devoted to those indicators.

A second goal of mathematics and science education, often stated by teachers and curriculum guidelines, is to develop positive attitudes toward those fields and toward careers in them. The first section of this chapter indicates some of the reasons that the committee decided not to emphasize indicators representing these variables in this report.

STUDENT ATTITUDES

Both NAEP and IEA collect information from students on their attitudes toward mathematics and science. Mathematics seems to be better liked than most subjects, but its average popularity drops as students grow older. Science appears to be one of the least liked subjects in school, but its average popularity increases somewhat as students grow older. Table 22 gives information on the relative popularity of the major school subjects. The relatively weak relationships established so far between the liking of a subject and achievement in it were discussed in Chapter 2. A second possible reason for tracking student attitudes is that they might affect choices of college majors and future careers. However,

TABLE 22 Percentages of Students Naming Various Subjects in School as Their Favorite, Ages 9, 13, 17

Subject                  Age 9   Age 13   Age 17
Science                    6       11       12
Mathematics               48       30       18
English/language arts     24       15       16
Social studies             3       13       13
Other                     19       31       41
Total                    100      100      100

SOURCE: National Assessment of Educational Progress (1979:5).

according to data from the 1981-1982 national assessment in science (Hueftle et al., 1983), attitudes toward science and choices of college majors may be formed somewhat independently and influenced by different factors. Between 1977 and 1982, favorable student attitudes toward science classes went up nearly 1 percentage point (from 46.8 to 47.7 percent); favorable attitudes toward science teachers increased by over 2 percentage points (from 63.6 to 65.9 percent); and favorable attitudes regarding science careers increased by over 4 percentage points (from 47.8 to 52.2 percent). Yet favorable attitudes regarding the value of science fell nearly 7 percentage points (from 68.4 to 61.8 percent).

Choices of college majors may well be strongly influenced by students' perceptions of labor market demands. For example, computer sciences and engineering have been increasing in popularity, while other sciences, mathematics, and education have been decreasing. Table 23 shows the responses of 1980 and 1982 high school seniors to the question: "Indicate the field that comes closest to what you would most like to study in college." All students were asked this question except those who responded that they were not planning to go to college any time in the future (19.8 percent in 1980 and 18.5 percent in 1982); of those asked the question, about 60 percent responded in 1980 and 63 percent responded in 1982. The changes from 1980 to 1982 appear to continue trends established in the 1970s. A comparison (National Center for Education Statistics, 1984c) of 1972 and 1980

TABLE 23 Choices of Field of Study in College by 1980 and 1982 High School Seniors

                                                   Percent Naming Field
Field                                               1980      1982
Biological sciences                                  2.6       2.0
Computer and information sciences                    4.4       8.5
Engineering                                          9.0       9.6
Mathematics                                          1.0       0.7
Physical sciences                                    1.8       1.5
Psychology                                           2.8       2.2
Social sciences                                      4.6       3.3
Business                                            20.1      20.8
Education                                            5.6       4.0
Health occupations or health sciences                8.8       9.3
Preprofessional (law, medicine, dentistry, etc.)     6.3       5.8
Other                                               33.3      32.3
Total                                              100.0     100.0

SOURCE: Prepared for the committee by Lyle V. Jones, based on a special analysis of HSB data.

high school seniors planning to go to college immediately after graduation shows an increase of more than 4 percentage points for those selecting engineering as their college field of study (8 for males and 2 for females) and of almost 3 percentage points for those selecting computer sciences (almost equal for males and females). The selection of other sciences and of education dropped, decreasing by nearly 6 percentage points for the latter.

Further research could help to establish the extent to which schooling affects student attitudes toward mathematics and science and preference for a college major, as well as the significance of attitudes for such goals as improved student achievement, future performance, and eventual career choice.
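The 1980-to-1982 shifts described above can be tabulated directly from Table 23. A minimal sketch (all percentages are taken from the table; the science-related fields are shown for brevity):

```python
# Percent of seniors naming each field as their intended college major,
# from Table 23 (HSB data); science-related fields only.
choices_1980 = {"Biological sciences": 2.6, "Computer and information sciences": 4.4,
                "Engineering": 9.0, "Mathematics": 1.0, "Physical sciences": 1.8}
choices_1982 = {"Biological sciences": 2.0, "Computer and information sciences": 8.5,
                "Engineering": 9.6, "Mathematics": 0.7, "Physical sciences": 1.5}

# Percentage-point change from 1980 to 1982 for each field
change = {f: round(choices_1982[f] - choices_1980[f], 1) for f in choices_1980}
for field, delta in sorted(change.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{field}: {delta:+.1f} points")
```

The computation makes the pattern in the prose explicit: computer and information sciences gained 4.1 percentage points, engineering gained slightly, and the other sciences and mathematics declined.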

STUDENT ACHIEVEMENT

In his examination of student achievement in mathematics and science, Jones (1981) found that the average test scores for all students had declined steadily between the early 1960s and the late 1970s, but that the average test scores in mathematics and science of high school seniors who intended to go to college and major in those fields had remained quite stable. Accordingly, in this section student achievement is discussed separately with respect to results of tests of nationally representative samples of students and of college-bound students. Before the discussion of test results, however, measures of achievement and their limitations are considered.

Measures of Achievement

Grades

The measure of achievement most widely used in American schools is the grade assigned by the teacher at the end of a course of instruction. Grade-point averages are used to assign class rankings to students and are given consideration by college officials in deciding who should be admitted to their institutions. However, there are no established standards for the awarding of grades; therefore, while grades may provide some sense of the different performances of students within the same class, the meaning of a specific grade is likely to vary from class to class, from school to school, from region to region, and from year to year. Students with high grades would be expected to have relatively high grades were they in different places or at different times, but identical grades clearly do not imply identical performances. Hence, grades are not satisfactory for comparing the achievement of students in different geographic areas or over time.

Some university admissions offices maintain data banks that compare high school grades with university performance and then use the results to calibrate the grading in the high schools. While such information might be a source of data about grading practices, at best the information would be applicable only to a highly selective sample of schools.

Test Scores

One measure that has come to be used for assessing educational performance is the score attained on the College Board's Scholastic Aptitude Tests (SATs). SAT scores are not appropriate for use as indicators of school achievement for all students, however, because the tests are taken by only about one-third of any relevant student cohort. And since students select themselves to take SATs, that one-third is representative neither of the student body as a whole nor even of that portion planning to enter post-secondary education. Moreover, the factors that affect student self-selection may well change over time, leading to difficulties in temporal comparisons of SAT scores. This possible variation may hold even for students who score in the top range (700-800), which would make questionable the use of the number of students in the top range as an indication of change in the educational performance of the most able students (see The Chronicle of Higher Education, 1983).

At the national level, there are three sources of information on general student achievement in science and in mathematics: the NAEP results; results from the longitudinal studies sponsored by NCES (including the High School and Beyond survey of 1980 seniors and sophomores and of 1982 seniors and the longitudinal study of 1972 seniors); and, for college-bound students, the achievement tests administered by the College Board and the American College Testing Program. A variety of standardized tests and specially constructed tests are used in state assessments (see Table 5, in Chapter 3, for a listing of states that mandate assessment; see Table A5, in the Appendix, for examples of such testing). Many of the larger local school districts also construct their own tests within state guidelines. At the international level, IEA conducts assessments at several age levels and for key instructional areas. Unfortunately, the most recent published IEA findings on mathematics achievement in various countries are 20 years old, and the IEA science results date back to 1970. A second round of assessments of student achievement in both fields is currently under way; preliminary results are reviewed below, following a brief discussion of the limitations of test scores as measures of student achievement.

Limitations of Achievement Tests

The most serious criticism leveled against commonly used achievement tests is that they do not test knowledge that is considered by experts in the field to be important for students to know; for example, the kind of mathematics that will be needed in a society with universal access to calculators. A further criticism is that tests do not always correspond to the course content that students have had an opportunity to learn. It seems appropriate for tests both to be based on contemporary knowledge and skills and to test what has been taught; yet these may be incompatible demands. With some tests, neither objective may be satisfied. The possible discrepancies between subject matter to which students have been exposed and topics included on commonly used standardized tests in mathematics have already been discussed. Many state-constructed tests do sample the curriculum, and NAEP and the High School and Beyond study also have tried to cover a common core of knowledge expected of students at the educational levels being assessed. The need to base tests on what almost all students are likely to have learned in their classes eliminates the possibility of assessing achievement not deemed part of the common core. Consequently, especially in the case of national assessments, few of the mathematics topics taught beyond 10th grade are included, and science topics tested also tend to sample what is deemed to be a common core in the biological and physical sciences. State-constructed tests may be more specific, but specificity introduces variability, which means that results cannot be compared or aggregated for purposes of reporting on a nationwide basis.

Assessment programs often serve several different purposes, for which the tests used may be more or less appropriate. Tests are used to assess the level of student performance; to determine whether a defined degree of competency has been reached; to compare state or district results with national results or with results from other districts or geographic areas; to assess the performance of teachers and school systems; and to validate curriculum guidelines. For several of these purposes, comparisons over time are of interest; such comparisons require inclusion of some of the same test items from year to year. For other purposes, test items need to be changed to reflect new curricula, making the

results of such tests less appropriate for use in comparisons over time.

Other issues have been raised with respect to widely used standardized achievement tests (Tyler and White, 1979; Wigdor and Garner, 1982). Norm-referenced tests, which relate individual raw test scores to the scores of a comparison group, provide data that make possible the ranking of test-takers. Such tests have been criticized because they tend to concentrate on items common to the instruction of large numbers of students and on items that result in maximum spread among scores. Hence, they are less useful in determining what individuals do and do not know. Domain- or criterion-referenced tests, intended to sample the total domain of instruction in a given subject, have been advocated as an alternative. One difficulty with this approach is to construct test questions that will provide adequate coverage; another difficulty is to establish the criterion that indicates acceptable performance. Despite the different frames of reference, the distinction between these two types of tests is not sharp: many criterion-referenced tests have been normed, and recently published norm-referenced tests have been designed to meet instructional objectives in some depth (Gardner, 1982).

The format of tests, whether norm-referenced or criterion-referenced, also may limit what is being tested. Frederiksen (1979) points out that, while multiple-choice tests can measure much of the knowledge and some of the skills needed for problem solving, they do not reflect all the thinking processes that an individual uses in solving problems of any complexity. Tests are needed that allow the student to exhibit those behaviors critical in doing mathematics or science. For example, students might be given hands-on tasks and their performance recorded in terms of process as well as the final answer, with the quality of the response assessed from several points of view. As another alternative, some researchers have experimented with computer simulation to combine assessment and diagnostic testing (Brown and Burton, 1978). Clearly, such alternative forms of testing imply a different level of investment in assessment than has typified past efforts.

The influence of tests on what is being taught also merits consideration. Because they tend to emphasize traditional topics and neglect subject matter of greater currency and importance, tests may exercise a negative influence on the curriculum by discouraging changes. Even when tests are designed for assessment rather than

for evaluating a curriculum, they tend to influence instructional content, particularly when the same or analogous tests are used to make comparisons of student achievement over time. This may be desirable if the tests embody important learning areas, as is intended for the New York Regents examinations. However, if the tests do not tap higher-order skills, they may serve to trivialize instruction. In an examination of influences of testing on teaching and learning, Frederiksen (1984:195) found ". . . evidence that tests do influence teacher and student performance and that multiple-choice tests tend not to measure the more complex cognitive abilities. . . . The more economical multiple-choice tests have nearly driven out other testing procedures . . . the greater cost of tests in other formats might be justified by . . . encourag[ing] the teaching of higher level cognitive skills . . ."

The criticisms of current approaches to testing are not new, and to voice them here does not blunt the committee's recommendation that student achievement be considered the most important outcome of science and mathematics education to be monitored. Nor does it imply that current assessment programs should be deemphasized, but rather that they should be supplemented by the inclusion of improved forms of testing.

Achievement: All Students

Several assessments in both science and mathematics that are applicable to all students and designed to provide comparisons over time have been conducted by NAEP. Other evidence about student achievement comes from assessments in each field carried out by IEA; those assessments have been used to compare achievement in different countries and at different times. For mathematics, NLS and HSB data make possible comparisons among the high school classes of 1972, 1980, and 1982.

Mathematics

NAEP assessed the mathematics achievement of 9-, 13-, and 17-year-olds in school in 1973, 1978, and 1982. The basic measure used was the percentage of students responding acceptably to a given item. Most of the items used to assess 17-year-olds involved material typically learned by early 10th grade. For each age group, a number of

items were common to the tests used in the 3 years. Table 24 shows mean performance on the common items; Table 25 shows performance on all items and change in percentages of items answered correctly between 1978 and 1982.

The results in Table 24 indicate that the average performance of 9-year-olds was relatively stable over the 9 years (1973-1982); the average number of right answers for 13-year-olds increased by about 8 percent (a gain of 4.2 percentage points) between 1978 and 1982 after an earlier decline; and the performance of 17-year-olds remained relatively stable between 1978 and 1982, also after declining between 1973 and 1978. (It should be remembered that the design of NAEP is cross-sectional rather than longitudinal: e.g., the sample of 13-year-olds tested in 1982 does not consist of the same students as the sample of 9-year-olds tested 4 years earlier.) Table 25 also gives an overview by selected characteristics of the participants, showing that greater gains were made between 1978 and 1982 by black and Hispanic students than by white students.

While the gains made by the younger students are encouraging, a more detailed analysis by items that assess different types of skills led NAEP researchers (National Assessment of Educational Progress, 1983:9) to conclude that "students improved most on easier knowledge and skill exercises, least on those that required a more complete grasp of mathematics or more sophisticated skills." In particular, students appear to be able to perform arithmetic operations but do not know which algorithm to use or how to apply their answers to the solution of practical problems.
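The 13-year-olds' gain is quoted both as 4.2 percentage points and as roughly 8 percent; the two figures describe the same change in different terms. A minimal check, using the age-13 common-item means from Table 24:

```python
# Mean percent correct on common items for 13-year-olds (Table 24)
score_1978 = 52.2
score_1982 = 56.4

point_gain = score_1982 - score_1978            # absolute gain, in percentage points
relative_gain = 100 * point_gain / score_1978   # relative change, in percent

print(f"{point_gain:.1f} percentage points = {relative_gain:.1f}% relative gain")
# prints "4.2 percentage points = 8.0% relative gain"
```

The percentage-point figure compares scores directly; the percent figure expresses the gain relative to the 1978 baseline.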
TABLE 24 Mean Performance Levels on Three Mathematics Assessments, Common Items, Ages 9, 13, 17

       Number       Mean Percent Correct     Percent Change,
Age    of Items    1973     1978     1982    1973-1982
 9       23        39.8     39.1     38.9      -0.9
13       43        53.7     52.2     56.4       2.7
17       61        55.0     52.1     51.8      -3.2

SOURCE: National Assessment of Educational Progress (1983:2).

TABLE 25 Percentages of Success and Change on All Mathematics Exercises, 1978-1982: Selected Groups, Ages 9, 13, 17

                      Age 9 (233 Items)       Age 13 (388 Items)      Age 17 (383 Items)
                      1978   1982   Change    1978   1982   Change    1978   1982   Change
Nation                55.4   56.4    +1.0     56.6   60.5    +3.9     60.4   60.3    -0.1
White                 58.1   58.8    +0.7     59.9   63.1    +3.2     63.2   63.1    -0.1
Black                 43.1   45.2    +2.1     41.7   48.2    +6.5     43.7   45.0    +1.3
Hispanic              46.6   47.7    +1.1     45.4   51.9    +6.5     48.5   49.4    +0.9
Rural                 51.2   52.7    +1.5     52.6   56.3    +3.7     58.0   57.0    -1.0
Disadvantaged-urban   44.4   45.5    +1.1     43.5   49.3    +5.8     45.8   47.7    +1.9
Advantaged-urban      65.0   66.3    +1.3     65.1   70.7    +5.6     70.0   69.7    -0.3

SOURCE: National Assessment of Educational Progress (1983:52).

Seniors participating in the 1972 National Longitudinal Study and in the 1980 High School and Beyond Study (HSB) were given a 15-minute mathematics test intended to measure their ability to solve problems involving quantitative skills; the test did not include any items involving algebra, geometry, trigonometry, or calculus. On the 18 items that were virtually identical in the two tests, the mean score declined by one-sixth of a standard deviation between 1972 and 1980, about the same amount as the decline in average mathematics SAT scores over the same period (National Center for Education Statistics, 1984c). For the HSB test, as for NAEP scores, the gap in average mathematics scores between white and minority-group students narrowed.

The only complete set of mathematics results available from IEA assessments dates back to 1964. Scores for selected countries from the first assessment, carried out in 1964, are shown in Table 26. The scores for students in their final secondary year (13.8 items correct for U.S. students) were for students who took a mathematics course in their senior year. The average score for all U.S.
seniors, whether or not they had taken any mathematics courses in their senior year, was 8.3 items correct out of 69 items and was disproportionately lower than the scores for final-year students elsewhere who also did not specialize in mathematics: e.g., France, 26.2 items correct; Germany, 27.7; Japan, 25.3. It must be remembered, however, that in 1964 the United States retained a much greater proportion of students through completion of secondary school than did most other countries. Differences among countries in the number of years spent in school and in the ages of students in their final year also may have affected the comparisons.
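The "Percent Correct" column in Table 26 is simply the mean score divided by the number of items on the test. A short sketch, using the final-secondary-year means from the table, reproduces the column:

```python
# Table 26, final secondary year: 69 items; mean scores for selected countries
items = 69
means = {"United States": 13.8, "Japan": 31.4, "France": 33.4, "England": 35.2}

# Mean score as a percentage of the total number of items
percent_correct = {c: round(100 * m / items, 1) for c, m in means.items()}
for country, pct in percent_correct.items():
    print(f"{country}: {pct}% correct")
```

This makes the scale of the gap concrete: U.S. mathematics students in the final year answered 20 percent of the items correctly, against roughly 45 to 51 percent in the other countries shown.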

TABLE 26 Average Mathematics Test Scores, IEA, 1964

                   Mean            Percent    Number of
Country            Score    S.D.   Correct    Students

13-Year-Olds (68 Items)
Australia          20.2     14.0    29.7       2,917
Belgium            27.7     15.0    40.7       1,686
England            19.3     17.0    28.4       2,949
France             18.3     12.4    26.9       2,409
Japan              31.2     16.9    45.9       2,050
Sweden             15.7     10.8    23.1       2,554
United States      16.2     13.3    23.8       6,231

Mathematics Students in Final Secondary Year (69 Items)
Australia          21.6     10.5    31.3       1,089
Belgium            34.6     12.6    50.1         519
England            35.2     12.6    51.0         967
France             33.4     10.8    48.4         222
Germany            28.8      9.8    41.7         649
Japan              31.4     14.8    45.5         818
Sweden             27.3     11.9    39.6         776
United States      13.8     12.6    20.0       1,568

SOURCE: Husen (1967).

Preliminary results from IEA's Second International Mathematics Study are available on achievement differences of U.S. students between the first and second assessments and on comparisons to international medians and to the medians of students in the two other areas that have completed their analyses, Japan and British Columbia (treated as a country by IEA). Students were tested during the 1981-1982 school year; the two populations tested were 13-year-olds and students in the final year of secondary school who were studying mathematics as a substantial part of their program. In the United States, these populations consisted of 8th graders (7th graders in Japan) and 12th graders who had taken 3 years of college-preparatory classes in grades 9-11 and were enrolled in precalculus or calculus classes in the 12th grade. The U.S. assessment covered about 6,800 students in the 8th grade and 4,500 students in the 12th grade in public and private schools (191 precalculus and 46 calculus classes).

For the 8th graders, 36 items from the 1964 assessment were included in the second study. There was a decline of 3 percentage points, from 48 percent of the items answered correctly to 45 percent, between the two assessments. Achievement in arithmetic

and in geometry suffered the most, declining from 55 to 49 percent and from 40 to 34 percent answered correctly, respectively--perhaps as much as a half year's decline. There was a slight gain in algebra.

For 12th graders, 20 items were the same in both assessments. The data show a slight overall increase in student performance (Travers, 1984:78): "In spite of all necessary qualifying remarks, the pattern that emerges is one of . . . stability and mild gains for precalculus students, especially in analysis and in comprehension, and of more marked and consistent gains for calculus students. Our best have become somewhat better in the last twenty years."

Comparisons of the achievement of U.S. 13-year-olds with students in Japan and in British Columbia and with international achievement across the 24 participating countries are shown in Table 27. U.S. students scored at the international median in arithmetic, algebra, and statistics but much lower in geometry and measurement. There is evidence that U.S. students have much less opportunity to learn geometry than the other topics and are disadvantaged on the measurement items because the items are based on the metric system, which is not commonly used in the United States. Nevertheless, there is cause for considerable concern about the results for 13-year-olds.

TABLE 27 International Achievement Comparisons, Second IEA Mathematics Study, 13-Year-Olds (180 Items)

              Means:                        International
              United            British    25th                 75th
Subject       States    Japan   Columbia   Percentile   Median  Percentile
Arithmetic      51       61       58          45           51      57
Algebra         43       61       48          39           43      50
Geometry        38       60       42          38           43      45
Statistics      57       71       61          52           57      60
Measurement     42       69       52          47           51      58

NOTE: Test scores are expressed in percent of items answered correctly.

SOURCE: Travers (1984).

Table 28 compares U.S. 12th-grade achievement with international scores: the total sample of U.S. students performs at a level considerably lower than the median level of performance found in the terminal year of secondary school

mathematics in the participating countries. However, this comparison must be treated with considerable caution because at that level of education there are great differences among countries with respect to curricula, student populations, and a host of other factors.

TABLE 28 International Achievement Comparisons, Second IEA Mathematics Study, Grade 12 (136 Items)

                                Means: United States           International
                                Pre-                     25th                 75th
Subject                         calculus Calculus Total  Percentile   Median  Percentile
Sets/properties                    54       64      56       51         61       72
Number systems                     38       48      40       40         47       59
Algebra                            40       57      43       47         57       66
Geometry                           30       38      31       33         42       49
Elementary functions/calculus      25       49      29       28         46       55
Probability and statistics         39       48      40       38         46       64

NOTE: Test scores are expressed in percent of items answered correctly.

SOURCE: Travers (1984).

Preliminary IEA results for the last year of secondary school in Japan indicate continuing high achievement, with Japanese students ranking second of the 14 participating countries in all areas of mathematics.

A recent comparison of mathematics achievement of students in Illinois and Japan (Walberg et al., n.d.) provides further documentation. Since Illinois students perform at about the same level as U.S. students in general (see the Appendix), the study is likely to have implications broader than just for Illinois. For this study, a mathematics test was given in 1981 to 1,700 Japanese and 9,582 Illinois high school students. The Japanese sample was a representative mix of ages; the Illinois sample consisted of high school juniors. The test contained 60 items on algebra, geometry, modern mathematics, data interpretation, and probability. Information on level of mathematics completed (for Illinois students) or opportunity to learn (for Japanese students) was also collected. The achievement results are shown separately for males and females and for the different age groups in Table 29. The authors conclude (Walberg et al., n.d.:6): "For all three age groups (15, 16, and 17 and older), the Japanese exceeded the Illinois students by two standard deviations. . . . Put in another

way, the average Japanese student outranked about 98 percent of the Illinois sample. At the upper ranges, the differences are still more striking. Only about 1 in 1,000 Illinois students attained scores as high as those of the top 100 out of 1,000 (or the top 10 percent of) Japanese students.

These differences in achievement cannot be ascribed to different retention rates; in fact, Japan now has a larger percentage of 17-year-olds still in school than does the United States. Undoubtedly, there are cultural differences that affect student performance. For example, pressure for academic achievement is high in Japan, as may be inferred from the amount of homework reported by students and from anecdotal reports of suicides among teenagers who do not gain admission to a university. There is also an important difference between the two educational systems: Japan has a uniform mathematics curriculum prescribed by the central ministry of education, and students must take all the prescribed courses, while in the United States students may elect which courses to take. The number of courses taken was, in fact, strongly correlated with achievement for the Illinois students, but the investigators caution that requiring more mathematics courses for high school graduation will not necessarily increase achievement, because course content varies.

TABLE 29 Mathematics Achievement Means and Standard Deviations for High School Students from Japan and Illinois

                 Japan (N = 1,700)          Illinois (N = 9,582)
Variable         Percent  Mean   S.D.       Percent  Mean   S.D.
Sex
  Male           58       42.08  9.37       50       19.88  9.55
  Female         42       36.17  7.73       50       19.32  8.54
Age
  15             27       34.35  6.79        7       16.72  7.72
  16             36       40.73  8.91       80       20.49  9.22
  17 or older    37       42.58  9.34       13       15.87  7.34

SOURCE: Walberg et al. (n.d.).

It should be noted that there is

essentially no difference in achievement between U.S. males and females. According to the investigators, the large difference between Japanese males and females is reduced considerably when topic coverage and motivation are taken into account. Differences in these covariates may be due to the fact that there are a sizable number of separate schools for girls and boys in Japan.

Science

There have been four NAEP assessments of the science achievement of 9-, 13-, and 17-year-olds; results are shown in Table 30. The only statistically significant change between 1976 and 1981 is the overall decline for 17-year-olds, largely brought about by a decline of 3.1 percentage points in earth sciences (data not shown). From 1976 to 1981, right answers for 9-year-olds increased by 1.0 percentage point on 30 common items related to science achievement (Hueftle et al., 1983:iv). (The items dealt with scientific inquiry and issues in science-technology-society; science content knowledge was not tested for 9-year-olds in 1981.) The authors note: "This represents the first [overall] positive change at any age level in four assessments." There was no statistically significant change overall on achievement items for 13-year-olds. As noted, scores for 17-year-olds have continued to decline. At all age levels, males continued to outperform females; racial differences also have persisted, but the gap appears to be narrowing at all age levels (data not shown).

Data for the IEA assessment of science achievement were collected in 1970 (Comber and Keeves, 1973). Results for selected countries are given in Table 31. As is the case in mathematics, care must be taken to interpret the science achievement scores for final-year secondary students in light of student retention rates, which vary greatly from country to country.
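The retention-rate caveat can be made concrete with a small sketch. The normality assumption here is an illustration, not a claim of the IEA study: if scores in an age cohort were roughly normal, the mean score of only the top fraction p of the cohort would exceed the cohort mean by phi(z_p)/p standard deviations, where z_p is the score cutting off that top fraction.

```python
import math

def phi(z: float) -> float:
    """Standard normal probability density."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def upper_tail_z(p: float) -> float:
    """Find z with P(Z > z) = p for a standard normal, by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > p:  # tail still too large
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

p = 0.09               # top 9 percent of the age cohort
z_p = upper_tail_z(p)  # cutoff, roughly 1.34 standard deviations
lift = phi(z_p) / p    # mean of the top 9 percent, in s.d. units
print(round(z_p, 2), round(lift, 2))
```

Under these assumptions, the top 9 percent of a cohort averages roughly 1.8 standard deviations above the cohort mean, so a country that enrolls (and tests) only such a group starts with a large built-in advantage over one that retains most of its seniors.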
Wolf (1977) analyzed the changes in country rankings that result from comparing the science scores of seniors representing the top 9 percent of the total age group in each country rather than the scores of all students enrolled in the final year of secondary school. (Nine percent of the age cohort was selected as the cutoff because, at the time of the science assessment, it was the lowest enrollment rate of the senior age group in any of the participating countries.) When only

TABLE 30 Science Achievement of 9-, 13-, and 17-Year-Olds in Four NAEP Assessments [table data not legible in this copy]

the top 9 percent of the age cohort was considered, 6 countries ranked above the United States; by comparison, 13 countries ranked above the United States when all enrolled seniors were considered. Figure 8 shows the different rankings of achievement scores by country as different proportions of the age group are considered, together with school retention rates as of the time of the science assessment.

TABLE 31 Average Score of Students in the IEA International Science Achievement Test, 1970

                          Percentage of Age
Country                   Group Enrolled     Mean Score  Percent Correct

14-Year-Olds (80 Items)
  England                 --                 21.3        26.6
  Germany                 --                 23.7        29.6
  Italy                   --                 18.5        23.1
  Japan                   --                 31.2        39.0
  Netherlands             --                 17.8        22.3
  Sweden                  --                 21.7        27.1
  United States           --                 21.6        27.0

Final-Year Secondary Students (60 Items)
  England                 20                 23.1        38.5
  France                  29                 18.3        30.5
  Germany                  9                 26.9        44.8
  Italy                   16                 15.9        26.5
  Netherlands             13                 23.3        38.8
  Sweden                  45                 19.2        32.0
  United States           75                 13.7        22.8

SOURCE: Organization for Economic Cooperation and Development (1974:213).

IEA conducted a second international science study in 1983. In the United States, almost 5,000 students in more than 200 public and private schools took part--almost 3,000 in 5th grade and almost 2,000 in 9th grade. For both grades, scores improved on items common to the 1970 and 1983 assessments; there were 26 such common items for grade 5 and 33 for grade 9. Results are shown in Table 32. However, these results are open to question because of the low response rate--50 percent of the schools in the sample for 5th grade and 36 percent for 9th grade.
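The two score columns reported in Table 31 are related by simple arithmetic--percent correct is just the mean raw score divided by the number of items--which provides a quick consistency check on such tables:

```python
def percent_correct(mean_score: float, n_items: int) -> float:
    """Convert a mean raw score to percent of items answered correctly."""
    return 100.0 * mean_score / n_items

# Final-year secondary students took a 60-item test (Table 31).
print(round(percent_correct(13.7, 60), 1))  # United States: 22.8
print(round(percent_correct(26.9, 60), 1))  # Germany: 44.8
```

The same check works for the 80-item test given to 14-year-olds: 21.6 items correct out of 80 yields the 27.0 percent shown for the United States.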

FIGURE 8 Mean scores in science of students in the terminal year of secondary school for the top 1, 5, and 9 percent of the total senior age group and for all seniors enrolled. NOTE: Figures in parentheses show the percentage of the senior age group actually enrolled. SOURCE: Organization for Economic Cooperation and Development (1974). [figure not reproducible in this copy]

TABLE 32 Comparison of the 1970 and 1983 IEA Science Studies, U.S. Grades 5 and 9

                   Grade 5                     Grade 9
                   Number     Scores           Number     Scores
Subject            of Items   1970   1983      of Items   1970   1983
Biology             9         50     60        10         60     66
Physical science   17         59     64        23         52     57
Total              26         56     62.5      33         54     59.5

NOTE: Test scores are expressed in percent of items answered correctly.

SOURCE: Krieger (1984:277).

Achievement: College-Bound Students

Because college-bound students are the group from which future scientists, engineers, and technical personnel will be drawn, their performance in mathematics and science is of special interest. Both the College Board and the American College Testing Program administer tests in mathematics and science. About one-fifth of the one million students who take the SATs each year also take the College Board's Achievement Tests in one or more of 13 academic areas, which include two levels of mathematics. Typically, they take three of these tests: one in English composition, one in mathematics (usually level I), and one in another subject area, most often (about 25 percent) in history and social studies. Test score averages over the last 12 years are given in Table 33 for all 14 tests and for the mathematics and science tests separately.

No strong trends are evident in the average achievement scores in mathematics and science. The rise of 10 points between 1973 and 1984 in the average score for all the tests is accompanied by an overall decline of one-third over the last decade in the total number of test-takers and may be attributable to the self-selected nature of this group. Interestingly, in the face of this decline, possibly due to changing requirements for college admission, the number taking the mathematics level II test has been increasing since it was first given, as has the number taking the physics test. In fact, in the last 2 or 3 years, registration for all the mathematics and science tests has been increasing again.

TABLE 33 Average Scores on the College Board Achievement Tests, 1973-1984 [table data not legible in this copy]

Somewhat inconsistent with these findings are trends in the top SAT scores. Between 1972 and 1982, while the total number of students taking the tests fell by only 3 percent, the number scoring above 700 on the mathematics test declined by 20 percent. At the same time, the erosion was much greater for the verbal test (SAT-V): the number of students scoring above 700 on the SAT-V declined by more than 50 percent. Once again, however, the samples of test-takers are self-selected and may vary over time in unknown ways.

Four tests are given by the American College Testing Program (ACT): English, mathematics, social studies, and natural science. Composite scores and separate scores for mathematics and science are shown in Table 34 for 10 percent samples of students who took the ACT tests between 1973 and 1984. Males generally have higher average scores than females on three of the four tests: in 1982, the average difference between males and females was 2.5 ACT score units in mathematics and 2.2 units in natural science--about one-third of a standard deviation in natural science and somewhat less than that in mathematics. In 1984, when all ACT scores went up, females made somewhat greater gains than males, but there was still a gap of 1.4 points in the composite scores (19.3 for males and 17.9 for females), a gap of 2.5 in mathematics, and a gap of 2.5 in natural science. In science, there has been no significant pattern of increase or decrease in scores over time. In mathematics, there appears to be some decline, possibly reversed or at least halted in 1984. It should be remembered that the ACT tests, like the SATs, sample a common core of knowledge in each field rather than the subject matter of specific high school courses.

Independent evidence on the quality of students who choose to go into the sciences and engineering comes from the American Council on Education (Atelsek, 1984).
ACE conducted a sample survey of senior academic officials in 486 institutions with undergraduate programs and in 383 with graduate programs; the sample was designed to be representative of the more than 3,000 institutions of higher education in the United States. About 80 percent of the institutions responded. Of those responding, the majority, 60 percent, reported that there had been no significant change, compared with 5 years earlier, in the quality of undergraduate and graduate students in science and engineering; 25 percent thought there had been a

TABLE 34 ACT Composite, Mathematics, and Natural Science Score Averages, 1973-1984 [table data not legible in this copy]

significant improvement; and 15 percent thought there had been a decline. More than 40 percent of the officials in the largest science and engineering baccalaureate-producing institutions thought there had been an improvement; these officials and deans in doctorate-granting institutions also reported a shift of the most able undergraduates toward science and engineering fields.

FINDINGS

Tests

· It has proved difficult with current test methodology to construct tests that can be used for large numbers of students and yet are adequate for assessing an individual's cognitive processes--for example, the ability to generalize knowledge and apply it to a variety of unfamiliar problems. However, existing tests of mathematics and science of the kind employed by NAEP, HSB, and IEA are sufficiently valid for the purpose of indicating group achievement levels.

Achievement

All Students

· Evidence suggests an erosion over the last 20 years in average achievement test scores for the nation's students in both mathematics and science. Results of the most recent assessments indicate a halt to this decline and, at some grade levels, even a slight increase in scores in mathematics. Much of this generally observed but small increase is due to increasing achievement scores for black students, especially in mathematics in the lower and middle grades.

· Analysis of the most recent NAEP mathematics assessment yields evidence that gains have been made in computational skills but that there is either no improvement or a slight decrease in scores on test items that call for a deeper level of understanding or more complex problem-solving behavior.

· Available information on how well U.S. students perform compared with students in other countries shows U.S. students generally ranking average or lower, with students in most of the industrialized countries performing increasingly better than U.S. students as they move through school. Taking account of different student retention rates in different countries changes this finding somewhat in favor of the United States, but the most recently available data, especially data comparing the United States with Japan, are unfavorable for the United States.

College-Bound Students

· There is evidence that college-bound students perform about as well on tests of mathematics and science achievement as they did a decade or two ago.

CONCLUSIONS AND RECOMMENDATIONS

Assessments of Achievement

· Systematic cross-sectional assessments of general student achievement in science and mathematics, such as the ones carried out through NAEP, should be carried out no less often than every 4 years to allow comparisons over relatively short periods of time. The samples for these assessments should continue to be sufficiently large to allow comparisons by ethnic group, gender, region of the country, and type of community (urban, suburban, rural, central city).

· Longitudinal studies such as High School and Beyond are important for following the progress of students through school and later and should be maintained.

· International assessments in mathematics and science education such as those sponsored by IEA need to be carried out at least every 10 years.

Tests

· Developmental work on tests is needed to ensure that they assess student learning considered useful and important. Instruments used for achievement testing should be reviewed from time to time by scientific and professional groups to ensure that they reflect contemporary knowledge deemed important for students to learn. Such reviews may lead to periodic changes in test

content--an objective that must be reconciled with the goal of being able to compare student achievement over time.

· Work is needed on curriculum-referenced tests that can be used on a wider than local basis, especially for upper-level courses. This work will require careful research on the content of instruction, tests constructed with a common core of items, and alternative sections of tests to match curricular alternatives.

· Assessments should include an evaluation of the depth of a student's understanding of concepts, the ability to address nonroutine problems, and skills in the process of doing mathematics and science. Especially for science, it is desirable that a test involve some hands-on tasks.
