National Academies Press: OpenBook

Improving Indicators of the Quality of Science and Mathematics Education in Grades K-12 (1988)

Chapter: Appendix C: Summaries of Meetings with Representatives of State and Local Education Agencies

Suggested Citation:"Appendix C: Summaries of Meetings with Representatives of State and Local Education Agencies." National Research Council. 1988. Improving Indicators of the Quality of Science and Mathematics Education in Grades K-12. Washington, DC: The National Academies Press. doi: 10.17226/988.


Appendix C
Summaries of Meetings with Representatives of State and Local Education Agencies

SUMMARY OF MEETING WITH REPRESENTATIVES OF STATE EDUCATION AGENCIES
APRIL 16, 1986, San Francisco

The purpose of the meeting between representatives of state education agencies (see the list of participants below) and members of the committee was to provide an opportunity to discuss mutual interests concerning the assessment of the quality of science and mathematics education. The committee presented some preliminary ideas on six indicator areas and asked for reactions from the state representatives as well as discussion of additional concerns they wished to raise. Committee members summarized draft statements that had been circulated before the meeting on assessment of the quality of the curriculum, teacher effectiveness, student learning, investment of resources, student attitudes and motivation, and scientific literacy. Following each presentation, the state representatives commented on the feasibility and desirability of the suggested indicators and proposed other indicators that might be considered. The comments and discussion are summarized below under each indicator area.

Quality of the Curriculum

• A framework for assessing the quantity and quality of curriculum content in each subject area would be very useful and desirable at this time. The response to the construction of such frameworks would be positive on the part of those concerned with educational improvement, because more direction is needed on priorities in curriculum content. In that connection, it might be worthwhile to review curricular frameworks used in other nations, for example, West Germany, France, Japan, and Great Britain.

• The coherence of the curriculum across grade levels is important. The quality and quantity of subject matter to which a student is exposed should not be assessed within a grade level or course only, but over a reasonable period of schooling, e.g., the primary grades. In that way there could be some latitude regarding the sequencing of units; for example, a core topic might be taught in either third or fourth grade. The framework idea might lead to a useful "national grid" of science and mathematics subject matter that identifies key concepts and processes to be included in the curriculum without specifying their exact placement.

• It may be difficult to capture quality in science curricula through the framework concept as outlined by the committee, because different approaches and philosophies prevail in the teaching of science, often having to do with the sequencing of topics. But if the sequence or grade level for introducing a particular topic is not highly specified in the framework, teaching approach may not be an issue.

• It is critical to maintain the distinction between a "national" curriculum framework and a "federal" curriculum framework, that is, between a set of guidelines developed by one or more nationally recognized groups and a prescribed course of studies mandated by a central authority.
• A national framework could have an important function in making possible comparisons and evaluations of the content of various state assessment tests and commercial achievement tests in specific subjects.

• South Carolina has developed a science curriculum framework for grades 1 through 8 that may be of use to the committee as an example and for comparison with other frameworks. South Carolina would have found the product of a national effort, such as the current one by the committee, valuable when it was working on a state framework. California also has developed a science curriculum framework in conjunction with the new state science assessment test

for the eighth grade. New York is an example of a state with a science curriculum for grades K-12.

• An additional perspective on curriculum assessment could be offered by people who are external to the education system but who have certain expectations of students with respect to their science and mathematics education. Groups to be consulted might include employers, college-level scientists, and scientists in industry, all of whom are influential in the determination of the intended curriculum, i.e., what the schools should be held responsible for in science and mathematics.

Teacher Effectiveness

• Some measure of subject matter preparation should continue to be considered an indicator of teacher effectiveness. Agreement on specifics may be difficult, however, since no satisfactory determination may be possible at this time of optimal preparation for teaching a subject at a given grade level or teaching a particular course.

• Even if the relationship between subject matter preparation and effective teaching of a subject were better understood, there would still be problems with current teacher tests. Tests for elementary teachers lack science content altogether; typically, they are dominated by questions on general pedagogy. The low expectation for instruction in science at the elementary level may be a contributory factor, as may be the absence of any agreement as to what the science content of the elementary school curriculum should be even when science is being taught.

• A more general criticism of teacher testing is the extent to which coaching can be and has been used to improve test scores, thus decreasing a test's validity. One approach, used in South Carolina, is to disseminate test specifications that indicate the areas to be tested, but not to distribute or coach on sample test items.
• The impact of teacher tests on preservice and in-service education must be considered, analogous to the impact of student achievement tests on the school curriculum. The implication is not to do away with teacher testing, but to improve the tests so that they assess important rather than trivial knowledge and process skills, again analogous to the improvement needed in student tests. If there were national curriculum frameworks for science and mathematics, they could guide the content of the teacher tests as well as of student tests.

• How a teacher actually delivers the curriculum to the students importantly affects what subject-matter content they are likely to learn. Therefore, the quality of curriculum delivery needs to be assessed, and appropriate indicators need to be developed. Given the disparity in science and mathematics learning among different student groups, the indicators must be sensitive to variations in delivery according to the range of students in a classroom or a school.

• At present, the two methods used to assess curriculum delivery are classroom observation and "opportunity-to-learn" questionnaires administered to teachers and older students, as in the IEA and NAEP assessments. Although costly, observation should be included as a recommended method. The Tennessee assessment of teacher effectiveness for the state's career ladder program included three outside observers. An important benefit of the observations was that teachers were able to reflect on their behavior and techniques in the classroom. Items that differentiated outstanding teachers in Tennessee were the extent of planning, use of a variety rather than just one or two teaching strategies, and instruction in higher-order thinking skills. It is also important to observe the teaching of a range of students, not just the better students in science and mathematics.

• The notion of adding intellectual curiosity to the other two factors that make for teacher effectiveness (subject-matter knowledge and the ability to get knowledge across to students) is important. The difficulty of assessing this factor should not deter the committee from including it; rather, work on developing useful indicators of intellectual curiosity needs to be encouraged.

• Observation should include some higher-inference items, particularly to assess adequately the teaching of higher-order skills.
Many observation instruments concentrate on lower-inference items because observers can be trained more easily and the items yield higher reliability.

Assessment of Learning

• The provisional draft [an early version of Chapter 4] developed during the committee's workshop on learning assessment provides an exciting, forward-looking statement on cognitive processes and testing. It is useful at this time, when states are considering possibilities for computerized testing.

• Statewide tests can and do have a great impact on curriculum and teaching. The committee could provide very useful advice and models on how to measure higher-order skills through statewide tests.

• Matrix sampling is a possible approach to testing higher-order skills through new testing methods. However, some states have mandated individual testing of all students. There are two primary reasons for individual rather than matrix-sample testing: (1) students and parents want individual test scores for external uses, and (2) comparisons between schools are more difficult with matrix sampling unless a sufficient number of students is tested in each school. California uses matrix-sample testing and has obtained reliable school comparisons by testing 30 students per school. Florida, Tennessee, and Virginia have used matrix sampling of students in a regional study of eleventh-grade reading that will produce state comparisons.

• Matrix sampling can involve selection of different combinations of items for each student, and, if desired, all students can be tested. This approach increases the content covered and tested, a considerable advantage for assessing the quality of programs in a school. Matrix sampling places more pressure on the school staff and decreases reliance on student variables, rather than school variables, to explain success or failure on the tests.

• Most current achievement tests do not test what an individual student knows, since they sample only a small portion of the curriculum. Computerized methods of testing would allow much greater coverage of what a student knows and does not know and thus permit teaching to student deficiencies.

• Nevertheless, the highest priority for developing and using test information should be to assess the effectiveness of a school program or curriculum.
Although individualized programs for students are often discussed, it is unrealistic to place priority for the use of testing on individual student diagnosis and the design of individualized instruction.

• The item bank concept is difficult to put into effective practice. Access to the items is crucial, and that will entail a good deal of careful planning. A number of states, such as Oregon, Minnesota, and North Carolina, have item banks for science, and other states, including Florida, are considering item banks. Some of the current item banks are not well utilized. A national library of items, like the one the committee outlined, would provide a framework for classifying items that are compiled by states. Another item bank may not be needed, but a conceptual model for the use of items is needed.
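As a rough illustration of the matrix-sampling design discussed above, the sketch below assigns each student a small random "form" drawn from a larger item pool, so that the students in a school collectively cover far more content than any one student is tested on. The item and student identifiers, the 60-item pool, and the 12-item form length are hypothetical choices for illustration, not figures from the meeting.

```python
import random

def assign_matrix_forms(item_pool, students, items_per_student, seed=0):
    """Assign each student a random subset (a "form") of the item pool.

    No student answers the whole pool, but across students the school
    covers most or all of it -- the core idea of matrix sampling.
    """
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
    return {s: rng.sample(item_pool, items_per_student) for s in students}

# Hypothetical pool of 60 items and a school testing 30 students,
# each answering only 12 items.
items = [f"item_{i}" for i in range(60)]
students = [f"student_{i}" for i in range(30)]
forms = assign_matrix_forms(items, students, items_per_student=12)

# School-level content coverage: the union of all forms.
covered = set(item for form in forms.values() for item in form)
print(f"each student sees 12 items; school covers {len(covered)} of 60")
```

Aggregating results by item across forms then yields school- or program-level estimates, which is why matrix sampling suits program assessment better than individual diagnosis.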

• Models of good items that assess process and higher-order skills are also urgently needed. An enormous amount of scientific expertise is necessary in test development and validation. There is large potential for misinformation in poor item stems and distractors, and too many item reviewers are not expert in the areas of science for the items they are reviewing. The committee could serve an important role in developing an item library that concentrated on creating high-quality items and on models for use of the items.

• It is difficult to move away from such simple quantitative indicators as test scores or the "science dropout rate" toward qualitative indicators that would report more information. One view expressed was that multiple test scores would be better than one score. Another view was that it might be possible to construct a scale of science learning that could be compared with desired curriculum outcomes. Such scales do exist for reading and mathematics. If qualitative indicators are to be reported to state and local policy makers to give greater depth of information, a common "language" for qualitative indicators would need to be specified, i.e., consensus would need to be established on the meaning and interpretation of the words used to express the indicators.

• Experts who develop and recommend indicators to policy makers should have a clear idea of what is important to know and the purpose of the information. Parsimony with indicators is crucial. Much of the data currently collected by state education agencies is not used.

Use of Resources

• Indicators for resource use at the local level should focus on the availability of resources in the classroom and resource use from the teacher's perspective. It is too difficult to interpret such centrally collected measures as full-time-equivalent staff with respect to programmatic significance, i.e., resource investment in, say, physics or mathematics.
• States are quite aware of the decline in federal resources for science. For example, the NDEA grants (in the 1960s) were the last major federal funds for equipment and supplies in science. The waxing and waning of federal resources for science (and other programmatic areas) should be tracked.

• State agencies generally have not committed funds for resupplying equipment and materials for science, even though these are urgently needed in districts. Often, other school funding priorities

take precedence, such as raising teacher salaries. Since 85 percent of the typical school budget is allocated to staff salaries and benefits, there is little wiggle room in the budget. In any case, states prefer to let local school districts allocate funds by program, thus moving competition for funding to the local level.

• Recent changes in state graduation requirements in science and mathematics are having important impacts on local resources. Requirements that each school offer advanced science courses are being instituted in a number of districts and states; such courses are especially costly to teach and may draw resources (e.g., the best teachers) from other science instruction.

Student Attitudes and Motivation

• The committee's statement focused mainly on indicators of scientific attitudes possessed by students. Another approach is to assess student attitudes toward science classes, science teachers, or the scientific disciplines themselves. For example, the NAEP 1982 survey revealed that only 35 percent of students think their teachers like science. Student images of science and scientists may be important factors in motivating students to learn science and in career decisions.

• Some states include items on student attitudes in their assessments. For example, the California eighth-grade science test includes 30 such items; initial results were made available in August 1986. Further information on what states are doing in this area should be available from the UCLA Center on Evaluation, which has reviewed state assessment instruments, including attitude items.

General Science Literacy

• The committee's statement on scientific habits of mind bears some similarity to its statement on student attitudes and motivation, particularly with respect to learning to think about natural phenomena as scientists do.
• The committee's perspective on science literacy is an excellent general statement of the role and importance of science education; it provides a good rationale for science preparation for all citizens, not just the preparation of scientists. The committee should consider introducing its report with this statement.

Participants: State Education Agencies

Dale Carlson, Director, California Assessment Program, California Department of Education
David Donovan, Assistant Superintendent for Technical Assistance, Michigan Department of Education
Janice Earle, Maryland State Department of Education
Gordon Ensign, Supervisor of Testing and Evaluation, Washington Superintendent of Public Instruction
Pascal D. Forgione, Jr., Office of Research and Evaluation, Connecticut Department of Education
Steven Koffler, Bureau of Cognitive Skills, New Jersey Department of Education
Windsor Lott, Director, Division of Education Testing, New York State Department of Education
George Malo, Tennessee Department of Education
Wayne Neuburger, Director, Assessment and Evaluation, Oregon State Department of Education
Paul Prowar, Office of Research and Evaluation, Connecticut Department of Education
Edward Roeber, Michigan Department of Education
Paul Sandifer, South Carolina State Department of Education
Ramsay Selden, Director, State Education Assessment Center, Council of Chief State School Officers
Janice Smith, Assessment, Evaluation, and Testing, Florida Department of Education
Zack Taylor, Science Unit, California Department of Education
Suzanne Triplett, State Education Assessment Center, Council of Chief State School Officers
Marvin Veselka, Assistant Commissioner of Assessment, Texas Education Agency

SUMMARY OF MEETING WITH REPRESENTATIVES OF LOCAL SCHOOL DISTRICTS
JUNE 6, 1986, Washington, D.C.

The purpose of the meeting between representatives of local school districts (see the list of participants below) and members of the committee was to provide an opportunity to discuss mutual interests concerning the assessment of the quality of science and mathematics education. The committee presented some preliminary ideas on six indicator areas and asked for reactions from the local representatives as well as discussion of additional concerns they wished to raise. Committee members summarized draft statements that had been circulated before the meeting on assessment of teacher effectiveness, the quality of the curriculum, student learning, investment of resources, scientific literacy, and student attitudes and motivation. Following each presentation, the local representatives commented on the feasibility and desirability of the suggested indicators and proposed other indicators that might be considered. The comments and discussion are summarized below under each indicator area.

Teacher Effectiveness

Indicators of teacher effectiveness need to be tied to clearly stated assumptions about the goals of science and mathematics education, e.g.: student achievement test scores need to be raised; the number of college students majoring in scientific fields needs to be increased; or the overall science literacy of all 18-year-olds needs to be raised. These goals are not necessarily mutually exclusive, but they may require different teacher competencies.

Possible Indicators

• Teacher effectiveness is not a unitary variable that can be measured along a single dimension. It needs to be assessed in the context of specific subject matter, at particular grade levels, and with respect to groups of students with different levels of ability and coming from different socioeconomic backgrounds.
• Related to the first comment is the need to appraise the effectiveness of a teacher in organizing and presenting instruction to meet the needs of students. Twenty years ago, when students were

differently motivated, it may have been appropriate to emphasize the subject-matter knowledge of teachers as a prime requisite for teaching. The needs and backgrounds of many students are more varied today; teachers must have empathy and understanding as well as subject-matter knowledge in order to teach most students. Variations in teacher effectiveness, however, should not be explained away by the characteristics of students, i.e., the background and ability level of students should not be used as an excuse for ineffectiveness of the teacher or the school.

• Also related to the first comment, any indicator of intellectual curiosity should use different measures for elementary and secondary teachers, given the different responsibilities and expectations for teachers at each of these levels. Measures might also differ for teachers of advanced placement versus basic skills classes, although having different standards for teachers may be a subtle form of failing to hold teachers responsible for low student performance.

• The attitudinal or motivational aspect of teacher effectiveness should be discussed by the committee as a potential indicator, analogous to the consideration of indicators of student attitude and motivation.

• Some local school districts prefer an outcomes-based model for measuring teacher effectiveness, as opposed to assessing teacher characteristics (e.g., intellectual curiosity) or using process measures. An outcomes-based model provides for assessment of the contribution of a teacher to student learning and educational attainment over time, while taking into account the effects of student background and school and teacher characteristics. Several kinds of outcome measures, in addition to test scores, can be included, for example, graduation and dropout rates, the proportion of students going to college, and various honors and awards earned by students. A potential difficulty in implementing this kind of model
is the high degree of student mobility between schools, districts, and states.

Use of Indicators

• Any recommendation to test teachers for subject-matter knowledge should specify that test results not be used for evaluating individual teachers, either for entering or advancing in a teaching job. Items asking for demographic information on teachers should be excluded from subject knowledge tests to ensure that the results are used only to assess the overall quality of the teaching staff of

a district. If demographic information is collected, some means of ensuring the anonymity of responses should be provided. Within these considerations, a test of teachers' minimum level of competence in their subject would be a useful indicator for local school districts.

• A standard for minimum competence in a teacher's subject should be considered a threshold level of competence. Testing of teachers' subject knowledge probably needs to extend slightly beyond the level at which they teach. That is, teachers need to know what a student will be learning at the next level and how instruction at the two levels is related.

• Recommendations for indicators of teacher effectiveness should be accompanied by recommendations on the appropriate level of analysis of the indicators, i.e., individual teacher, school, district, state, or nation. This is important for the design of specific measures and the use of indicators.

• The interest of teachers in indicators, as reported by one of the LEA representatives, relates mostly to aspects of their job that they perceive need improvement, e.g., the time available for professional development and planning of instruction.

• Regarding the committee's work, conflicting views were expressed on the usefulness for constructing indicators of the existing research on teacher effectiveness and school effectiveness. One view was that the committee's report should take account of the main findings coming from this research, even if the indicators recommended by the committee are not necessarily based on the findings. Many districts have designed programs to improve instruction based on school effectiveness research. A second view was that much of the research on school effectiveness and teacher effectiveness is flawed methodologically, and thus the committee need not worry about citing the findings.
Quality of Curriculum

Analogous to teacher effectiveness indicators, recommendations on assessing curriculum quality also need to be tied to assumptions about educational goals, i.e., the expected performance level of students in science and mathematics. Curricular frameworks cannot be constructed nor core concepts specified without knowing what level of knowledge is expected of students: minimum competency, science literacy, or college preparation. If that is its intent, the report

should state clearly that the committee's goal is to assess science and mathematics curricula, and learning, for all levels of students.

Possible Indicators

• Frameworks for assessing the quality of curriculum are very important and urgently needed; they would be especially useful if they connect "strands" of curriculum objectives between the grades. A framework or set of core concepts needs to be fairly specific to provide a means of assessing differences between programs and schools. Local districts would like to be able to provide evaluative information of this kind for their curriculum specialists. Given the current state of curriculum development, frameworks are more applicable to the mathematics curriculum than the science curriculum.

• In assessing the quality of the curriculum, factors in addition to the framework or set of core concepts should be considered, including community needs and interests. Frameworks must allow for local variations in the curriculum.

• A potential indicator of the quality of the curriculum in high schools is "holding power," the extent to which students continue to enroll in courses within a subject area.

• According to some LEA representatives, the proposed method of measuring the "taught curriculum" through self-reports by teachers will not produce a valid indicator of the curriculum that is actually taught. Teachers will tend to overreport what they cover, especially if they think their responses will be used to evaluate their performance. It was pointed out that self-report measures have been used in previous studies, e.g., the IEA Mathematics Assessment. In that case, when coverage was being tied to student performance, it might have been in the teachers' interest to underreport what topics had been covered. In either case, teacher self-reports may not yield accurate estimates of what is taught in classrooms. Self-reports could be corroborated by random auditing procedures.

Assessment of Learning
• Several points were raised concerning the feasibility of the recommended national library of test items and how it might be implemented. Quality control of the items is a major issue that will need to be resolved. Also, the library should have a method of tracking the use and effectiveness of items, possibly by monitoring which

items are requested and asking LEAs to return information on their experience with items, including statistical data on scores.

• For science, hands-on assessment items should be included in the materials in the library. NAEP is currently testing out some hands-on items.

• Many larger districts have developed their own criterion-referenced tests because sufficiently comprehensive item banks to allow choices that match curricula were not available. Some districts are using items that were developed for the high schools in Dade County, Florida. Locally developed tests have the advantage of giving teachers a feeling of ownership and involvement in the curriculum and testing process.

• Data obtained from locally developed criterion-referenced tests could be used more extensively for diagnostic purposes with students, comparisons of schools and classes, and analyses of grades that are assigned to students. Local districts and schools need to make better use of existing tests and data for assessment of learning, while development of improved tests and assessment methods continues.

Resources

• Indicators of resources for science and mathematics should be based on actual use, for example, the number of students in a school using the science laboratory and how it is being used. The mere presence of laboratories, or even their availability to the teacher, is not really important; their value is in the extent and quality of use with students.

• A much more important resource issue than laboratories, facilities, or supplies for science and mathematics is the use of resources for teacher training and teacher development, i.e., preparing teachers to improve their teaching by more effective use of such resources as laboratories.

• Information on resources for science and mathematics could be very valuable, but a major question is how the data should be collected. One option suggested was to use the accreditation process to identify availability and use of resources.
However, accreditation is already burdensome for schools and accrediting committees. Moreover, accreditation tends to be based on subjective reviews and assessment rather than on collection of quantitative data. The method and organization selected for collecting information on resources is likely to have considerable effects on how the information is used.

• The committee should not ignore the level of federal investments in recommendations for indicators of resources. For example, current initiatives to encourage retraining of teachers for shortage areas have implications for federal policies, and additional funds will be needed.

Scientific Literacy

• A question was raised concerning the possibility of using a composite measure of scientific literacy rather than several different measures, as suggested in the committee's statement. However, a composite measure is likely to mask differences on the several dimensions of scientific literacy discussed in the statement, and the interpretation of separate measures matching these dimensions would be more straightforward and valid.

• The committee's draft statement calls for "flexible" indicators. A better description of the desired attribute might be to call for indicators that are "sensitive to change."

• The committee should consider defining scientific literacy, including aspects of technological literacy, from the perspective of employers. Opinions differ on what constitutes effective education for current and prospective job markets: one view emphasizes knowledge and understanding of technology; another view holds that the basics of science and mathematics are more important, given the rapid changes in technology (e.g., the shift from transistors to microprocessors).

• No matter how technological literacy is defined, it is hardly taught at all at present. Hence, increasing the technological literacy of students would involve high costs for developing appropriate curricula and, even more so, the needed skills and knowledge of teachers.

• Assessment of scientific literacy should include students still in school as well as adults in order to measure change over time, i.e., what people retain of what they have learned during their school years and what new concepts, information, and skills they have acquired.
Student Attitudes and Motivation

• The committee considers student attitudes to be an outcome of instruction in science. However, student attitudes toward science can be strongly affected by the attitudes of peers and adults. In

particular, attitudes can be shaped by teachers at a very early point in education. This reinforces the suggestion made above on assessing teacher attitudes and motivation as well as those of students. Intrinsic interest in and motivation toward science are needed by teachers for good science teaching, just as they are needed by students for science learning. Student attitudes and motivation might be analyzed in relation to teacher attitudes and motivation.

• It is very important to learn more about the affective component of science and mathematics education. This is particularly important for local school districts at the present time, as requirements for the number of science and mathematics courses are being raised in the face of demonstrated low student interest in these subjects. Better information on attitudes and motivation may yield clues as to why most high school students avoid science and mathematics courses if not forced to take them.

General Suggestions

• The term precollege is too narrow, given the goals of science education assumed in the report, i.e., improving science and mathematics education for all students. Precollege implies interest only in college-bound students.

• Indicators should provide the capacity for assessing the long-term impact of education on such goals as increasing scientific literacy or increasing interest in science and mathematics, not just immediate results, for example, outcome measures that reflect the goal of raising test scores.

• The scaling of indicators is important. Measures need to be expressed in terms of distribution or range, not simply averages or means.

• The committee should consider recommending that more research on indicators be conducted involving large school districts, because many have large, accessible data bases for carrying out research.
• Recommendations for new indicators are likely to require different kinds of evaluation and research on elementary and secondary education than in the past. The development of indicators useful at state and local levels may well affect the current roles and practices of local and state agencies in collecting, analyzing, and using data.

Participants: Local School Districts

Alan Barson, Curriculum Division, Mathematics and Science, School District of Philadelphia
Milton Binns, The Council of Great City Schools
Frances M. Culpepper, Science Coordinator, Atlanta Public Schools
Stephen H. Davidoff, Research and Evaluation, School District of Philadelphia
Steven Frankel, Department of Educational Accountability, Montgomery County Public Schools, Maryland
Joy Frechtling, Department of Educational Accountability, Montgomery County Public Schools, Maryland
LaMarian Hayes-Wallace, Office of Research and Evaluation, Atlanta Public Schools
Paul Hovsepian, Divisional Director, Mathematics and Science, Detroit Public Schools
Sam Husk, Executive Director, The Council of Great City Schools
Joseph P. Linscomb, Office of Associate Superintendent of Instruction, Los Angeles Unified School District
Joy Odom, Coordinator, Secondary Mathematics, Montgomery County Public Schools, Maryland
Joyce Pinkston, Coordinator of Curriculum Development, Memphis City Schools
Harold Pratt, Science Coordinator, Jefferson County Public Schools, Colorado
Kathy Pruett, Director, Research Services, Memphis City Schools
Stuart C. Rankin, Deputy Superintendent, Educational Services, Detroit Public Schools
Thomas Rowan, Coordinator, Elementary Mathematics, Montgomery County Public Schools, Maryland
Nicholas Stayrook, Director, Evaluation Services Department, Seattle Public Schools
Floraline I. Stevens, Director, Research and Evaluation, Los Angeles Unified School District
Gary Thompson, Department of Evaluation Services, Columbus Public Schools
Ray Turner, Assistant Superintendent for Educational Accountability, Dade County Public Schools, Florida
Robert Wright, Secondary Science Curriculum Specialist, Seattle Public Schools
