KEY NATIONAL EDUCATION INDICATORS

7 Concluding Thoughts

The indicators suggested for each of the educational stages reflected many perspectives regarding which aspects of education are most important to track and how they can best be measured. They do not represent any consensus about what ought, ultimately, to be included in a system of education indicators, but they do suggest the wide range of possibilities. The issues raised were the backdrop for a closing discussion in which five panelists, Emerson Elliott, Ronald Ferguson, Eugene García, Patricia Graham, and Marshall Smith, reflected on the task of selecting indicators for education. Each offered thoughts about the purposes of an indicator system and the criteria that should guide the selection, the structure or framework by which a set of education indicators might best be organized, their own candidates for the final list, and the issues they thought the presenters had overlooked.
PRIMARY PURPOSES FOR AN INDICATOR SYSTEM AND CRITERIA FOR SELECTION

The primary audience for an indicator system, in Emerson Elliott’s view, is the general public. Researchers and policy makers may find the indicators useful, he observed, but the data that could be used in this sort of indicator system are not detailed enough to support decision making in schools, colleges, or classrooms. He suggested that effective indicators rest on three elements: research about measures, knowledge of what the intended audience needs, and understanding of how to communicate effectively with that audience.29 In Elliott’s view, the essential topics for national education indicators are (1) individual outcomes and behaviors and (2) providers and public investments.
For Ronald Ferguson, the overarching purpose of an indicator system is to influence people’s priorities. Some knowledge and skills tend to be easy to measure, he observed, and those received the most attention at the workshop. But the more amorphous attitudes, values, goals, dispositions, and mindsets that are harder to measure are also important. He also argued that an indicator system should distinguish between in-school and out-of-school learning, noting his particular interest in the ways that young people use their discretion in choosing out-of-school learning. A third priority for him is a system that tracks not only the availability of positive learning experiences, but also exposure to such stressors as poverty and such social toxins as violence and
29Elliott credited Connie Citro, director of the Committee on National Statistics of the National Academy of Sciences, for this observation.
substance abuse. He emphasized that if a major purpose of having an indicator system is to influence learning outcomes, then measures of the experiences that help to produce those outcomes are as important as the outcomes themselves.

Eugene García identified the primary purpose of an indicator system as informing the American public about the educational well-being of all and revealing the ways learning is associated with other indices of well-being, such as health and economic status. Such information could be used to evaluate current policy and guide new policies and, by identifying gaps and trends, highlight domains that may need specific policy and practice attention. It could also be used for international comparisons. García highlighted the importance of making sure that learning outcome measures are fleshed out with rich qualitative and contextual information. Such information may be difficult and costly to obtain, he acknowledged, but in his view it would significantly enhance understanding of the nation’s educational well-being. He also suggested that longitudinal measures are more informative than cross-sectional assessments alone: longitudinal assessment allows for more robust analysis of individual and group progress, or lack thereof, over a specified period.30

30In a discussion of longitudinal indicators, a participant emphasized that they are best suited for tracking benchmarks such as graduation rates, where there is a stark difference between achieving the benchmark and not achieving it. When tracking the percentage of students who meet particular standards, this person argued, the result should be treated as a continuous measure, not as a longitudinal benchmark.

“We are much better at static measures than at ones likely to stimulate positive change,” suggested Patricia Graham. The easiest approach is to focus on educational institutions and their role in helping students learn, using measures of academic achievement. Assessments of academic achievement provide useful information, in her view, but she also asked “how raising test scores fits into the broader purposes of schooling,” for which the metrics are less obvious. Expectations of U.S. schools have changed markedly in the past century, she noted, and promoting academic achievement for all is actually “a new assignment for them.” A century ago, public schools were expected primarily to assimilate immigrant children into American society. In the ensuing decades other goals were added: fostering social adjustment and creativity; desegregating public institutions; and creating special programs for the poor, the disabled, the gifted, and English language learners, for example. It was only toward the end of the 20th century that the primary emphasis became high achievement for all students. The available metrics have been very useful, Graham noted, in revealing obstacles to meeting that ambitious goal—achievement gaps and inequities in the education system. But achievement tests, in her view, should not be the sole, or even the principal, indicators of learning. “We need more and better indicators of intellectual initiative, cooperative learning with others, and the ability to generate and assess new ideas and processes,” she suggested. Also important, she added, are indicators of what students have learned regarding “their role in this democracy of majority rule and minority rights, and their capacity to respect others, play fair, and to learn the traditional adult virtues of hard work, accuracy, and responsibility.” These attributes are much broader than simply getting and keeping a job, which is viewed as a key goal of schooling, but they are nevertheless consistent with that desirable outcome, she noted. “We need metrics that reflect the education we
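García’s contrast between cross-sectional snapshots and longitudinal tracking (and the related point that a graduation benchmark is binary while a percent-meeting-standard figure is continuous) can be made concrete with a minimal sketch. Everything below—the student IDs, years, scores, and proficiency cutoff—is invented for illustration and is not drawn from the workshop.

```python
# Illustrative sketch: the same assessment records viewed cross-sectionally
# (one year's proficiency rate) versus longitudinally (per-student growth).
# All data and the cutoff are hypothetical.

# (student_id, year, scale_score)
records = [
    ("s1", 2010, 210), ("s1", 2012, 240),
    ("s2", 2010, 245), ("s2", 2012, 250),
    ("s3", 2010, 190), ("s3", 2012, 235),
]
PROFICIENT = 230  # hypothetical cutoff

def cross_sectional_rate(records, year):
    """Share of students at or above the cutoff in a single year (continuous measure)."""
    scores = [s for _, y, s in records if y == year]
    return sum(s >= PROFICIENT for s in scores) / len(scores)

def longitudinal_growth(records, start, end):
    """Score change for each student observed in both years (cohort view)."""
    by_year = {}
    for sid, y, s in records:
        by_year.setdefault(sid, {})[y] = s
    return {sid: ys[end] - ys[start]
            for sid, ys in by_year.items() if start in ys and end in ys}

print(cross_sectional_rate(records, 2010))      # 1 of 3 students proficient in 2010
print(longitudinal_growth(records, 2010, 2012))
```

The snapshot for 2012 alone would show all three students proficient; only the longitudinal view reveals how much of that is growth by the same individuals, which is García’s point about robustness.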
really want our population to have, and highlight the conditions or procedures that are likely to [produce] that result,” she concluded.

Marshall Smith noted that the United States has a complex and loosely structured education system and that such systems are difficult to change through policy. The diffuse policy authority reduces coherence and predictability, he suggested. It is possible for new ideas to permeate such a system, but the knowledge base in education has often suffered from weak credibility. A respected indicator system, in his view, could provide useful leverage for innovation and improvement. He identified several specific goals for the indicator system:

- Each indicator should be made up of multiple measures or statistics, each of which is an important component. For example, academic growth for students from ages 6 to 18 could be one indicator and might include results from NAEP at 4th grade, PISA at age 15, and college admissions tests between ages 15 and 18.
- The set of indicators should reflect a vision of the future, should tell a story, and should be very easy to understand.
- The indicators should be flexible and able to change subtly over time as conditions change (in schools, the workplace, and technology of all sorts, for example). Statistical procedures could be used to maintain trends.
- Whenever possible, the indicators should be aligned with international indicators so that comparisons can be made.

INDICATOR SYSTEM STRUCTURE

The way in which the indicator system is ultimately structured will reflect the goals it is designed to serve and, as Elliott observed, those goals have not yet been set. The indicators that were used to monitor progress toward the National Education Goals established in 1990, in Elliott’s view, illustrate the importance of considering the goals carefully. Among those goals was that U.S. students would be first in the world in mathematics and science achievement by 2000, he noted, but “the effort lost steam as it became clear that action was not being taken to reach the goals.” Elliott noted that the workshop demonstrated that the presenters and participants each brought their own values to the question of what is most important to measure, and also that rapidly changing contexts will need to be taken into account. There are at least four models to consider, Elliott added: the committee’s framework; the European Lifelong Learning Indicators (ELLI) (see Chapter 6); the broad goals defined by the European Union (Economic Security, Social Cohesion, and Sustainability) (Bertelsmann Stiftung, 2008); and the four strands suggested by Marshall Smith:

1. Human outcomes—indicators of learning and doing;
2. Byproduct outcomes of learning, both for individuals and for society—research, innovation, the arts (music, dance, painting, etc.), literature, sports, tolerance for others, etc.;
3. Formal infrastructure for learning—quality and availability of, for example, preschools, schools, certification systems, and formal networks; and
4. Informal infrastructure for learning—quality and availability of supports within the family and neighborhood, civil society (social capital), structured learning on the job, and independent and collective use of technology.

García observed that the steering committee’s framework was structured around the current organization of formal education opportunities in the United States and that this may not be the best way to assess the overall well-being of lifelong learning processes. An alternative would be to use age as the organizing structure, so that indicators would be used to ask how well a particular age group is faring with respect to learning opportunities and outcomes in a variety of venues. In his view, this approach would fit more naturally with the way learning develops over time and would also align better with international indices that use age as the fundamental benchmark for comparisons.

ADDITIONAL CANDIDATES FOR A FINAL LIST

The five panelists identified a few ideas they believed had been overlooked by the presenters (see Table 7-1). Elliott listed specific opportunities as well as a few key issues. He noted that school and college data systems include longitudinal data that could be mined for a national indicator system, and also that there are examples of performance measures that could serve as useful models for U.S. indicators. He was concerned that three important issues—public school financing, equity, and the cost of higher education—were not well addressed, and also that the discussion did not pay sufficient heed to the importance of international comparisons. He also observed that the K-12 indicators did not address what happens to students after graduation from high school. Another gap was technology: though it was discussed at several points in the workshop, it did not show up in the indicators for K-12 or postsecondary education. Ferguson stressed the importance of making sure that data could be grouped by age, race, and socioeconomic status (as did many others throughout the discussion).
He also noted the importance of examining trends and disparities, and particularly patterns of relationships within and across topic or issue categories.

García stressed that the measures must be inclusive, reliable, valid, adaptable over time, and fiscally and practically feasible. It is also important, he added, that they have “face validity,” meaning that they should be simple enough to be understood even though they may be statistically complex and may capture complex phenomena.

Graham acknowledged the importance of existing measures of academic achievement but added that measures of the school conditions that support learning and achievement are also very important. It is also true that “school or college may not be the primary activity of youth,” she suggested. Educational institutions are “in deep competition with considerably more compelling aspects of youth culture, especially technological enhancements and amusements,” she observed. It will be important to understand those competing influences and how they can be utilized to enhance education, in her view. Graham also observed that economists have contributed much to the understanding of learning and schooling, but that anthropology, sociology, and other disciplines will be needed to fully explore new questions about what explains past successes and what will make improvement more likely.
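Ferguson’s call for data that can be grouped by age, race, and socioeconomic status so that disparities become visible can be sketched in a few lines. The records and group labels below are invented for illustration; a real system would disaggregate far richer data along several dimensions at once.

```python
# Illustrative sketch: computing a completion-rate indicator within each
# subgroup rather than only in aggregate, so disparities are visible.
# The records and subgroup labels are hypothetical.
from collections import defaultdict

# (subgroup, completed) pairs; the subgroup could be an age band, a racial
# category, or a socioeconomic-status category
records = [
    ("low-SES", True), ("low-SES", False), ("low-SES", True),
    ("high-SES", True), ("high-SES", True),
]

def rate_by_group(records):
    """Completion rate within each subgroup."""
    totals = defaultdict(lambda: [0, 0])  # subgroup -> [completed, observed]
    for group, completed in records:
        totals[group][0] += completed
        totals[group][1] += 1
    return {g: done / n for g, (done, n) in totals.items()}

print(rate_by_group(records))
```

The aggregate rate here (4 of 5) would mask the gap between the two groups; reporting the per-group rates, and tracking them over time, is what makes trends and disparities analyzable.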
García also was concerned that most existing measures focus on specific outputs of educational interventions and that greater attention is needed to the circumstances and contexts in which learning takes place. He emphasized the importance of attention to both language and immigration issues, noting that indicators of fluency in English and in other languages would be important not only for the preschool years, but also throughout the K-12 experience. He noted as well that measures for the early childhood years and the later years of the life cycle are significantly more limited than those available for K-12 and higher education.

In closing, Chris Hoenig reemphasized the importance of the effort. He noted that the rapid pace of innovation and technological advances is continually reshaping education, both nationally and globally. As a result, the selection of key national indicators must take into account current conditions as well as anticipated future conditions. He thanked the steering committee and workshop participants for a thoughtful exploration of the issues and for providing the critical first step toward accomplishing this goal.

TABLE 7-1 Additional Indicators Suggested by the Final Panel

Emerson Elliott
- Attendance, progression, and completion, for all levels of education
- Readiness for each subsequent level
- Classroom climate (preschool and K-12)
- Spending at all levels, but per-pupil spending is not sufficient: the metrics should capture state effort in relation to state GDP, distribution to individual schools, and distribution of benefits for college across income groups
- Placement or other indicator of outcomes after high school or after college
- Teacher evaluation and/or a measure of professional status and progress (perhaps associated with school climate, as part of a composite indicator)
- Learning: NAEP and international comparisons, with additional modifications to capture critical thinking and problem-solving skills; also include measures of knowing how to learn, if possible, and measures of practical application of learning (as in PISA)
- College remediation
- College access and individual student knowledge of access and ways to match the options with each student’s own interests
- College affordability and debt
- Preschool NAEP-like assessment
- College NAEP-like assessment
- Civic engagement measures, such as voting and volunteering in relation to education

Ronald Ferguson
- Learning outcomes (skill, knowledge, and orientations)
- Measures of skill and knowledge (basic skills, critical thinking, other categories); also include measures of mindsets, dispositions/values/goals, “ideas about possible selves”
- School-based learning experiences
- Measures of quantity and quality of key inputs (teaching quality, peer supports, parenting quality) and methods (e.g., curricula, resources such as technology)
- Out-of-school learning experiences, learning, and lifestyles or behaviors, such as leisure reading, civic engagement, media use for news, media for informal learning, social networking, consumption of the news, and habits of knowledge sharing, including access and home learning resources
- Contexts and opportunities (availability of resources)
- Exposure to stressors and accumulated social toxins, such as poverty

Eugene García
- Issues related to the diversity of the U.S. population: an index that captures demographic change and related assets and vulnerabilities
- Measures of educational performance, such as NAEP and longitudinal studies (e.g., ECLS-B, ECLS-K, and High School and Beyond)
- Qualitative measures that assess the opportunity to learn; context measures would address issues of quality that could be tied to outcome measures

Patricia Graham
- Measures of academic achievement, e.g., NAEP and state standardized tests
- Measures of school conditions, especially effective teaching
- The role of technology in young people’s lives

Marshall Smith
- Indicators of human outcomes: learning and doing
  - Ages 0-6: quality of parent and family support for learning, including physical health and school readiness
  - Ages 6-18: academic growth (assessments, attainment/graduation); participation in community; students’ belief that they have “learned how to learn” and enjoy it
  - Ages 18-35: attendance and graduation in tertiary education; participation in civil society (voting, networks, coaching)
  - Ages 25 and up: evidence of continued learning (formal, occupational, and informal) and of participation in civil society
- Indicators of nonacademic outcomes (byproducts of the system)
  - Ages 0-6: opportunity to participate in arts, sports, and group activities
  - Ages 6-18: opportunity to participate in arts, sports, and group activities; experience with research, evidence, and research practice; and opportunity to be innovative in supportive environments
  - Ages 18 and up: opportunity to create and use research and evidence in decision making, innovation, and everyday work; exposure to and participation in arts, literature, and group work
- Indicators for infrastructure of the formal education system
  - Research productivity of institutions
  - For institutions and national, state, and local systems:
    o quality, including degree of innovation, commitment to society, and fostering of learning
    o costs for each participant in the system (e.g., students, taxpayers, government), and measures of efficiency
    o availability and opportunity (including technology use to reach new students)
- Indicators for infrastructure of the nonformal education system and employment training
  - Ages 0-6: quality of family/community support and learning systems
  - Ages 6-18: opportunities for extra support in school work (after-school classes, summers, tutors); use of study groups and networks, peers, web-based tools, and other independent study aids
  - Ages 18 and up: opportunities for and utilization of employment training; engagement in nonformal learning networks, clubs, professional associations, community, and web-based tools
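Smith’s suggestion that each indicator be built from multiple component measures (e.g., an “academic growth, ages 6-18” indicator drawing on NAEP at 4th grade, PISA at age 15, and college admissions tests) can be sketched as a simple weighted combination. The component names, values, and equal default weighting below are all invented for illustration; real components sit on different scales and would need principled normalization before any such aggregation.

```python
# Hedged sketch of a composite indicator: one headline number built from
# several component measures, each assumed here to be pre-normalized to 0-1.
# Component names, values, and weights are hypothetical.

def composite(components, weights=None):
    """Weighted mean of normalized component measures."""
    if weights is None:
        weights = {name: 1.0 for name in components}  # equal weights by default
    total_w = sum(weights[name] for name in components)
    return sum(components[name] * weights[name] for name in components) / total_w

# hypothetical normalized components of an "academic growth, ages 6-18" indicator
growth_6_18 = {
    "naep_grade4": 0.62,
    "pisa_age15": 0.55,
    "admissions_15_18": 0.58,
}

print(round(composite(growth_6_18), 3))
```

The choice of weights is itself a policy statement: doubling the weight on one component (e.g., `composite(growth_6_18, {"naep_grade4": 2.0, "pisa_age15": 1.0, "admissions_15_18": 1.0})`) shifts the headline number, which is one reason Smith stressed that indicators should tell an easily understood story while remaining adjustable as conditions change.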