
Early Childhood Assessment: Why, What, and How (2008)

Chapter: 10 Thinking Systematically

Suggested Citation:"10 Thinking Systematically." National Research Council. 2008. Early Childhood Assessment: Why, What, and How. Washington, DC: The National Academies Press. doi: 10.17226/12446.


10 Thinking Systematically

In this volume we have discussed the dimensions of assessment, including its purposes, the domains to be assessed, and guidelines for selecting, implementing, and using information from assessments. Beyond this, however, one cannot make use of assessments optimally without thinking of them as part of a larger system. Assessments are used in the service of higher level goals—ensuring the well-being of children and their families, ensuring that societal resources are deployed productively, distributing scarce educational or medical resources equitably, facilitating the relevance of educational outcomes to economic challenges, making informed decisions about contexts for the growth and development of children, and so on. Assessments by themselves cannot achieve these higher goals, although they are a crucial part of a larger system designed to address them. Only when the entire system is considered can reasonable decisions about assessment be made.

This chapter argues that early childhood assessment needs to be viewed not as an isolated process, but as integrated in a system that includes a clearly articulated higher level goal, such as optimal growth, development, and learning for all children; that defines strategies for achieving the goal, such as adequate funding, excellent teaching practices, and well-designed educational environments; that recognizes the other elements of infrastructure instrumental to achieving the goal, such as professional development and mechanisms for monitoring quality in the educational environment; and that selects assessment instruments and procedures that fit with the other elements in service of the goal.

We begin by noting the multiple state and federal structures in which early childhood assessments are being implemented. These structures have emerged from different sources with different funding streams (e.g., federally funded Head Start, state-funded prekindergarten, foundation-funded intervention programs) and rarely display complete convergence of performance standards, criteria, goals, or program monitoring procedures. Thus, referring to "a larger system of early care and education" is slightly deceptive, or perhaps aspirational. Furthermore, even the well-established programs in the "system" may lack key components—for example, they may assess child outcomes but not relate those outcomes to measures of the environment, or they may not have a mechanism in place for sharing child outcome data in helpful ways with caregivers and teachers.

We use recent National Research Council reports, state experiences with the No Child Left Behind Act, and the recent work of the Pew Foundation–sponsored National Early Childhood Accountability Task Force—a national effort focused on accountability in early childhood—as a basis for articulating the components needed in order for early childhood assessment to be part of a fully integrated system. We also provide some examples of progress toward this goal at the state level. Although we did not find any examples of fully integrated systems, in which services are provided by a single source and the assessment infrastructure is fully aligned and developed, the three states we describe are moving toward integrating early childhood assessment in a well-articulated system.

WHAT DO WE MEAN BY A SYSTEM?
The idea of a system comes up often in education discussions and analyses—there are education systems, instructional systems, assessment systems, professional development systems—but it is not always clear what the word actually means. Systems have a number of important features, which are enumerated in Systems for State Science Assessment (National Research Council, 2006). In particular, they are organized around specific goals; they are made up of subsystems, each of which serves its own purposes; the subsystems must work well both autonomously and in harmony with one another for the larger system to work well; and a missing or poorly operating subsystem may cause a system to function poorly or not at all. In our use of the term with reference to early childhood assessment, the committee intends

• that assessments be seen as a part or subsystem of a larger system of early childhood care and education, which addresses the multiple aspects of child development and influences discussed in this volume;
• that selection of assessments be intimately linked to goals defined by that larger system;
• that procedures for sharing information about and using information from assessments be considered as part of the process of selecting and administering assessments; and
• that different parts of the assessment system itself (standards, constructs, measures, indicators) work together.

Systems need to have well-developed feedback loops to prevent over- or undercompensation for changes in a single part. Feedback loops occur whenever an output of some subsystem connects back to one of its inputs. For example, a fundamental feedback loop occurs in the classroom when a teacher identifies problems that children are having with an idea or skill and adjusts his or her instructional techniques and the learning environment in response. When this causes the children to learn the idea or skill successfully, one would say that the feedback loop has worked effectively.
Implementation of a similar feedback loop at the level of the program takes child performance as the input for identifying classrooms in which teachers need additional assistance in implementing instructional activities. These two subsystems—the individual- and the program-level feedback from child performance to teacher supports—function well as part of a larger system if the same or consistent information is used in both loops. However, if, for example, the teacher is responding to child performance so as to enhance creative problem solving, whereas the institution is encouraging teachers to focus on children's rote memorization capacity, then the subsystems conflict and do not constitute a well-functioning system.

In a well-designed program, the assessment subsystem is part of a larger system of early childhood care and education comprised of multiple interacting subsystems. These other systems include the early learning standards, which describe what young children should know and be able to do at the end of the program; the curriculum, which describes the experiences and activities that children will have; and the teaching practices, which describe the conditions under which learning should take place, including interactions among the teachers and children as well as the provisioning and organization of the physical environment (National Research Council, 2006). The relationships among these four subsystems are illustrated in Figure 10-1, adapted from the "curriculum, instruction, assessment (CIA) triangle" commonly cited in the educational assessment community. Each of these subsystems is also affected by other forces, for example, laws intended to influence what children are expected to learn, professional development practices, and teacher preparation policies influenced by professional organizations and accrediting agencies.

[Footnote: This section and the following one on infrastructure draw heavily on the content of the National Research Council's 2006 report, Systems for State Science Assessment.]
[Footnote: Although assessment is here defined as a subsystem of a larger system, throughout this chapter we refer to the "assessment system" for the sake of simplicity, except when the distinction is important.]
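The coherence requirement for the two feedback loops can be made concrete with a small sketch. The fragment below is purely illustrative: the construct, scores, and cut score are invented and are not part of any actual assessment instrument. Its point is that the teacher-level and program-level loops consume the same child-performance data, which is what keeps their responses consistent rather than conflicting.

```python
# Illustrative sketch only: the construct, scores, and threshold are invented.
# Both feedback loops read the SAME child-performance data, so the
# teacher-level and program-level responses cannot contradict each other.

# Hypothetical observation scores (0-4) on one construct for one classroom.
classroom_scores = {"Ana": 1, "Ben": 3, "Chloe": 1, "Dev": 2}

SUPPORT_THRESHOLD = 2  # hypothetical cut score for "needs more support"

def teacher_loop(scores):
    """Individual-level loop: flag children who need adjusted instruction."""
    return [child for child, s in scores.items() if s < SUPPORT_THRESHOLD]

def program_loop(scores, classroom_fraction=0.5):
    """Program-level loop: flag the classroom for teacher assistance when a
    large share of children score below the same threshold."""
    below = sum(1 for s in scores.values() if s < SUPPORT_THRESHOLD)
    return below / len(scores) >= classroom_fraction

children_needing_support = teacher_loop(classroom_scores)
classroom_flagged = program_loop(classroom_scores)
```

If the two loops instead used different data or inconsistent thresholds, the sketch would reproduce the conflict described above: a teacher adjusting toward one goal while the program pushes toward another.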
We argue in this chapter that all these components must be thought of as part of a larger system, and that they must be designed so as to be coherent with one another, as well as with the policy and education system they are a part of, and with the goals for child development that the entire system is meant to be promoting. We reframe these arguments as a conclusion to this chapter.

[FIGURE 10-1 A schematic relationship (the "CIA Triangle") among early learning standards, curriculum, instructional practices, and assessment.]

Infrastructure for an Assessment System

An early childhood assessment subsystem should be part of a larger system with a strong infrastructure that is designed to provide high-quality early care and education. The infrastructure is the foundation on which the assessment subsystem rests and is critical to its smooth and effective functioning (National Early Childhood Accountability Task Force, 2007). The infrastructure encompasses several components that together form the system:

1. Standards: A comprehensive, well-articulated set of standards for both program quality and children's learning that are aligned to one another and that define the constructs focused on in assessment as well as the performance levels identified as acceptable.
2. Assessments: Multiple approaches to documenting child performance and reviewing program quality that are of high quality and connect to one another in well-defined ways, from which strategic selection can be made depending on specific purposes.
3. Reporting: A procedure, defined on the basis of the standards and the assessments, designed to maintain an integrated, user-accessible database of assessment results, provide for quality assurance and integrity of data, and generate reports for the varied user audiences and purposes.
4. Professional development: Ongoing opportunities provided to those at all levels (practitioners, assessment administrators, program directors, policy makers) to understand the standards and the assessments and to learn to use the data and data reports with integrity for their own purposes.
5. Opportunity to learn: Procedures for ensuring that the environments in which children are spending time offer high-quality support for development and learning, as well as safety, enjoyment, and affectively positive relationships. This is crucial when decisions about children or programs are based on assessment outcomes.
6. Inclusion: Methods and procedures for ensuring that all children served by the program will be assessed fairly, regardless of their language, culture, or disabilities, and with tools that provide the most useful information for fostering their development and learning.
7. Resources: Assurance that the resources needed to ensure the development and implementation of the system components are available or will be recruited.
8. Monitoring and evaluation: Procedures for continuously monitoring the system itself to ensure that it is operating effectively and that all elements are working together to serve the interests of the children.

This infrastructure must be in place to create and sustain an assessment subsystem within a larger system of early childhood care and education. Ensuring the adequacy of each of these components raises some critical challenges. A challenge to the adoption of systems-level thinking about early childhood care and education, and thus about early childhood assessment, is the absence, under current U.S. policies, of a unified structure for early care and education.
The current variety of separate programs segregated by setting, by agency, and by funding streams, with their numerous challenges to delivering uniformly high-quality early care and education services, also serves as a barrier to developing a unified system of assessment. While the suggestion that these many barriers to an integrated system must be vaulted may seem unrealistic, we argue that a vision of a well-integrated, coherent system is needed to guide the development of policy for young children. We expand on the importance of each component of a well-organized system below.

Standards

The most fundamental aspect of the assessment system is the set of explicit goals for children's development and learning around which the larger system is organized, thus providing the basis for coherence among the various elements. In most educational settings, these are referred to as "standards," but in early childhood education sometimes other terms, such as "guidelines" or "foundations," have been used. Whatever they are named, these standards direct the design of curriculum, the choice of teaching practices, and the priorities of teachers in setting instructional goals, planning activities and experiences, and organizing the environment. They are the starting point for developing assessments, judging performance levels, and rating children's and the program's growth and performance.

Standards are also the framework for reporting children's performance to educators and the public and for focusing program improvement efforts. Note that, although these standards are to be applied to children's performance, they can be used as one input in establishing accountability for teachers, centers, and states (National Research Council, 2006). Thus, while some may see holding teachers, early care and education settings, and states to these standards for children's performance as potentially punitive, others argue that they constitute a defense of the right of children to a high-quality and fair early childhood environment.
Note that when applying the same logic to the programs in which children are to be educated, an equivalent set of statements can be made regarding program standards.

For example, consider the No Child Left Behind Act (NCLB), which requires states to have reading, mathematics, and science standards for K-12 education that must be of "high quality," although the act says relatively little about what characterizes standards of high quality. While we are emphatically not recommending that the NCLB regime be extended to early childhood education, it is important to understand the NCLB framework, as it is the most common reference point on standards in the United States, and states are being asked by the federal government to align their preschool standards with their K-12 standards. Under the act, the word "standards" refers both to content standards and to achievement standards. The law requires states to develop challenging academic standards of both types, and a federal guidance document describes them as follows (U.S. Department of Education, 2004):

• Academic content standards must specify what all children are expected to know and be able to do; contain coherent and rigorous content; and encourage the teaching of advanced skills.
• Academic achievement standards must be aligned with the state's academic content standards. For each content area, a state's academic achievement standards must include at least two levels of achievement (proficient and advanced) that reflect mastery of the material in the state's academic content standards, and a third level of achievement (basic) to provide information about the progress of lower-achieving children toward mastering the proficient and advanced levels.

Note that achievement standards are often also referred to as performance standards. The NCLB-driven standards apply to children in grades 3-12 and link directly to the explicitly defined academic content areas that are also assessed in determining adequate yearly progress for schools. It would be inappropriate to borrow this model unchanged and apply it to early childhood settings, in which explicit instruction in well-defined academic content areas is not characteristic of excellent care and education.
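The division of labor between content and achievement standards can be shown schematically. The sketch below is hypothetical: NCLB does not prescribe numeric cut scores, and the numbers here are invented solely to show how achievement standards partition performance on a content standard into the required levels.

```python
# Hypothetical illustration: real states set their own cut scores through
# standard-setting processes; these numbers are invented for the sketch.
CUT_SCORES = {"basic": 40, "proficient": 60, "advanced": 80}

def achievement_level(scale_score):
    """Map a scale score on one content area to an achievement level.

    NCLB requires at least proficient and advanced (mastery) plus basic
    (progress toward mastery); "below basic" is added here only so every
    score maps somewhere in this sketch."""
    if scale_score >= CUT_SCORES["advanced"]:
        return "advanced"
    if scale_score >= CUT_SCORES["proficient"]:
        return "proficient"
    if scale_score >= CUT_SCORES["basic"]:
        return "basic"
    return "below basic"
```

The content standard says what is to be known; the achievement standard, modeled here as cut scores, says how much performance counts as mastery.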
The Council of Chief State School Officers defines common standards and assessment-related terms in language relevant to the early childhood community (http://www.ccsso.org/projects/SCASS/projects/early_childhood_education_assessment_consortium/publications_and_products/2838.cfm). It defines standards as "widely accepted statements of expectations for children's learning or the quality of schools and other programs." Of critical importance in this definition is the inclusion of program standards on equal footing with expectations for children's learning.

The report Systems for State Science Assessment (National Research Council, 2006) examines the role of standards in certain educational assessments and recommends that they be designed with a list of specific qualities in mind: standards should be clear, detailed, and complete; be reasonable in scope; be correct in their academic and scientific foundations; have a clear conceptual framework; be based on sound models of learning; and describe performance expectations and proficiency levels. State standards that have been developed for K-12 education do not meet these requirements as a whole, although some come closer than others. Recent analyses of states' early childhood standards also suggest some misunderstanding of the difference between content and performance (Neuman and Roskos, 2005; Scott-Little, Kagan, and Frelow, 2003a). Appendix C presents a brief description of the current status of state standards for early childhood education, and includes some discussion of the efforts to align early childhood with K-12 standards.

Standards should be arranged and detailed in ways that clearly identify what children need to know and be able to do and how their ideas and skills will develop over time. Learning progressions (also called "learning trajectories") and learning performances are two useful approaches to arranging and detailing standards so as to guide curriculum, teaching practices, and assessment.
Learning progressions are descriptions of successively more sophisticated ways of thinking and behaving that tend to follow one another as children mature and learn: they lay out in text and through examples what it means to move toward more mature understanding and performance.

A useful example of the ideas of learning progressions and learning performances in the preschool years is California's Desired Results Developmental Profiles-Revised (DRDP-R) and its learning progression for interpersonal skills. This learning progression has been viewed as being composed of six areas, for each of which a measure (or observational guide) has been constructed:

1. expressions of empathy,
2. building cooperative relationships with adults,
3. developing friendships,
4. building cooperative play with other children,
5. conflict negotiation, and
6. awareness of diversity in self and others.

The learning progression itself is summarized in the DRDP-R Preschool instrument (California Department of Education, 2005). Taking the interpersonal skills example further, we can examine one of the measures to see what the learning progression looks like. For example, consider the measure "building cooperative play with other children." For the chosen measure, the progression, expressed as four successive levels, is as follows (starting from the lowest):

(a) interacts with other children side-by-side as they play with similar materials,
(b) engages with another child or children in play involving a common idea or purpose,
(c) shows preference for particular playmates but plays cooperatively with a variety of children, and
(d) leads or participates in planning cooperative play with other children.

This measure in the learning progression is brought to life by examples of learning performances that could illustrate the different levels. Examples for the lowest level (a in the list above) are:

(i) plays blocks side-by-side with other children,
(ii) hands another child a toy that he or she is looking for, and
(iii) hands a bucket to a child sitting next to him or her in the sandbox.

Note that the teachers are encouraged to develop their own examples, so that these three do not become canonical. To illustrate changes to the second level in this measure, examples for the next level (b in the list) are as follows:

(i) plays with blocks with another child,
(ii) plays in sand to build a castle with several other children, and
(iii) joins another child to help look for a lost toy.

More examples of learning performances are shown in Figure 10-2, which is a copy of the scoring guide for the measure "building cooperative play with other children."

Learning progressions should be developed around the organizing principles of child development, such as self-regulation. Such organizing principles—which are sometimes referred to as the "big ideas" of a curriculum—are the coherent foundation for the concepts, theories, principles, and explanatory schemes for child development (National Research Council, 2006). Organizing standards around these big ideas represents a fundamental shift from the more traditional organizational structure used in K-12 standards, in which standards are grouped under discrete topic headings. For example, instead of listing "knowledge of 10 letters" as a desirable outcome for a 4-year-old, one might list letter recognition and phonological awareness as examples of performances under a heading such as "emergent understanding of literacy forms." A likely positive outcome of reorganizing standards from many discrete topics to a few big ideas is a shift from breadth to depth of coverage, from long lists of goals to a relatively small set of foundational values, principles, and concepts. If those values, principles, and concepts are the target of instruction, they can develop naturally and be extended over time.

Specifying learning performances is a technique for elaborating on content standards by describing what children should be able to do if they have achieved a standard.
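A measure from a learning progression such as the one described above is, structurally, an ordered set of level descriptions. The sketch below represents it that way: the level descriptions are quoted from the DRDP-R measure, but the data structure and the rating helper are illustrative inventions, not part of the instrument.

```python
# Sketch of one measure from a learning progression. The level descriptions
# come from the DRDP-R measure "building cooperative play with other
# children"; the structure and helper function are illustrative only.
MEASURE = {
    "name": "building cooperative play with other children",
    "levels": [  # ordered lowest to highest
        "interacts with other children side-by-side as they play "
        "with similar materials",
        "engages with another child or children in play involving "
        "a common idea or purpose",
        "shows preference for particular playmates but plays "
        "cooperatively with a variety of children",
        "leads or participates in planning cooperative play "
        "with other children",
    ],
}

def highest_mastered(observed_levels):
    """Return the index (0-3) of the highest level the child has shown
    mastery of, or None if no valid level has been observed yet."""
    mastered = [i for i in observed_levels if 0 <= i < len(MEASURE["levels"])]
    return max(mastered) if mastered else None
```

Because the levels are ordered, a rater marks the highest level mastered, which is exactly what the DRDP-R scoring guide asks observers to do.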
Some examples of learning performances: children should be able to interact

[Figure 10-2 appears here in the original: a broadside reproduction of one page of the Desired Results Developmental Profile-Revised (DRDP-R) for preschool. The page shows Measure 6, "Building cooperative play with other children," under Desired Result 1 ("Children are personally and socially competent") and Indicator SOC ("Preschoolers demonstrate effective social and interpersonal skills"). The measure is defined as "Child interacts with other children through play that becomes increasingly cooperative and oriented towards a shared purpose." The rater marks the highest developmental level the child has mastered: Exploring (interacts with other children side-by-side as they play with similar materials), Developing (engages with another child in play involving a common idea or purpose), Building (shows preference for particular playmates, but plays cooperatively with a variety of other children), or Integrating (leads or participates in planning cooperative play with other children). Each level is illustrated with examples; the rater then records evidence for the rating, marks whether the child is emerging to the next level, or explains why the measure could not be rated.]

FIGURE 10-2  An excerpt from the Desired Results Developmental Profile-Revised. Reprinted by permission from the California Department of Education, CDE Press, 1430 N Street, Suite 3207, Sacramento, CA 95814.

THINKING SYSTEMATICALLY 313

with their peers in a positive way, express their wishes, follow common teacher instructions, carry out basic personal hygiene, and use different media for art. A clear understanding of what performance demonstrates that a child has attained a standard allows assessment developers to design activities or tasks to elicit those performances, and it provides teachers with explicit goals for instruction. This approach helps build coherence between what is taught and what is assessed (National Research Council, 2006).

Assessments

Assessment, which includes everything from systematic child observations to nationally standardized tests, is an organized process for gathering information about child performance and about early care and education environments. Assessments of all kinds provide information vital to the early childhood education system: they inform decisions about content and learning experiences, hold preschool programs accountable for meeting development and learning goals, and support monitoring of program effectiveness. Assessment is also a way for teachers, school administrators, program directors, and state and national education policy and decision makers to operationalize the goals for children's development and learning articulated in the standards.

Although assessment can serve all of these purposes, no single assessment can. To generate valid inferences, every assessment has to be designed expressly to serve its functions. An assessment designed to provide information about a child's problems with a single idea or skill, in order to guide a teacher in helping that child learn, would be constructed differently from an assessment designed to provide data to policy makers for evaluating the effectiveness of a statewide program.
The former requires that children's understanding of the selected idea or skill be tested rigorously and completely; the latter requires that the assessment sample all of the topics the program is designed to teach. Results from either of these assessments would not be valid for the purposes of the other, although the two may share certain characteristics as part of a common system of assessment.

314 EARLY CHILDHOOD ASSESSMENT

Reporting

The reporting of assessment results is frequently taken for granted, but deliberation on this step is essential both in the design of assessment systems and for the sound use of assessment-based information. In fact, decisions about the scope and targets of reporting should be made before assessment design or selection proper begins and, most importantly, before the assessment data themselves are collected (National Research Council, 2006).

Information about children's progress is useful at all tiers of the system, although different tiers need varying frequencies of assessment and varying degrees of detail. Parents, teachers, early childhood program administrators, policy makers, and the public need comprehensible and timely feedback about what is taking place in the classroom (Wainer, 1997). Furthermore, from a systems perspective, many kinds of information need to be accessible, but not all stakeholders need the same types of information. Thus, very early in the process of system design, questions need to be asked about how various types of information will be accessed and reported to different stakeholders and how that reporting process can support valid interpretations.

Individual standards or clusters of standards can define the scope of reporting, as can learning progressions if they have been developed and made clear to the relevant audiences. Reports can compare one child's performance, or the performance of a group, with other groups or with established norms. They can also describe the extent to which children have met established criteria for performance (the current No Child Left Behind, or NCLB, option). If reports include descriptions of the skills, knowledge, and abilities that were targeted by the tasks in the assessment, users will be better able to interpret the links between the results and goals for children's learning.
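Comparisons of a child's score with norms or cut scores rest on estimates, so reports should also convey each score's precision, a point this chapter returns to in discussing measurement error. As a purely illustrative sketch (the scale score, standard error of measurement, and cut score below are invented, not drawn from any real instrument), a roughly 95 percent error band for a reported score might be computed as follows:

```python
# Illustrative only: turn a scale score and its standard error of
# measurement (SEM) into a roughly 95% error band for a report.
# All numbers are hypothetical, not from any real assessment.

def score_band(score: float, sem: float, z: float = 1.96) -> tuple[float, float]:
    """Return a (low, high) band of roughly 95% confidence around a score."""
    return (score - z * sem, score + z * sem)

score, sem = 104.0, 5.0            # hypothetical scale score and SEM
low, high = score_band(score, sem)
print(f"Reported score: {score} (95% band: {low:.1f} to {high:.1f})")

# If a cut score falls inside the band, the report should flag the
# real possibility of misclassification.
cut = 100.0
if low < cut < high:
    print("Caution: the cut score lies within the error band.")
```

A report built this way pairs every score with its band, one of the conveyance options (standard error bands, graphic displays, misclassification probabilities) that measurement standards recommend.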
It is important to recognize that many states lack the resources to design assessments that are perfectly aligned with their standards. They may have to resort to selecting existing assessments and cross-walking them to standards. Although this may mean a period of only partial alignment, the exercise creates useful opportunities to refine both standards and assessment portfolios.

The reporting of assessment outcomes can take on many

appearances—from graphical displays to descriptive text, and from numbers to a detailed analysis of what the numbers mean. In some states, NCLB assessment results are reported on a standard-by-standard basis; others provide information keyed to learning objectives for a specific class. In some states in Australia, where learning continua serve as the basis for assessment at all levels of the system, progress maps are used to describe child achievement. Figure 10-3 is a progress map from a Government of Western Australia website (http://www.curriculum.wa.edu.au/ProgressMaps/english.htm).

[Figure 10-3 appears here in the original. A progress map grid (bitmapped in the source) is surrounded by five explanatory boxes: Outcomes (the outcome from the Curriculum Framework); Foundation Descriptions (intended for students for whom development of, or beyond, this achievement may be a long-term goal); Level Descriptions (for each level of achievement, Foundation and Levels 1 to 8, a description of student achievement at that level); Aspects (the aspects that comprise student achievement of the outcome, listed beneath each outcome); and Aspect Descriptions (for each level, descriptions of achievement for each of the aspects at that level).]

FIGURE 10-3  Progress map and descriptive information.
SOURCE: http://www.curriculum.wa.edu.au/ProgressMaps/english.htm.

During the early childhood years, assessment results should be conveyed to parents in accessible

ways; this may occur during periodic individual conferences or by sending written reports.

It seems clear that interpretive material should always be included in reports. Interpretive material is accompanying text that explains, in a way that is appropriate to the technical knowledge of the intended audience, the relevance and importance of the results. According to Systems for State Science Assessment, interpretive material should

• specify the purposes of the assessment.
• describe the skills, knowledge, and abilities being assessed.
• provide sample assessment items and activities and sample child responses keyed to performance levels.
• provide a description of the performance levels.
• describe the skills, knowledge, and abilities that a child or group of children either have achieved or have not yet achieved.
• describe how the results should be interpreted and used, with a focus on ways to improve children's performance.
• describe and ward off common misinterpretations of results.
• indicate the precision of scores or classification levels.

Samples of children's work are a useful way of illustrating their accomplishments. When reports include such samples, users can gain further insight into what it means for a child to be classified at a particular achievement level. Samples can also be used to illustrate the ways in which a child or group of children should improve (and, of course, following Figure 10-1, all of these should relate back to the early learning standards).

Background information—for example, about the characteristics of education and opportunities afforded to children, even such information as children's motivation—can further enhance the usefulness of assessment results. The Internet offers the possibility of making information available to stakeholders on a scale that might be impractical for paper-based reports.
Information can be presented with guidance about its use and interpretation, and, if the presentation is interactive, users can focus on the areas of greatest relevance to them. Any such facility must be designed with effective safeguards to protect the confidentiality of information and the privacy of the children being assessed, as well as to ensure that only authorized users have access to information.

Users of results need to recognize the degree of uncertainty, or measurement error, associated with all assessment results. Measurement error is widely misunderstood by audiences of assessment data, and conveying it clearly is particularly important when a variety of measures are used in a system. Measurement error can be conveyed using standard error bands, a graphic display, or statements regarding the probability of misclassification (American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, 1999). However it is done, each time a score is reported, it should be accompanied by an indication of its margin of error or another indicator of the measure's degree of precision. This information should be supported by text that makes clear how the precision of the scores should be factored into inferences based on the results.

Although there has been a great deal of research on the design of technically sound assessments, there is little research on ways of reporting results that promote accurate and meaningful interpretations (Goodman and Hambleton, 2003; Hambleton and Slater, 1997; Jaeger, 1998). Research has indicated that users' preference for a data display and their comprehension of it do not always coincide (Wainer, Hambleton, and Meara, 1999). Different reporting formats should be evaluated with usability studies to determine which are best understood and most likely to be used accurately by typical audiences.

Professional Development

Professional development recognizes that all adults need ongoing opportunities to improve their skills and competencies as they carry out their roles and responsibilities.
Recognizing the particular challenges facing the early childhood workforce, educators have designed many different kinds of professional development opportunities, most of them focused on the higher-level goals of improving instruction and curriculum. The aim of professional development as related to assessment is to create

consistency across the various practitioners working with young children, in a program or in a state, in their understanding of children's development and learning and in their expectations and goals for children's accomplishments.

Professional development usually links informal training with formal education, seeks to improve the quality of training content through a training approval process, provides incentives (including compensation) for training, and offers training passports or career registries that chronicle the cumulative training and education individuals receive (e.g., the National Registry Alliance at http://www.registryalliance.org). According to Kagan, Tarrant, and Berliner (2005), the early childhood community of practice has articulated 10 elements of high-quality professional development in a systems approach: core knowledge; career path; professional development delivery mechanism; quality approval and assurance system; qualifications and credentials; incentives for professional development; access and outreach; financing; governance; and evaluation. Irrespective of the particular components one espouses, all elements of the professional development system must work together and reinforce each other.

Professional development is a crucial support for all forms of early childhood assessment. Successful implementation demands orientation and ongoing training of a host of contributors across the elements of early learning and program standards, assessment administration, and management of databases. Teachers and program managers require education and support to become capable and adept at understanding and using the varieties of reports and analyses of child assessment data. Early care and education programs, like school districts, require individuals with higher levels of expertise in assessment, data management, and data analysis than are currently widely available in the labor market.
More broadly, each audience and consumer group can benefit from some form of support to enhance their assessment literacy as they strive to comprehend and interpret the implications of child assessments, program assessments, and other forms of data. Moreover, an assessment system should provide for ongoing professional development opportunities to equip managers and practitioners to improve the quality of their services, implement

best-practice strategies, and help children progress to higher levels of accomplishment in relation to the standards. And as feedback reports roll out to different levels and units of the system, it is incumbent on system administrators to provide these same opportunities, resources, and supports for managers and practitioners.

Opportunities for Development and Learning

An assessment of children's well-being cannot be understood without knowing the circumstances in which they reside, the opportunities afforded for development across the assessed domains, and the interaction of the individual child with those opportunities (Pianta, 2003). Relevant to any assessment of development and learning in an early childhood assessment system, therefore, are program quality indicators that capture uniform information across programs despite their different funding, sponsorship, and regulatory standards. Although not currently implemented in most states, program assessments at various levels (the facility, the staffing, the social and intellectual features) can provide data that pertain to all programs and could serve as a vast repository of information in a systems approach. Moreover, linking the collection of program quality information to child-level assessment information would support more appropriate interpretation and analysis of those assessments.

This vision for an assessment systems approach requires attention to the entire range of children's opportunities for development and learning. Participation in program quality reviews is one means to accomplish this for early childhood centers and providers. In addition, articulated linkages between quality levels, program standards, and development and learning standards are necessary. In this approach, currently disparate systems of program standards would be connected through a single comparable set of quality rating levels.
It would then be possible to link the opportunities to develop and learn to an assessment that is more targeted at the child level. Certainly, efforts should be made to simplify and consolidate separate systems of program monitoring and licensing reviews to eliminate duplicative assessments, without sacrificing the capacity necessary to certify local program

compliance with applicable legislative mandates and regulatory requirements.

States and programs need to continually examine and update the scope and quality of the criteria and assessment tools used to determine key elements of program quality. This is particularly important at a time when early learning and program quality standards require attention to growing populations of children with disabilities as well as of children and families from language and cultural minorities. Ensuring the appropriate assessment of the quality of learning environments, instructional practices, and learning opportunities for the full range of children being served is crucial, and as these populations grow and shift in character (e.g., increased numbers of children identified as on the autism spectrum, new waves of immigration from parts of the world that did not historically send emigrants to North America), adaptations in the learning environments and instructional approaches may also be needed.

In short, an ongoing and linked system of appropriate assessments of development and early learning opportunities provides a central stream of assessment information on the quality of program services and supportive management practices, crucial for a number of reporting formats for different audiences and uses. It serves as a linchpin in the infrastructure for a system of early childhood assessment.

Inclusion

In this report, we have articulated the challenges of early childhood assessment across the full range of developmental levels and emphases, with populations that are culturally and linguistically diverse, and with those characterized by various aspects of disability. A solid system of early childhood assessment is inclusive of all children receiving services.
As we have said, the assessments and the system must be concerned that:

• children's cultural and developmental variations are respected;
• the full range of developmental challenges—physical, social, emotional, linguistic, and cognitive—is embraced; and

• children's learning and development are not compromised for the sake of assessment.

Meeting these challenges requires a wide range of tools, and it requires individuals at all levels of the system to recognize when standard tools need to be adapted or replaced with more appropriate ones. The natural variability in children's performance on assessments widens as the diversity of the learners increases. Developmental, situational, experiential, economic, cultural, linguistic, and measurement factors may differ across cultural and language groups (Espinosa and Lopez, 2007; García, 2005). This has implications for selecting the types of assessment strategies to use as well as for the conditions under which those strategies should be implemented. Despite the inherent challenges and limitations associated with the assessment of young children, early childhood professionals agree that, if conducted properly, a good assessment can play a constructive role at various levels (Meisels, 2006) and that the challenges in using existing assessment tools with subgroups of children do not justify excluding those children from the assessment system.

Resources

Early childhood programs use human, intellectual, and financial resources to address in some manner each of the elements described above. Clearly, existing early childhood systems benefit from the prior investment of resources in (a) articulating learning and program standards; (b) selecting, procuring, and developing assessments; (c) training individuals to administer and interpret assessment data; and (d) devoting time to the administration and analysis of those assessments. Some programs also manage to find the resources to provide professional development around assessment and to design or implement effective and inclusive early education opportunities.
However, these resources of time, money, and effort are distributed unevenly and are not integrated in a systems approach (Bruner et al., 2004). Bringing these strands together systematically requires that resources be directed appropriately and distributed strategically over the various demands in the system. A systems approach must therefore include

investment in the infrastructure necessary to provide timely, useful, high-quality assessment data. On the financial side, policy makers at all levels should anticipate and be prepared to support budget requests to cover the costs of enhanced child and program assessment efforts, data management, and professional development. With a focus on state-level accountability, the National Early Childhood Accountability Task Force (2007) estimated that 2 to 5 percent of all state program funding would be needed to provide such an infrastructure.

Acting systematically particularly requires identifying opportunities to invest resources in improving the technical quality of assessments and data systems, including the validity and reliability of tools, the capacity for inclusion and appropriate assessment of special populations, and the adequacy of quality assurance safeguards and supports (Espinosa, 2008). Similarly, resources are necessary for exploring opportunities to improve the coherence of standards, assessments, and other accountability elements across state and federal programs, as well as for gathering information about the multiple sets of standards, assessments, monitoring/licensing reviews, and reporting requirements imposed on programs with multiple funding streams. Resources are also required to ensure consistency in defining and measuring program quality—the opportunities for development and learning—including child care licensing, state pre-K program standards, Head Start program performance standards, and federal legislative mandates and regulations (Mitchell, 2005). When resources are severely limited, difficult decisions about prioritization will be necessary.

Monitoring and Evaluation

Any system will need monitoring and evaluation to maintain good functioning.
We outline here the role these important functions play in a systems approach: ensuring that the system is coherent, clearly communicates valued standards for teaching and learning, and provides accurate data for decision making (National Research Council, 2006).

An assessment system must, above all, provide sound and

useful information. Users expect the information to be valid, which is the term, discussed in Chapter 7, used by measurement experts for a quintessential feature of any assessment: the extent to which an assessment's results support meaningful inferences for certain intended purposes.

Collecting relevant data and carrying out appropriate validity studies for the specific types of decisions that are typical in a given assessment system are imperative for justifying the continuation of that system, and any significant change in the operation of the system should restart the process of data collection and validity review.

We list below some of the specific challenges of evaluating and monitoring each element of the assessment system. Discussing each aspect of the evaluation and monitoring system is beyond the scope of this chapter; see Systems for State Science Assessment (National Research Council, 2006) for a more comprehensive account. Some of the salient issues are

• alignment of assessment frameworks and specifications with standards.
• field testing of assessment tasks and tests, including item analyses and investigations of evidence of score and inter-rater reliability, fairness, quality of scaling, and validity of scores.
• alignment of assessment tools with standards.
• maintenance of the alignment and quality of the assessment tools over time.
• monitoring the success of the reporting system.
• monitoring the effects of the system, including investigations of whether it builds the capacity of staff to enable children to reach standards, builds the capacity of teachers to be effective assessors, influences the way resources are allocated to ensure that children will achieve standards, supports high-quality teaching aligned with standards, and supports equity in children's access to quality early childhood education (Baker et al., 2002).
• examining the feasibility of the system as a whole, including the burden on teachers, administrators, and children.
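Several of the bullets above, such as field testing and inter-rater reliability, imply quantitative checks. As one hedged illustration, inter-rater reliability for a rubric-based measure is often summarized with a chance-corrected agreement statistic such as Cohen's kappa; the sketch below is a generic computation, and the rating lists are invented rather than drawn from any particular instrument:

```python
# Generic sketch of Cohen's kappa: chance-corrected agreement between two
# raters who classified the same children into rubric levels.
# The rating lists below are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance = sum(freq_a[lab] * freq_b[lab] for lab in freq_a | freq_b) / (n * n)
    return (observed - chance) / (1 - chance)

a = ["exploring", "developing", "building", "building", "integrating", "developing"]
b = ["exploring", "developing", "developing", "building", "integrating", "building"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

With these invented ratings the raters agree on 4 of 6 children and kappa comes out near 0.54, a value often read as only moderate agreement; a field test producing results in that neighborhood would typically prompt additional rater training before the tool is used operationally.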

The process described here may go beyond the resources available in many programs. In particular, some programs may need to rely on selecting existing assessment tools and reporting strategies rather than developing new ones. Nonetheless, we describe here an ideal toward which programs should be moving.

THE CURRENT LANDSCAPE OF EARLY CHILDHOOD SYSTEMS

An analysis of a systems approach for early childhood assessment starts with the somewhat utopian view presented in the previous section, but it also requires careful review of the current terrain: How are current early childhood assessment efforts linked to standards, to learning opportunities, or to both? The early childhood landscape reveals multiple forms and targets of service and assessment, varied sources of standards and mandates, numerous ways of reporting and using data, and different approaches to linking consequences with patterns of performance by children and programs (Gilliam and Zigler, 2004); in other words, it is at this moment very far from constituting a single system. The National Early Childhood Accountability Task Force (2007) concluded that early childhood agencies are implementing a great variety of child and program assessments.

Table 10-1 displays nine different forms of child and program assessments: four forms of assessment used to document the quality of early childhood programs, four forms of assessment of young children, and one form of assessment that gathers information on both program quality and children's learning. Each form carries its own distinctive purposes, its own procedures for reporting to different audiences, and its own specific ways of using assessment data. Taken together, these multiple assessments are generating many different types of data on children and programs. They also require substantial time and effort from local practitioners and program administrators (National Early Childhood Accountability Task Force, 2007).
Beyond drawing attention to the large number of different forms of assessment, the Accountability Task Force report notes that current assessment models, with the single exception of program evaluation studies, separate reports about child outcomes

TABLE 10-1  Current Forms of Early Childhood Assessments

Program Assessments
• Quality rating systems. Population assessed: providers seeking recognition for varied levels of quality. Uses of data: consumer information on quality status; higher reimbursement rates for higher quality; program improvement.
• Program accreditation. Population assessed: providers seeking recognition as above a threshold of quality. Uses of data: consumer information on quality status; program improvement.
• Program monitoring. Population assessed: providers receiving state/federal program funding. Uses of data: program improvement; funding decisions.
• Program licensing. Population assessed: all providers serving young children. Uses of data: determine compliance with health and safety standards.

Child Assessments
• Kindergarten readiness assessment. Population assessed: all children at kindergarten entry. Uses of data: report to the public; planning early childhood investments.
• State/federal pre-K child assessments. Population assessed: children enrolled in a state or federal program. Uses of data: reporting to funding sources.
• Assessment for instruction. Population assessed: all children. Uses of data: planning curriculum; informing parents.
• Developmental screening. Population assessed: all children. Uses of data: referral to assess eligibility for special education.

Child + Program Assessments
• Program evaluations. Population assessed: representative samples of children and local programs. Uses of data: report to legislatures and the public on program quality, outcomes, and impacts; informs program improvement and appropriations decisions.

SOURCE: National Early Childhood Accountability Task Force (2007).

from reports on program quality. This means that information about the quality of a program's services is rarely integrated with information about progress and outcomes for the children served in that program; conversely, data on children's learning are rarely juxtaposed with information about the quality of the services, teaching, and learning opportunities provided to those children.

This chapter summarizes bold goals for early childhood assessment systems that transcend most contemporary practice in supporting both accountability and children's learning and development. Experience with the design requirements of effective assessment systems based on standards is still developing. Even in the K-12 system, which has a longer history of assessment and accountability, the methods for designing and guaranteeing alignment of assessments to standards and to learning opportunities are still evolving, with only a limited amount of research guidance. The research base on current theories of learning that should guide the development of assessments is also evolving (but see National Research Council, 2006). Thus, while current accountability practice is based on the premise that continuous cycles of assessment and improvement are key to helping all learners reach high standards, the means of making that goal a reality are still underspecified. Because very young children are at even greater risk than older ones of negative consequences from the misuse of assessment, great care must be taken not to impose the incomplete understandings of the K-12 system on this vulnerable population (National Research Council and Institute of Medicine, 2000).

Recent years have witnessed significant investments at the state and federal levels in early childhood programming. Concomitantly, state and federal program offices are managing separate and varied approaches to standards and assessments for the growing populations of children they serve.
Table 10-2 highlights the different standards and assessments established by four major funding sources for early childhood services: child care, Head Start, state pre-K, and early childhood special education. These standards include frameworks of learning goals for young children and standards for programs. The table also provides information on the number of states that are currently implementing various types of standards and assessments. This table highlights the fact that the nation's approach to

TABLE 10-2  Standards and Assessments for Young Children by Funding Source

Standards for children's learning
• Child care: early learning guidelines (49 states).
• Head Start: Head Start Child Outcomes Framework (federal).
• State pre-K: early learning guidelines (49 states).
• Early childhood special education: 3 functional goals (federal).

Child assessments
• Child care: no current requirements.
• Head Start: National Reporting System* (federal).
• State pre-K: pre-K assessments (12 states); kindergarten assessments (16 states).
• Early childhood special education: states report the percent of children in 5 categories on 3 goals.

*The National Reporting System was discontinued after this table was published.
SOURCE: National Early Childhood Accountability Task Force (2007).

early childhood public policy and management entails multiple systems of assessment requirements and mechanisms. Each connected set of standards and assessments generates different information on the characteristics and performance of publicly funded early childhood services. Many local provider agencies receive funding from multiple state and federal sources and therefore are required to manage their programs to meet several different forms of standards for program quality; implement reporting or assessment procedures to respond to the demands of each funding source; and orient their curricula, teaching, and learning strategies to several overlapping frameworks of learning goals for children.

Early childhood assessment efforts in "systems" include a mix of long-standing and newly emerging strategies (Scott-Little, Kagan, and Frelow, 2003b). For example, two major clusters of new initiatives are state and federal efforts to articulate frameworks of learning goals for young children and efforts to develop, organize, and manage for varied purposes new large-scale assessments of young children. Frameworks of learning goals for state-initiated

early learning guidelines, federal efforts in Head Start, and early childhood special education have all been generated in the past 8 years. During the same time period, assessment and reporting efforts have been launched by states collecting information on children participating in state pre-K programs or entering kindergarten, by the Head Start Bureau, and by the Office of Special Education Programs.

These newer child-focused standards and assessments complement long-standing policies defining standards, assessments, and monitoring systems geared to aspects of program quality, program inputs, and management practices. Federal and state program offices as well as local provider agencies are thus currently engaged for the first time in explaining and interpreting child outcome standards and the potential uses and misuses of newly expanded child assessment data sets. These federal, state, and local managers have extensive experience and greater shared understanding of how program quality standards are applied in the context of various forms of licensing and monitoring reviews and enforcement decisions. Assessment strategies related to program quality standards have longer track records, a greater accumulation of data, and support systems that have been implemented and fine-tuned over the course of many years of research. Child-focused assessment systems are, in contrast, still in diapers.

In summary, an overview of current childhood assessment efforts reveals an array of different forms of child and program assessments, multiple sources of policy mandates in the areas of learning and quality standards, and a series of systems operating in parallel, based largely on the structures of state and federal programs or funding streams.
Nonetheless, some states are working to confront these challenges and to develop coherent systems for early childhood care and education, supported by assessment systems and focused on promoting the development of all the children in the state.

STATE EFFORTS

We briefly summarize the efforts of three states that are attempting to put systems together, documenting the progress

they have made and the challenges they have encountered. California, Nebraska, and New Jersey have been chosen because they have focused in recent years on developing systematic approaches to early childhood education and assessment. We certainly do not mean to suggest that these three states constitute exemplars or models, although each does display some strengths (and some weaknesses), and all have made efforts to think systemically about early childhood. These brief portraits illustrate the general points made in this chapter.

California

The California Department of Education (CDE) has revised its process-oriented, compliance-based approach to evaluating the child development services it provides to focus on the results desired from the child care and development system. A strength of the new approach is its compatibility with CDE’s accountability system for elementary and secondary education. Desired Results for Children and Families (DRCF) is a system by which educators can document the progress made by children and families in achieving desired results and by which managers can retrieve information to help practitioners improve child care and development services (California Department of Education, 2003).

A desired result is defined as a condition of well-being for children and families (e.g., children are personally and socially competent). Desired results reflect the positive effects of the child development system on the development and functioning of children and on the self-sufficiency and functioning of families. The desired results system has several goals:

• Identify the measures that demonstrate the achievement of desired results across the development areas for children from birth to age 13 in child care and development programs.
• Use the measures for monitoring children’s progress in programs.
• Provide information that reflects the contributions made to child development by each of the various types of CDE-funded child development programs.

• Hold programs accountable to program standards that support the achievement of desired results and are used to measure program quality.
• Provide a data collection mechanism for evaluating the quality of individual child development programs.
• Create a base of information on the relationships between processes and results that can be used to target technical assistance to improve practice in all child development programs.

At the state level, educators use the desired results system to identify successes and areas for improvement so that CDE can provide support and technical assistance to increase program quality. At the program level, practitioners use the desired results system to determine the extent to which children and families are achieving the desired results, so that quality improvement activities may be effectively targeted to directly benefit program participants. The desired results system encourages differences in the structure and objectives of individual child development programs. It is culturally sensitive and linguistically responsive to the diverse populations of children and families served.

Including Children with Disabilities

The desired results system is also being coordinated with a concurrent project, the Desired Results: Access for Children with Disabilities Project (DR Access, http://www.draccess.org/index.html). The DR Access project is funded by the CDE Special Education Division and coordinates with the DRCF system in two ways. First, DR Access staff members worked with CDE staff members and CDE’s contractors during the development of the desired results system to make the Desired Results Developmental Profile as inclusive and appropriate as possible for assessing the progress of young children with disabilities.
Second, DR Access staff members have also developed a system of adaptations and guidelines for the Desired Results Developmental Profile that allows practitioners to assess children with disabilities in an appropriate manner within the structure of the desired results system. Through these two approaches, DR Access staff members

ensured that the desired results system was responsive to the needs of young children with disabilities and was applicable to all settings in which they and their families were served. The vision held by the contributors to desired results and DR Access was that, through collaboration, a continuity of outcomes would be achieved for all children in CDE programs.

Components of the System

The desired results system has six basic components: desired results, indicators, themes, measures, criteria for success, and measurement tools.

1. Desired results: The six desired results, to which all CDE-funded child care and development programs are expected to contribute, are that children are personally and socially competent, are effective learners, show physical and motor competence, and are safe and healthy; and that families support their children’s learning and development and achieve their goals. These desired results encompass the four developmental domains—cognitive, socioemotional, language, and physical development.
2. Indicators: An indicator defines a desired result more specifically so that it can be measured. For example, an indicator of the desired result “children are personally and socially competent” is that “children show self-awareness and a positive self-concept.” Desired results are generally better measured by using multiple indicators; no single indicator gives full information on all aspects of achievement.
3. Themes: A theme describes the aspect of development that is being measured for each indicator (e.g., self-awareness: dependence and interdependence, understanding that one’s self is a separate being with an identity of its own and with connectedness to others).
4. Measures: A measure quantifies achievement of a particular indicator and developmental theme (e.g., a preschooler can communicate easily with familiar adults).

5. Criteria: The criteria for success define the acceptable level of achievement for each indicator (e.g., English language learners who entered the program with no comprehension of English now participate in read-alouds by repeating key words).
6. Measurement tools: A measurement tool is the actual instrument or procedure used to capture or track information on indicators and standards of achievement (e.g., the Desired Results Developmental Profile).

Professional Development

The training and implementation phase of desired results for center-based programs and family child care home networks is being carried out in a series of regional training sessions for local program administrators. Assisted by the California Institute on Human Services, CDE is providing comprehensive training designed to facilitate implementation of the desired results system in programs at the local level and to build the capacity of local programs to train staff members who work directly with children. Participation in the training is by invitation only, and sites are selected one year before they are due for a Coordinated Compliance Review or Contract Monitoring Review.

Nebraska

Results Matter (http://ectc.nde.ne.gov/special_projects/results_matter/results_matter.htm) is designed to improve programs and child and family outcomes for all children in Nebraska from birth to age 5, whether they are served through school districts, the Early Development Network (Part C of the Individuals with Disabilities Education Act), newly implemented infant and toddler programs funded through the Early Childhood Endowment, or community partners. The system grew out of earlier efforts to monitor and evaluate state-funded preschool programs. Its broader application came as a result of recent federal requirements for reporting outcomes for children with disabilities.
The system employs both program quality assessment and child outcome assessment to accomplish several purposes: improve experiences, learning, development, and lives of young children

(birth to age 5) and their families; inform program practices; demonstrate program effectiveness; guide the development of local and state policies and procedures; and provide data to demonstrate results.

The system is administered through the Nebraska Department of Education. Major partners include the state’s Early Childhood Training Center, Health and Human Services, the Munroe-Meyer Institute at the University of Nebraska Medical Center, and multicounty educational service units. The system operates with the advice of the Results Matter Child Measurement Task Force.

Child Assessment

Child assessment tools were selected based on whether they employ ongoing observation of children engaged in real activities, with people they know, in natural settings; reflect evidence-based practices; engage families and primary care providers as active participants; integrate information gathered across settings; are individualized to address each child’s unique ways of learning; inform decisions about day-to-day learning opportunities for children; and reflect the belief that development and learning are rooted in culture and supported by the family.

The selected tools also reflect optimal congruence with Nebraska’s Early Learning Guidelines (Birth to Three and Three to Five; http://ectc.nde.ne.gov/ELG/elg.htm) and are congruent with the program standards found in Rule 11, Regulations for Early Childhood Programs (http://www.nde.state.ne.us/LEGAL/RULE11.html). These tools are the High/Scope Child Observation Record (COR), the Creative Curriculum Developmental Continuum, and the Assessment, Evaluation and Programming System (AEPS). The state has purchased licenses for the use of these tools; programs complete the assessment online. Some districts have chosen to use more than one assessment and thus more than one online system.
Districts began entering data in 2006, and the first data were reported to the Office of Special Education Programs in the U.S. Department of Education in February 2008. The use of these tools, supported through the online data system, provides the state with unprecedented opportunities to compile needed data,

not only for the required state and local reporting functions, but also for ongoing program improvement and curriculum planning. Nebraska’s system is responsive to the federal mandates of IDEA Part C (birth to age 3) and Part B, Section 619 (ages 3 to 5), as well as the state requirements of Nebraska Department of Education Rule 11, Regulations for Early Childhood Programs (http://www.nde.state.ne.us/LEGAL/RULE11.html), which apply to all pre-K programs operated through public schools.

Program Quality Assessment

The system also includes regular evaluation of programs to ensure that they achieve and maintain overall high quality, employ qualified staff, and operate in compliance with federal and state guidelines. Programs receiving state funding are required to conduct an annual evaluation using one of the environment rating scales, such as the Infant/Toddler Environment Rating Scale-Revised, ITERS-R (Harms, Clifford, and Cryer, 1998); the Early Childhood Environment Rating Scale-Revised, ECERS-R (Harms, Cryer, and Clifford, 1990); or the Early Language and Literacy Classroom Observation, ELLCO (Smith and Dickinson, 2002), and to complete Nebraska’s Rule 11 reporting and approval processes. Data obtained from these tools are used to develop improvement plans. In addition, programs are strongly encouraged to participate in the accreditation process of the National Association for the Education of Young Children and receive technical and financial assistance to do so.

Professional Development

Programs receive continuous support to ensure that their participation in Results Matter generates the highest-quality data and knowledge about how to use those data to improve program quality and child and family outcomes. The state’s Early Childhood Training Center, in cooperation with the organizations that provide the program and child assessment tools, regularly offers training in their use.
The state maintains a cadre of professionals who have achieved reliability in the use of the environment rating scales. In addition, each program provider is required to submit

a Fidelity Process Plan to address how the reliability and validity of the child observational data will be monitored and recorded. These plans describe initial training and subsequent activities to strengthen the validity of the data.

New Jersey

New Jersey’s Abbott Preschool Program is designed to provide high-quality preschool education to children ages 3 and 4 in 31 of the state’s poorest districts. The program has a mixed delivery system and is conducted in school districts and community-based centers, including Head Start programs.

Curriculum and Instruction

The New Jersey State Department of Education (NJDOE) has developed a set of early learning standards—the Preschool Expectations: Standards of Quality (2004)—which outline what children should know and be able to do at the end of their preschool program across a comprehensive set of domains. Five curriculum models have been approved: Creative Curriculum, High/Scope, Tools of the Mind, Curiosity Corner, and Bank Street. Each is aligned to the Preschool Expectations. Each district is required to select one of these approved curriculum models and to provide early childhood educators with professional development related to appropriate curriculum implementation.

Assessments

The NJDOE designed two performance-based assessments in the areas of literacy and mathematics that were linked directly to the Preschool Expectations: the Early Learning Assessment System-Literacy (ELAS-L; New Jersey Office of Early Childhood Education, 2004) and the Early Learning Assessment System for Mathematics (ELAS-M; New Jersey Office of Early Childhood Education, 2006). In the initial years of the preschool program, the state provided professional development for teachers in the observation and documentation of young children’s learning and in administering and scoring the ELAS assessments. While these

measures were originally intended to be used both for program evaluation and to inform instructional practice, state officials decided that they would be used only for instructional planning. In the ninth year of Abbott preschool implementation, the districts must select a commercially produced performance-based assessment that covers the entire range of domains in the Preschool Expectations. The ELAS instruments may still be used in the areas of literacy and mathematics.

Assessment at Various Levels

At the classroom level, teachers administer a performance-based assessment that covers the range of domains outlined in the Preschool Expectations. These formative assessments are intended to inform instructional practice and to give teachers direct information on the learning and development of individual children. Up one level, a sample of the community-based and school district classrooms is assessed for quality on the ECERS-R. The results of these measures are used for teacher professional development. ECERS-R scores are also reported at the district level and used to monitor classroom quality across the 31 districts. Statewide, a longitudinal study is tracking the progress of a sample of children who have participated in the Abbott preschool program on nationally normed measures of language, literacy, and mathematics. In addition, a regression discontinuity design is being used to estimate the impact of preschool on the performance of children who received it in comparison to those who did not.

THINKING ABOUT ASSESSMENT AS A SYSTEM

Despite the clear advantages of a systems approach to early childhood care and education, there is no doubt that the move toward systematicity will encounter many obstacles. The states and the federal government often effect change in the early childhood system by introducing new programs, local or limited innovations, and underfunded mandates.
These might well constitute good models or useful efforts, but at the same time they undermine efforts to build coherence across programs and funding sources.

Similarly, laudable efforts to increase accountability can lead to consequences that undermine coherence. For practical reasons, accountability efforts typically involve selection of a small number of assessment instruments that carry high stakes for the program. Concentrating attention on a specific test rather than on building a system can lead to unintended consequences. When the results of that test have significant repercussions, one consequence is often that the prevailing instruction and curriculum will come to be significantly affected by the particulars of that test—specifically, by the details of material tested or the formats used. In this situation, gains observed in test results may not represent true gains in learning or progress toward meeting standards. Instead, they may primarily reflect children’s improved ability to respond to items on a particular kind of test. A typical pattern is that test scores in the first years after a new test is introduced will show significant—and publicly celebrated—increases, particularly if high stakes are involved, but these improvements tend to level off after that initial uplift (see Linn, 2003, for a general survey; see also Herman and Perry, 2002, for an example from California).

Further evidence of this phenomenon comes from cases in which alternate indicators of the tested skill fail to match the gains shown by the high-stakes test. If children have indeed improved in mathematics, for example, gains should be evident on other indicators of mathematical skill; if not, the gains are suspect. The disjunction between the high-stakes and alternate tests of the same skill has been observed with older children for mathematics (e.g., see Koretz and Baron, 1998, for an example from Kentucky) and is the typical pattern seen when comparing results on state tests with those on the National Assessment of Educational Progress (Linn, 2003).
Some observers believe that such patterns illustrate the limits of what can be achieved primarily through test preparation, and that continuing improvement over the long term will require more meaningful changes in teaching, learning, and assessment. These findings suggest the need for a systematic approach in which it is possible to validate gains and the meaning of test scores continuously over time. Assessment by itself cannot improve children’s learning—it is the correct use of assessment information that can bring about that aim. If they are to improve learning, assessments must be based

on the early learning objectives and be set in contexts that relate to curriculum and teaching practices that are common in early childhood education. Assessment should appraise what children are being taught, and what is taught should embody the aims of learning described in the standards. Thus, all of the elements in the early childhood education system have to be built on a shared vision of what is important for children to know and understand, how teaching practices affect that knowledge and understanding over time, and what can be taken as evidence that learning and development have occurred (National Research Council, 2001).

The following criteria, developed by the committee, operationalize these somewhat abstract principles in important characteristics that child outcome measures should have if they are to provide useful evidence for the improvement of early care and education systems.

1. A clearly articulated purpose for the testing.
2. Identification of why particular assessments were selected in relation to the purpose.
3. A clear theory connecting the assessment results and quality of care.
4. Observation of quality of instruction and specification of what would need to be focused on for improvement.
5. A clear plan for following up to improve program quality.
6. Strategizing to collect the required information with a minimum of testing.
7. Appropriate preparation of testers to minimize disruptive effects on child responses.

Assessment systems must operate at multiple levels—individual child, classroom, center, school district, state, and national levels. An assessment system is thus sensitive to a variety of influences—some that originate from the top and spread down, and others that work from the bottom up (National Research Council, 2001).
Assessments of children must be based on an appreciation of the development and learning of typically developing children and of the typical range of variation for children of any age. This knowledge must be based on the best scientific evidence available,

must be sensitive to the values inherent in choosing to concentrate on specific areas rather than others, and must be complemented by sound professional expertise (National Research Council, 2001). An example of an instrument designed according to these principles is the Desired Results Developmental Profile-Revised, a part of which is illustrated in Figure 10-2.

Thus, a successful system of assessments must be coherent in a variety of ways (National Research Council, 2001, 2006). It will be horizontally coherent when the curriculum, instruction, and assessment are all aligned with the early learning standards, target the same goals for learning, and work together to support children’s developing knowledge and skill across all domains. It will be vertically coherent when there is a shared understanding at all levels of the system (classroom, center, school or program, and state) of the goals for children’s learning and development that underlie the standards, as well as consensus about the purposes and uses of assessment. And it will be developmentally coherent when it takes into account what is known about how children’s understanding develops over time and the content knowledge, abilities, and understanding that are needed for learning to progress at each stage of the process. Developmental coherence should extend across the boundaries between preschool and K-12 schooling, to ensure that the goals for young children’s learning and development are formulated by taking into account later goals and expectations and with an understanding of how early accomplishments do and do not predict later achievement. These coherences are necessary in the interrelationship of all the subsystems.
For example, the development of early learning standards, curriculum, and the design of teaching practices and assessments should be guided by the same framework for understanding what is being attempted in the classroom that informs the training of beginning teachers and the continuing professional development of experienced teachers. The reporting of assessment results to parents, teachers, and other stakeholders should also be based on this same framework, as should the evaluations of effectiveness built into all systems. Each child should have an equivalent opportunity to achieve the defined goals, and the allocation of resources should reflect those goals. We emphasize that a system of assessment is only as good as the effectiveness—and coherence—of all of its components.

NOTE: This section on coherence draws heavily upon the content of the National Research Council’s 2006 report, Systems for State Science Assessment.
