Advancing Scientific Research in Education

2
Promoting Quality

Rigorous studies of how students learn, how schools function, how teachers teach, and how the different cultural, political, economic, and demographic contexts in which these and related investigations are framed can provide (and have—see National Research Council, 2002, for examples) important insights into policy and practice. And yet poor research is in many ways worse than no research at all, because it is wasteful and promotes flawed models for effective knowledge generation. High-quality research is essential.

As described in Chapter 1, the questions of what constitutes high-quality education research and of how well current scholarship meets those standards have taken on a high profile. Indeed, there is no shortage of answers. It is beyond the scope of this report to provide a fair and comprehensive description of the many important issues that have been raised in recent years with respect to how to define quality in scientific education research, or to comment on how the committee views them. Rather, in this chapter we begin with a brief discussion of how we define quality, taking our cue from Scientific Research in Education, and provide illustrations of select elements of quality that emerged in the committee's workshops. This cursory treatment of definitional issues is intended to provide the context for consideration of specific mechanisms for promoting high-quality scientific research in education.
ELEMENTS OF QUALITY

Scientific Research in Education was an attempt to articulate what is meant by quality with respect to scientific research in education. That book offered six principles that underlie all fields of scientific endeavor, including scientific research in education (National Research Council, 2002, p. 52):

1. Pose significant questions that can be investigated empirically.
2. Link research to relevant theory.
3. Use methods that permit direct investigation of the question.
4. Provide a coherent and explicit chain of reasoning.
5. Replicate and generalize across studies.
6. Disclose research to encourage professional scrutiny and critique.

In the scientific study of education, several features of teaching, learning, and schooling shape the ways in which the guiding principles are instantiated (e.g., the mobility of the student population). Together, the principles and the features provide a framework for thinking about the quality of scientific education research. We adopt this framework as our working definition of quality.

Recently, much attention has been focused on the methods used in education studies (most closely related to the third principle above), with a particular emphasis on randomized field trials to help establish cause-and-effect relationships (see, e.g., U.S. Department of Education, 2002, 2004; What Works Clearinghouse, 2004). Methods are the tools that researchers use to conduct their work; their appropriate use is essential to promoting quality. Scientific Research in Education makes a number of important arguments related to methods. Specifically, the choice of method or methods must be driven by the question posed for investigation: no method can be judged as good, bad, scientific, or otherwise without reference to the question it is being used to address. In addition, scientific inferences are strengthened if they hold up under scrutiny through testing using multiple methods.
A related and final point made in the book is that both quantitative and qualitative methods are needed to fully explore the range of questions about educational phenomena that are ripe for scientific study. The tendency in the current debates—in research, policy, and practice communities—to align with either quantitative or qualitative approaches is therefore neither sensible nor constructive. Indeed, working to integrate the two types of methods is likely to accelerate scientific progress.

Although these and related points about methodology are essential to understanding and promoting high-quality scientific research in education, an important conclusion of Scientific Research in Education is that scientific quality is a function of all six of these principles. Thus, in our view the national conversation about methodological quality is but the beginning of a broader dialogue that is necessary to fully address issues of scientific quality in education research. Here we provide a few examples of how discussions at the workshops illustrate the importance of other principles. While not exhaustive, they suffice to make the point that understanding and promoting high-quality scientific research in education requires attention to all principles.

Pose significant questions that can be investigated empirically. A key idea embedded in this principle is that research questions should address important issues of practice, policy, and processes. During the peer review workshop, for example, participants highlighted the importance of ensuring that diverse groups of stakeholders be involved in developing federal agencies' research agendas, prioritizing research questions, and conducting the actual research. Without a range of scholarly perspectives and individuals traditionally underrepresented in education research, the types of questions addressed in a research portfolio will be necessarily limited in scope and are unlikely to home in on significant questions across the broad swath of issues and populations in education.

Link research to relevant theory. The workshop on building a knowledge base in education highlighted the critical role of theoretical constructs in research.
Several workshop speakers discussed the process of relating data to a conceptual framework as guiding research and providing the support for scientific inference. Data enable assessments of the explanatory power of theoretical frameworks for modeling real-world phenomena; similarly, theories provide meaning for data. In Appendix B, we summarize an example from cross-cultural psychology and sociolinguistics that traces how related lines of inquiry developed as researchers moved back and forth between periods of empirical investigation and theory building, building on each other over time.

Replicate and generalize across studies. The workshop on building an accumulated knowledge base in education also brought into sharp relief the core ideas of replication and generalization in science. No study is an island unto itself: scientific progress is achieved when results from multiple studies are interpreted jointly and the generalizability of theoretical concepts explored and articulated. Replication involves both application of the same conditions to multiple cases and replication of the designs, including cases that are sufficiently different to justify the generalization of results in theories. Without convergence of results from multiple studies, the objectivity, neutrality, and generalizability of research is questionable (Schneider, 2003). Appendix B includes more detail on these ideas.

MECHANISMS FOR PROMOTING QUALITY

There is no centralized place that ensures quality control in education research or any other scientific endeavor. Quality standards are often informal, and their enforcement is upheld by the norms and practices of the community of researchers (National Research Council, 2002). The diverse and diffuse nature of the investigators in the field of education research makes common standards elusive; however, the workshops highlighted three leverage points for actively promoting high-quality research: peer review processes within federal agencies, implementation of research designs in educational settings, and partnerships between education researchers and practitioners.

Recommendation 1: In federal agencies that support education research, the criteria by which peer reviewers rate proposals should be clearly delineated, and the meaning of different score levels on each scale should be defined and illustrated. Reviewers should be trained in the use of these scales.

Earlier this year, the committee issued a report titled Strengthening Peer Review in Federal Agencies That Support Education Research (National Research Council, 2004b).
That report details our conclusions and recommendations regarding the peer review processes of federal funding agencies and includes suggestions, among other recommendations, for how these systems can promote high-quality education research. In this recommendation, we highlight a critical mechanism for identifying and supporting high-quality scientific research in education through peer review: defining clear standards for the review and ensuring reviewers are trained in their use.

The process of peer review, in which investigators judge the merits of proposed new work, offers a natural place to engage the field in the contested but crucial task of developing and applying high standards for evaluating the merits of proposed research. The federal agencies represented at our workshop[1] all used different evaluation criteria in their peer review processes. The extent to which the criteria were defined, as well as the nature and intensity of training for reviewers on how to apply those criteria, varied as well. Given differences in mission and other factors, it is reasonable to expect variation in review criteria; however, we recommend that attention be paid to ensuring that criteria are clearly defined and based on valid and reliable measures. We also recommend that the development of training materials and the implementation of tutorials for reviewers become standard operating procedure, and that high-quality descriptive feedback associated with scores and group discussion be provided to applicants.

Research shows low levels of consistency in initial ratings of proposals across peer reviewers (Cicchetti and Conn, 1976; Kemper, McCarthy, and Cicchetti, 1996; Daniel, 1993; Cicchetti, 1991; Cole and Cole, 1981). There is potential for significant improvement in the reliability of ratings across reviewers through careful training on the rating scale criteria and on the rating process itself. This finding is consistent with a large literature on job performance ratings (Woehr and Huffcutt, 1994; Zedeck and Cascio, 1982) indicating the importance of careful definition of scale "anchors" and training in the rating process. The training of reviewers should focus on the criteria used to evaluate research, defining those criteria very clearly and training reviewers to use them reliably.
If reviewers do not have a clear understanding of what the criteria are, they carry their own frame of reference as a defining point into the review process, resulting in lower reliability of peer review, whether for manuscripts submitted to professional journals or for research grant proposals submitted for funding (Cicchetti, 2003). Not only could training improve the consistency of initial ratings across reviewers on a panel, but it also could facilitate group discussion that leads to stronger consensus and reliability of group ratings. It can have the added benefit of improving the validity of the feedback provided to applicants by better aligning it with the specific evaluation criteria, both in terms of the particular scores given and the descriptions of a proposal's strengths and weaknesses.

[1] The workshop included officials and staff from the Department of Education, the National Science Foundation, the National Institutes of Health, and the Office of Naval Research.
BOX 2-1
Training Peer Reviewers to Improve Research Quality

Teresa Levitin of the National Institute on Drug Abuse presented an example of a training program developed by agency staff for their peer reviewers. In describing the program, Levitin said that much of the training provided to reviewers takes place at the front end of the process, and that staff must work to diagnose potential issues starting with the initial contact with reviewers through the final submission of scores and written comments at the conclusion of the panel. The training is both formal and informal, and focuses on general principles and policies. The program Levitin described is provided on-line in advance of the peer review meeting and takes about 10 minutes to complete. Key elements of this training include:

- Orientation to the role of the peer reviewers in the grant-funding process.
- Instructions for identifying potential conflicts of interest in applications.
- Factors to consider or to ignore in making technical merit ratings.
- Guidance for providing specific comments and feedback for applicants.
- Expectations for participation in the panel meeting itself.

Throughout the mini course there are short "quiz" questions that present scenarios and prompt reviewers to apply ideas from the course to real-life situations that often arise in reviewing applications. Levitin also described a process for monitoring reviewers, from start to finish, and taking action when needed to correct inaccurate or inappropriate comments. Monitoring begins with embedded questions in the training course and continues through analysis of the resulting ratings and feedback comments.

No assessment of the effectiveness of the training and monitoring program described by Levitin was presented at the workshop. The design of this program, however, is highly consistent with long-established findings from industrial psychology on effective ways to improve the reliability and validity of job performance ratings (Borman, 1979; Pulakos, 1986; Hauenstein, 1998). The course is available for viewing online at http://www7.nationalacademies.org/core/review_course_nih_version.pdf.

Workshop: Peer Review of Education Research Grant Applications: Implications, Considerations, and Future Directions, February 25-26, 2003. Transcript available at: http://www7.nationalacademies.org/core/
Key Speaker: Teresa Levitin, National Institute on Drug Abuse
Related Product: Strengthening Peer Review in Federal Agencies That Support Education Research, http://books.nap.edu/catalog/1054.html

Training is important to ensure that reviewers understand how to approach the evaluation of proposals and how to assign specific ratings to each criterion. At the workshop, Teresa Levitin of the National Institute on Drug Abuse provided several useful ideas for how to illustrate key concepts to reviewers about the review criteria in a relatively short amount of time (see Box 2-1). To our knowledge, there are few such models from which to learn about effective training practices in the specific context of peer review of education research proposals in federal agencies. Our recommendation is that agencies place strong emphasis on developing, evaluating, and refining training programs to ensure that reviewers are applying criteria in ways that are intended, contributing to the process in effective ways, and learning from the experience.

Delivering feedback to applicants can also be an effective way to signal the field's (often implicit) standards of quality, reinforcing them in a formal context. Indeed, one workshop participant argued that "peer review is not just about judging scientific merit, it is about defining it and creating it" (Redish, 2003).

Finally, the role of peer reviewers is typically to provide advice to the head of the agency about the relative merits of proposals they considered—usually in the form of a slate of ranked proposals. The decision makers in
these agencies must be responsive to the results of the peer review process, yet they do play a role in ensuring quality in what they choose to fund based on that advice. It could well be that few proposals submitted in a particular competition will lead to research of the highest quality. In this case, the most important way to improve the quality of education research is to fund those few and then have appropriate agency staff work with unsuccessful applicants to improve the quality of their future proposals. Such decisions can be politically risky—if appropriators see that funds have not been spent at year's end, they very well may decide to cut funding levels in the next fiscal year. Effectively balancing these very real potential consequences against quality concerns will take courage and leadership at the highest ranks of the agency.

Quality standards used to vet manuscripts for publication by peer-reviewed journals are similarly important. They are likely to be different from those used in the peer review of proposals because the products (manuscripts rather than proposals) are different. To some degree, standards will vary because each journal has its own niche in the scholarly community. A roundtable of editors and other participants in manuscript review and publication featured at one of the workshops made this clear: some journals are primarily theory-based; others are exclusively empirical. Some are archival; others publish syntheses of related work over a period of time. In addition, reviewers of manuscripts submitted for publication in a journal rarely interact to discuss the strengths and weaknesses of submissions. Nonetheless, explicit attention to standards for manuscript review, along the same lines as for proposal review, is essential for promoting high-quality education research.
Recommendation 2: Federal agencies that support education research should ensure that as a group, each peer review panel has the research experience and expertise to judge the theoretical and technical merits of the proposals it reviews. In addition, peer review panels should be composed so as to minimize conflicts of interest, to balance biases, and to promote the participation of people from a range of scholarly perspectives and traditionally underrepresented groups.

Deciding who counts as a peer is central to quality considerations: the peer review process, no matter how well designed, is only as good as the people involved. Judging the competence of peers in any research field is a complex task requiring assessment on a number of levels. In education research, it is particularly difficult because the field is so diverse (e.g., with respect to disciplinary training and background, epistemological orientation) and diffuse (e.g., housed in various university departments and research institutions, working on a wide range of education problems and issues). The workshop discussions brought out several related issues and illustrated the difficulties in, and disagreements associated with, assembling the right people for the job.

The first priority for assembling a peer review panel is to ensure that it encompasses the research experience and expertise necessary to evaluate the theoretical and technical aspects of the proposals to be reviewed. For agencies that fund education research, we define "theoretical and technical aspects" to refer to three areas: (1) the substance or topics of the proposals, (2) the research methods proposed, and (3) the educational practice or policy contexts in which the proposal is situated. Relevant experience and expertise should be determined broadly, based on the range of proposal types and program priorities. If, for example, a specialized quantitative research design is being proposed, at least some of the reviewers should have expertise in that design; the same holds for a specialized qualitative design. In addition, it is the range of proposal types and program priorities, not their frequency or conventionality, that should determine the scope of the panel's experience and expertise. In most cases, individual panelists will have relevant experience and expertise in one or more, but not all, of the topics and techniques under review. It is the distributed expertise of the review panel as a whole, and not the individual members, that establishes the appropriateness of the panel for the task (Hackett and Chubin, 2003).
In this way, peer review is "intended to free [decision making] from the domination of any particular individual's preferences, making it answerable to the peer community as a whole, within the discipline or specialty" (Harnad, 1998, p. 110).

Reviewers should not harbor biases against other researchers or forms of research, nor should they have conflicts of interest that arise from the possibility of gaining or losing professionally or financially from the work under review (e.g., they work at the same institution). It is critical that reviewers can be counted on to judge research proposals on merit. But in practice, it is not possible to prevent researchers in the same field from knowing one another's work and each other personally. They may have biases for or against a certain type of research. They may be competitors for the same research dollars or the same important discovery, or have other conflicts of interest associated with the research team proposed in a study (e.g., a past student-faculty adviser relationship). In such situations, impartiality is easily compromised and partiality not always acknowledged (Eisenhart, 2002). Indeed, Chubin and Hackett (1990) argue that increases in specialization and interdisciplinary research have shrunk the pool of qualified reviewers to the point at which only those with a conflict of interest are truly qualified to conduct the review. Potential conflicts of interest must be minimized, and biases balanced. Both are serious limitations of peer review and can probably be addressed in the long term only by expanding the pools of qualified reviewers, through training and outreach to experts traditionally underrepresented in the process.

In assembling peer review panels, attention to the diversity of potential reviewers with respect to disciplinary orientation as well as social background characteristics is also important to promote quality. Peer review panels made up of experts who come from different fields and disciplines and who rely on different methodological tools can together promote a technically strong, relevant research portfolio that builds on and extends that diversity of perspectives. Similarly, panels that are diverse with respect to salient social characteristics of researchers can be an effective tool for grounding the review in the contexts in which the work is done and for promoting research that is relevant to, and appropriate for, a broad range of educational issues and student populations.

There is a final and particularly contentious issue related to diversity and to identifying the peers to review proposals for education research: how education practitioners and community members should be involved.
Because education research is applied and attention to the relevance of the work is crucial, it is essential to involve practitioners and community members in the work of the agency. Whether and how they participate on panels, however, is a difficult question. A major concern with the practice of including reviewers without research expertise is that it could lead to inadequate reviews with respect to criteria of technical merit (or, in the criteria we defined above, research methods), a critical aspect of research proposal review in all agencies.[2] In addition, since the field of education research is in the early stages of developing scientific norms for peer review, this important process could be complicated or slowed by the participation of individuals who do not have a background in research.

[2] We recognize that some practitioners and community members do have research expertise. In these cases, the concerns we outline do not apply. Our focus here is on those practitioners and community members who do not bring this expertise to peer review deliberations.

We do see the potential benefits of including practitioners and community members on panels that are evaluating education research funding applications, identifying high-quality proposals, and contributing to professional development opportunities for researchers, practitioners, and community members alike. Thus, we conclude that this option is one of four possible strategies—reviewing proposals alongside researchers, reviewing proposals after researchers' reviews, serving on priority-setting or policy boards, or participating in retrospective reviews of agency portfolios—that agencies could adopt to actively engage practitioner and community member groups in their work.

A final note: while our focus is on federal funding agencies, this recommendation on peer review of proposals for education research is applicable to similar foundation efforts. Much education research is supported by private and not-for-profit organizations, and their role in promoting high-quality research through their grant making is a significant one. Similarly, journals, through their choice of editors, publication committee members, and reviewers, as well as their manuscript review procedures, perform a significant role in shaping the quality of scholarly work. Just as funding agencies that screen proposals need to ensure a highly qualified, diverse set of reviewers, so too must the publication outlets that publish the finished research products.

Recommendation 3: In research conducted in educational settings, investigators must not only select rigorous methods appropriate to the questions posed but also implement them in ways that meet the highest standards of evidence for those questions and methods.
As described above, a critical scientific principle is the idea that the choice of methods used in particular studies should be driven by the nature of the question being investigated. This notion was extended in the workshop on the conduct of one method—randomized field trials—in educational settings to focus attention on the importance of rigorous implementation of research methods in educational settings. The report Implementing Randomized Field Trials in Education: Report of a Workshop contains a full accounting of the many practical issues associated with successful research of this kind discussed at the event (National Research Council, 2004a).

Randomized field trials in education, when they are feasible and ethical, are highly effective methods for gauging the effects of interventions on educational outcomes (National Research Council, 2002). The power of random assignment of students (or schools, or other units of study) to groups is that, on average, the two groups that result are initially the same, differing only in terms of the intervention.[3] This allows researchers to more confidently attribute differences they observe between the two groups to the intervention, rather than to the known and unknown other factors that influence human behavior and performance. As in any comparative study, researchers must be careful to observe and account for any other confounding variables that could differentially affect the groups after randomization has taken place. That is, even though randomization creates (statistically) equivalent groups at the outset, once the intervention is under way, other events or programs could take place in one group and not the other, undermining any attempt to isolate the effect of the intervention. Furthermore, the use of multiple methods in such studies is highly desirable: for example, observational techniques can depict the implementation of the intervention and sharpen the ability to understand and isolate the influence it has on outcomes.

The primary focus of the workshop was on how this kind of design can be implemented successfully in district or school settings. Pairs of researcher-practitioner teams described their experiences designing and conducting randomized field trials in schools in Baltimore and suburban Pittsburgh and made clear that the selection of this method is not sufficient to ensure that a rigorous study is conducted—implementation matters.
The challenges they described are daunting. Recruitment and retention of students and schools to participate are fraught with difficulties associated with trust, mobility and turnover of student and teacher populations, and laborious consent processes. Teachers are likely to share ideas and practices that seem promising, blurring differences between the interventions the two groups receive. Life intervenes: in the studies described at the workshop, research tasks were affected by a fatal fire, a snowstorm, and a government shutdown.

The presenters offered ways of anticipating and dealing with many of these problems, all of which were facilitated by the development of strong partnerships between the research team and district and school personnel. Some strategies for overcoming obstacles involve design features: for example, the so-called Power4Kids study recently launched in a consortium of districts outside of Pittsburgh was designed to address the concern that the process of random assignment may result in some students not receiving a promising intervention. Each of the participating schools was assigned to test one of four reading tutorials, so the study design does not exclude any school from these interventions (students were then randomly assigned within each school to an intervention or to continue to receive existing instruction) (Myers, 2003). Other strategies involve facilitating key implementation tasks, like training school-based personnel to coordinate obtaining consent from participants and to monitor compliance with random assignment to groups. Without the mutual trust and understanding that is enabled by a strong partnership, none of these strategies is feasible. Furthermore, expertise and flexibility in research staff and adequate project resources are needed to deal successfully with unforeseen issues as they arise (Gueron, 2003).

In our view, attending to proper planning and implementation of design features is just as important for other kinds of methods when doing work in real-world educational settings. The workshop series did not address implementation issues with other methods directly (e.g., surveys, ethnographic studies), but explicit attention to them is important as a focus of future work.

[3] It is logically possible that differences between the groups may still be due to idiosyncratic differences between individuals assigned to each group. However, with randomization, the chances of this occurring (a) can be explicitly calculated and (b) can be made very small, typically by a straightforward manipulation like increasing the number of individuals assigned to each group.
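The statistical point in footnote 3—that chance imbalance under random assignment can be quantified and driven down by increasing sample size—can be illustrated with a short simulation. Everything below (the prior-achievement covariate, the group sizes, the seed) is invented for illustration and is not drawn from the studies discussed at the workshop:

```python
import random
import statistics

def assignment_gap(n_students, rng):
    """Randomly split n_students into two equal groups and return the
    absolute difference in mean prior-achievement score between them."""
    # Hypothetical prior-achievement scores (mean 50, SD 10).
    scores = [rng.gauss(50, 10) for _ in range(n_students)]
    rng.shuffle(scores)  # random assignment: first half treatment, rest control
    half = n_students // 2
    treatment, control = scores[:half], scores[half:]
    return abs(statistics.mean(treatment) - statistics.mean(control))

rng = random.Random(2004)  # fixed seed so the sketch is reproducible

# Average chance imbalance across 200 re-randomizations, for a small
# trial versus a much larger one.
reps = 200
avg_gap_small = statistics.mean(assignment_gap(20, rng) for _ in range(reps))
avg_gap_large = statistics.mean(assignment_gap(2000, rng) for _ in range(reps))

print(f"average group gap, n=20:   {avg_gap_small:.2f} points")
print(f"average group gap, n=2000: {avg_gap_large:.2f} points")
```

Across repeated randomizations the expected gap shrinks roughly with the square root of the sample size, which is why "increasing the number of individuals assigned to each group" makes idiosyncratic imbalance very small.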
Recommendation 4: Federal agencies should ensure appropriate resources are available for education researchers conducting large-scale investigations in educational settings to build partnerships with practitioners and policy makers.

As we have argued above, a key lesson that emerged from the workshop on implementing randomized field trials in education is that the quality of any large-scale research conducted in districts or schools—largely independent of method or design—depends significantly on relationships built between researchers and district- and school-based personnel.
Educators are often wary of researchers. The reasons for this uneasiness, both perceived and real, are many: educators may feel that the topics of study do not align with the concerns they face from day to day, research tasks demand their scarce time and effort, and many kinds of research require that educators cede control of instructional decision making to investigators in order to meet research goals. Workshop discussions made clear that these circumstances have significant bearing on research quality. Unless researchers address these concerns directly and work toward commonly held goals, educators have little incentive to ensure that research protocols are followed; to recognize, prevent, or communicate problems relevant to the study's objectives that arise during the study period (e.g., treatment fidelity issues in comparative studies); or to bring their observations and insights to bear on the research. Partnerships enhance the ability to address these issues, especially partnerships that are sensitive to racial and ethnic diversity and that bridge the gaps in ethnicity and socioeconomic status that often exist between researchers and those in the districts and schools.

In each of the three studies featured at the workshop, researchers were able to gain access to the schools, to ensure cooperation in faithfully carrying out the interventions, and to make progress toward mutual goals by establishing trust and encouraging open communication. Their experiences suggest that it is nearly impossible for researchers to conduct randomized field trials—or any other large-scale study—in districts and schools unless both researchers and education service providers take time to understand each other's goals and develop a study design that will help both parties to reach them.
The committee was particularly impressed with one model for how researcher-practitioner partnerships can be developed and nurtured to change these incentives to the advantage of all. This two-decade-old partnership in the Baltimore City Public School System, which promotes research and practice simultaneously in the context of a series of randomized field trials, is described in Box 2-2. Over the past four years, the National Research Council has issued a series of reports focused on how such partnerships could form the basis of a major new education research infrastructure called the Strategic Education Research Partnership (SERP). The culminating proposal (National Research Council, 2003b) contains many of the elements of the partnership described in Box 2-2. Indeed, part of the justification for the large-scale effort is that there are many such examples of productive partnerships (involving randomized field trials and other kinds of education research), but they are not
connected in a way that contributes to a unified body of knowledge or that improves practice on a large scale. One of the three main components of the SERP plan is field sites, which are built on the kinds of tenets featured in the partnerships described at the workshop: mutual trust, collaborative prioritization, and deployment of resources to support the research-practice partnership.

Creating these partnerships requires time and money. To implement the model for the series of large-scale randomized field trials described in Box 2-2, for example, the researcher-practitioner team estimated that a year of work would be needed before the research could be formally launched. Thus, when funding large-scale studies to be conducted in educational settings, federal agencies and other funders need to ensure that adequate resources are available for partnership building. And investigators need to take the task seriously and spend the time in advance establishing that understanding and trust.

Of course, not all education research requires working relationships with districts or schools. For example, education policy research that focuses on macro-level trends and relationships typically involves the use of existing large-scale data sets. However, even these kinds of projects rely (albeit less directly) on the fact that someone had to engage schools, districts, or other educational settings to gather the data. Furthermore, appropriate interpretation of the information collected may be difficult without thorough grounding in the context of the classroom. The bottom line is that promoting high-quality education research requires consideration of how to effectively engage districts and schools, and this requires time and money in research budgets, regardless of the study design.

CONCLUSION

The field of education research and the related goals of evidence-based education will not be served if the underlying science lacks rigor.
In this chapter we point to ways in which quality can be promoted in a range of settings. Overall, the approach taps different institutions and thus the talents and energies of a wide range of practicing scholars in the field. Acting on these recommendations, therefore, could formally engage a broad swath of the diverse talent of education researchers and, in turn, enrich the ongoing dialogue about quality.
BOX 2-2
Effective Implementation of Education Research Through Partnerships

The Kellam-Chinnia team described a series of randomized studies in the Baltimore City schools, stretching over two decades in the context of the larger, long-term Baltimore Prevention Program and supported by a strong partnership between the research team and district- and school-based personnel. The current study is exploring the effects of an integrated set of preventive first-grade interventions aimed at improving teachers' classroom behavior management, family-classroom partnerships regarding homework and discipline, and teachers' instructional practices regarding academic subjects, particularly reading. Chinnia explained that the school system supports the study because it lays the foundation for translating its findings into policy and practice.

In addition to assessing the impact of the program, the researchers will follow the first-grade children through the end of third grade, and they will also follow their first-grade teachers over two subsequent cohorts of first graders. This long-term observation will allow the researchers to test whether the multiple levels of support and training for teachers sustain high levels of program practice. The study will also test, in its fourth year, whether the support and training structure is successful in training nonprogram teachers.

In their presentation, Kellam and Chinnia described how their partnership helped both the education community and the research team meet their goals. Kellam asserted that when a partnership based on "mutual self-interests at multiple levels" is in place, obtaining the consent of the parents of participating children requires far less logistical work than otherwise might be the case—illustrating how key implementation tasks such as recruitment are facilitated by the relationship.
Chinnia described some of the self-interests that led to the long-term partnership. She explained that the randomized field trials helped to meet several of the school system's goals, including intervening early in elementary school to enhance and maintain student achievement, identifying best practices for instruction and classroom management, and promoting parent involvement in students' progress. She noted that the current study could help to sustain best practices in a whole-day first-grade program, and that the goal of creating and sustaining whole-day first-grade programs is included in the Baltimore City Public School System's master plan.

In sum, they described the development of an effective partnership as requiring six essential components (Kellam, 2000, p. 19):

1. Analyze the social/political structure of the school district.
2. Learn the vision and understand the challenges and priorities.
3. Identify mutual self-interests within and across the leadership.
4. Fit the prevention research/program interests under the visions of the leadership.
5. Request ad hoc oversight committee of leaders.
6. Work through trust issues.

Workshop: Randomized Field Trials in Education: Implementation and Implications
September 24, 2003
Transcript available at: http://www7.nationalacademies.org/core/
Key Speakers: Sheppard Kellam, American Institutes for Research; Linda Chinnia, Baltimore City Public School System
Related Product: Implementing Randomized Field Trials in Education: Report of a Workshop
http://books.nap.edu/catalog/10943.html