

2 Analyzing Key Elements
Pages 20-49

The Chapter Skim interface presents the single passage algorithmically identified as the most significant on each page of the chapter.


From page 20...
... This chapter provides an overview of some of the major components of peer review systems designed to assess education research proposals in federal agencies. We describe and analyze these components with respect to how they promote particular objectives.
From page 21...
... 6. Peer review of education research proposals in federal agencies could be improved in a number of ways.
From page 22...
... for what national questions other people are posing, and responses to those questions." And Edward Redish -- a physicist and physics education researcher at the University of Maryland -- also pointed to the benefits for researchers who serve on peer review panels, citing the value he has experienced in "see[ing]
From page 23...
... Workshop discussions also highlighted the role of peer review as a tool for professional development -- for proposers, reviewers, and agency staff -- to promote a professional culture of inquiry and rigor among researchers. This culture includes an ethos steeped in self-reflection and integrity, as well as a commitment to working toward shared standards of practice (Shulman, 1999; National Research Council, 2002; Feuer, Towne, and Shavelson, 2002)
From page 24...
... KEY OBJECTIVES OF PEER REVIEW FOR EDUCATION RESEARCH Taking our cue from this discussion of multiple purposes, we conclude that two broad objectives ought to guide the design of peer review systems in federal agencies: the identification and support of high-quality education research and the professional development of the field. The first objective of using peer review as a process to achieve quality
From page 25...
... By defining and upholding high standards of quality in the peer review process, researchers can exert a powerful influence on questions of what counts as high-quality research in particular contexts -- providing input directly from the scholarly communities with respect to the implementation of policies stemming from the now numerous definitions of quality research that appear in federal education law (e.g., the No Child Left Behind Act of 2001, the Education Sciences Reform Act of 2002, and bills pending to reauthorize the Individuals with Disabilities Education Act of 1997 and parts of the Higher Education Act of 1965, P.L.
From page 26...
... In much the same way, the feedback that reviewers provide to applicants often signals areas of contention about new ideas or techniques, preparing the ground for broader scrutiny and consideration of where and how to push the knowledge base and its application. Agency staff teach and learn as well: they familiarize reviewers with relevant agency priorities, goals, review criteria, process specifics, and the particular objectives held in a research competition for advancing the field.
From page 27...
... In some of these cases, we highlight the tensions that arise and the trade-offs that are often required when peer review attempts to serve multiple purposes. IDENTIFYING AND SUPPORTING HIGH-QUALITY RESEARCH The formal review of education research proposals by professional peers must be designed to identify and support high-quality research.
From page 28...
... Using the standards for peer reviewers that were in place at the time, they focused on the extent to which each reviewer had content, theory, and methodological expertise. They found a number of disconnects, including a relatively low level of fit on the methodological aspects of the research proposals under review (August and Muraskin, 1998)
From page 29...
... Beyond these three broad areas of competence that we view as essential for peer review panels, additional kinds of expertise relevant to the process surfaced in workshop discussions. For example, Robert Sternberg, director of the Yale Center for the Psychology of Abilities, Competencies and Expertise and the president of the American Psychological Association, suggested that creativity is an undervalued yet critical talent for assessing research quality. Teresa Levitin, director of the Office of Extramural Affairs, speaking from her experience running panels at the National Institute on Drug Abuse at the National Institutes of Health (NIH)
From page 30...
... Agencies deal with these issues in different ways. Steven Breckler, of the Social, Behavioral, and Economic Sciences directorate at the NSF, referenced a "complex array of conflict of interest rules" that applies to peer review of research proposals submitted to the NSF.
From page 31...
... As we argue in the section that follows, engaging a range of perspectives sharpens thinking about, and opens avenues for considering, quality in the research that is funded. And as we discuss in the section on quality, so long as reviewers can agree on basic standards of quality, these divergent preferences can be accommodated in the peer review process and indeed can strengthen its outcomes.
From page 32...
... Assembling diverse panels with respect to groups traditionally underrepresented in education research -- like racial and ethnic minorities -- is also an important consideration that surfaced a number of times in workshop discussions and is especially relevant to education research, as it often grapples explicitly with issues involving diversity. One important aspect of research quality across many of the agencies discussed at the workshop is the relevance or significance to educational problems of the proposed work.
From page 33...
... For this reason, in the long run, we think it is likely that socially and culturally diverse peer review panels will result in a more expansive set of perspectives on the assessment of relevance and significance, thereby improving the overall quality of the research over time. Since quality in the peer review of education research proposals includes both technical and relevance criteria, ensuring a diverse set of panelists who collectively bring the expertise and experience necessary to judge both well should always be the goal.
From page 34...
... In his remarks, Dodge warned that asking individuals without research expertise to evaluate scientific quality "discredits the process." And Hackett, while arguing that peer review in education is a natural place to help bridge policy and practice, acknowledged that the participation of practitioners (those without research expertise) on review panels could undermine attempts to develop a strong sense of professional culture in the field.
From page 35...
... Still another way to systematically engage practitioners in reviewing research is through an approach used by OSEP, which assembles peer review panels of stakeholders to retrospectively assess the value of the agency's portfolio of research in addressing practical ends. This structure, when coupled with peer review by researchers, captures the expertise of both but does not involve practitioners in judging the merits of research proposals directly.
From page 36...
... Ensuring quality along the dimensions used by an agency suggests the need to create measures that are both reliable and valid. Reliability in this context refers to the extent to which a research proposal would receive the same ratings, funding outcome, and feedback across multiple independent review panels.
From page 37...
... However, the reliability of panels as a whole, while a more useful construct, is difficult to measure because agencies overseeing reviews of research proposals never have the luxury of convening multiple panels to review the same proposals and then comparing the results across the independent panels. Validity, as applied to the results of peer review, refers to the extent to which inferences made from the resulting ratings and specific feedback are warranted given the information provided in proposals for research funding (Messick, 1989)
From page 38...
... Workshop discussions about research quality analyzed both short-term and long-term aspects of quality, and many participants argued that peer review systems ought to be designed to attend to both. Peer review is typically designed to identify high-quality proposals for a given agency competition.
From page 39...
... Diversity Several workshop participants suggested that since peer review can and should serve an educative function, efforts to involve a diversity of research perspectives as well as the participation of people from traditionally underrepresented populations in the process were imperative. In response to a question about how agencies ensure diverse perspectives on peer review panels, Steven Breckler told the group that NSF program officers spend a significant amount of time trying to identify people and places that "ordinarily are not plugged into the NSF review process." He also pointed to the NSF criteria for reviewing research applications, which require an assessment of the extent to which the proposed activity will broaden the participation of such groups ("broader impacts"), as well as to efforts to involve such groups in the evaluation of the proposals themselves. According to Stanfield, NIH also pays close attention to these issues, relying on a number of mechanisms to promote broader participation, including the use of discretionary funding to support research among underrepresented groups and institutions.
From page 40...
... Since education researchers come from so many fields and orientations, panels focused on particular issues or problems in education can promote a collective expertise that builds interdisciplinary bridges and facilitates the integration of knowledge across domains. Hackett, drawing from his own experience participating on NSF peer review panels, asserted that establishing interdisciplinary trust is difficult when panels are ad hoc.
From page 41...
... Although well-suited as a professional development tool, standing panels have their drawbacks. Retaining the same people over time can have a narrowing effect on the advice given to agency leadership, which is why many standing panels have term limits.
From page 42...
... If peer review is to serve a professional development function effectively, agency staff and reviewers should take these responsibilities seriously and invest the time to fulfill them. Yet another issue aired at the workshop showed how difficult it can be to provide high-quality feedback.
From page 43...
... Knowledgeable staff can follow the process from beginning to end, substantively interacting with members of the field in ways that facilitate learning on both sides and result in work with tight alignment to agency goals. As Susan Chipman, director of the Cognitive Science Program at ONR, described the process, "ONR staff are the peers -- they review proposals and make recommendations for funding." Program officers at ONR often use multiple internal peers to judge research proposals, including potential consumers of the work, since ONR's work is very applied and mission-oriented.
From page 44...
... The agency mandates extensive documentation of peer review panels, requiring program officers to certify that they have completed parts of the process to the best of their ability and in accordance with relevant policy. To address charges that some investigators may not get the fair shake to which Whitehurst referred, Breckler pointed to a complex array of conflict of interest rules for program officers.
From page 45...
... The agencies represented at the workshop relied on a range of largely informal strategies to promote better proposals -- such as program officers talking with junior scholars about the grant-writing process -- and the degree to which this issue was addressed varied quite a bit. The resubmission procedure at NIH was the most formal described: with clear and comprehensive written feedback on the weaknesses of a submission, proposers get insights into how to improve their future proposals to the agency and are informed of specific guidelines for resubmitting a revised application in a future grant cycle.
From page 46...
... Negative experiences of many reviewers of education research proposals -- especially in the competitions studied in the evaluation of the former OERI by August and Muraskin (1998) and in testimony about peer review at OSEP to the President's Commission on Excellence in Special Education (2002)
From page 47...
... We also identify some of the alternatives they describe for allocating federal education research dollars, ultimately concluding that, despite its flaws, peer review is the best available mechanism for allocating scarce education research dollars. A persistent complaint about the peer review process is the possibility of cronyism -- that is, that engaging peers predisposes outcomes to benefit friends or colleagues with little or no regard for the actual merit of a given proposal (Kostoff, 1994)
From page 48...
... The main deficiency of earmarking is that it circumvents technical expertise, jettisoning altogether the principle that scientific quality ought to be the primary basis for the allocation of research dollars.
From page 49...
... Term limits, blended expertise on panels, and attention to systematic evaluation of peer review processes and outcomes are additional examples of the kind we address in Chapter 3 that can and should be used to counterbalance the flaws of peer review systems. In short, peer review as a system for vetting education research proposals in federal agencies is worth preserving and improving.

