This chapter addresses the following key study question:
Key Question #2. To what extent are peer reviews of grant applications done in such a way as to enhance the quality of final results?
The National Institute on Disability and Rehabilitation Research’s (NIDRR’s) peer review process encompasses recruiting and training reviewers, conducting the review, and approving the awards. In the context of this study, as with the priority-setting process (Chapter 3), it is challenging to link the peer review process directly with specific output quality because the quality of grant outputs is the product of multiple complex factors, including the priority-setting process, funding levels, the peer review process, and the scientific quality of grantees. However, it is clear that the peer review process used by NIDRR contributes significantly to the success of the grant award program and the quality of the resulting outputs. Moreover, as described in The Future of Disability (Institute of Medicine, 2007), significant efforts to enhance the quality of NIDRR’s portfolio by strengthening the peer review process were implemented during the past decade.
This chapter begins by describing NIDRR’s peer review process. It then presents results of the committee’s assessment of the process. Finally, the chapter offers the committee’s conclusions and recommendations on this aspect of its evaluation.
DESCRIPTION OF NIDRR’S PEER REVIEW PROCESS
This description of NIDRR’s peer review process was compiled from existing documentation, such as legislation, the Federal Register, NIDRR and the U.S. Department of Education (ED) policies and procedures, NIDRR’s Long-Range Plan (LRP), and notices inviting applications (NIAs). In addition, the committee interviewed NIDRR and ED management to obtain a more thorough and cohesive understanding of the process.1
Legislative and Departmental Foundation
Title II, section 202, of the Rehabilitation Act (1973, as amended) states that NIDRR will perform scientific peer review of all applications for research, training, and demonstration projects. The peer review is to “be conducted by scientists or other experts in the rehabilitation field, including knowledgeable individuals with disabilities, and the individuals’ representatives” (p. 98). Federal employees are not allowed to be peer reviewers. NIDRR is to provide training for peer reviewers as is deemed necessary and appropriate.
Title 34 of the Code of Federal Regulations (Disability and Rehabilitation Research Projects and Centers Program, 2009) states, “The purpose of peer review is to insure that activities supported by NIDRR are of the highest scientific, administrative, and technical quality, and include all appropriate target populations and rehabilitation problems” (p. 217). Applications for awards of $60,000 or more must be reviewed by a peer review panel, with the exception of applications related to evaluation, dissemination of information, or conferences.
In addition, NIDRR follows the peer review requirements of ED. In accordance with ED’s Handbook for the Discretionary Grant Process (ED Handbook), NIDRR annually reviews and updates its procedures in ED’s Application Technical Review Plan (a description of the processes for identifying and involving reviewers, resolving conflicts of interest, working with the review panels, and selecting applications for funding) and maintains Grant Program Competition Files (a collection of all information, decisions, and documentation related to a competition) (U.S. Department of Education, 2009).
Key Personnel in NIDRR’s Peer Review Process
Key personnel in NIDRR’s peer review process include the competition manager, the panel monitor, and the agency’s peer review contractor.
1 The committee conducted interviews with NIDRR and ED management in four sessions during summer 2010 and one session in spring 2011.
Once an application kit2 has been published, NIDRR assigns a competition manager—a NIDRR staff member who is responsible for all aspects of the review process (generally the individual who wrote the description of the priority area; see Chapter 3) (National Institute on Disability and Rehabilitation Research, 2010b, 2010c). The competition manager arranges for the participation of additional NIDRR staff as necessary, recruits reviewers, confirms receipt of all applications, and performs a final screen of eligibility and responsiveness. In accordance with the Education Department General Administrative Regulations (EDGAR) (2008),3 NIDRR generally errs on the side of inclusivity, ruling out applications that are ineligible or nonresponsive and allowing peer reviewers to judge the merit of all remaining applications.
According to NIDRR management, the competition manager may also be the panel monitor. Duties of the panel monitor include managing the logistics of panel review with assistance from NIDRR’s peer review contractor (see below), monitoring the progress of individual reviews, and overseeing the panel discussion. Competitions involving multiple panels typically employ additional panel monitors from NIDRR, but may include panel monitors drawn from across the Office of Special Education and Rehabilitative Services (OSERS).
NIDRR uses a contractor to provide support for the grant application and review process (Synergy Enterprises, Inc., 2008). The peer review contractor performs an initial screen of the eligibility and responsiveness of applications prior to the competition manager’s final screen, provides logistical support for the panel discussions, administers the postmeeting survey of the reviewers, compiles reports as requested, and provides other support as required. Additional detail on the role of the peer review contractor is provided later in the chapter.
2 An application kit is a package containing application forms, the notice of final priority, the NIA, salient regulations, and the peer review criteria for a competition.
3 Available: http://www2.ed.gov/policy/fund/reg/edgarReg/edgar.html [November 22, 2011].
Stages in NIDRR’s Peer Review Process
NIDRR’s grant selection and peer review process consists of 12 stages:
1. Determine peer review criteria
2. Peer review kick-off meeting
3. Recruiting of peer reviewers
4. Preapplication meeting with potential applicants
5. Peer reviewer orientation
6. Prepanel correspondence
7. Panel discussion
8. Site visits
9. Prefunding meeting
10. Preparation and finalization of slate
11. Slate review
12. Slate approval and award
The process takes approximately 4-6 months. The stages of the process are described below.
Determine Peer Review Criteria
Selection criteria applied by peer reviewers to assess and rate applications are drawn from Title 34 of the Code of Federal Regulations and matched to the requirements of the competition. Each competition allocates 100 possible points across the criteria and subcriteria. With the exception of the Spinal Cord Injury Model System (SCIMS) program, for which the point allocation is prespecified, the distribution of points across the selected criteria is determined by NIDRR staff. Criteria related to the quality of the proposed research or development are always allocated a substantial percentage of the points (National Institute on Disability and Rehabilitation Research, 2010c). Past performance as a NIDRR grantee is not considered in the criteria for peer review, but is considered during the prefunding meeting (discussed below). The ED Handbook instructs reviewers to consider only the merit of the application itself; additional knowledge of the field or the applicant is not to influence the review. Annex 4-1 at the end of this chapter provides more detail on the grant selection criteria, as well as an example of the selection criteria for a Disability and Rehabilitation Research Project-General (DRRP) competition.
Peer Review Kick-Off Meeting

After publication of an application kit, the competition manager convenes a kick-off meeting with the contractor. During the kick-off meeting,
NIDRR staff determine the dates of panel discussions and other key dates leading up to the competition and discuss the division of labor for recruiting peer reviewers (National Institute on Disability and Rehabilitation Research, 2010c).
Recruiting of Peer Reviewers
NIDRR establishes peer review panels of five to seven members to review each submitted grant application (National Institute on Disability and Rehabilitation Research, 2006, 2009a, 2009b). The panel size depends on the size of the grants to be reviewed and the expertise needed. NIDRR uses standing panels—consisting of seven reviewers who serve as peer reviewers for up to 3 consecutive years following their initial appointment—for Field Initiated Project (FIP) competitions.4 Ad hoc panels are formed for all other competitions (National Institute on Disability and Rehabilitation Research, 2010b). According to NIDRR management, for Advanced Rehabilitation Research Training (ARRT), Small Business Innovation Research (SBIR), and Switzer Fellowship competitions, NIDRR draws on reviewers who have previously been supported by these program mechanisms and who have relevant knowledge and expertise in these program areas.
The competition manager tailors the composition of each review panel to competition requirements to ensure that the panel includes the expertise needed for the review (National Institute on Disability and Rehabilitation Research, 2010b). Competition managers identify potential reviewers through the Peer Review System (PRS), a searchable database containing information and resumes for thousands of potential peer reviewers maintained at the OSERS level, as well as through literature searches, networking at conferences, and personal connections (National Institute on Disability and Rehabilitation Research, 2010b). As part of the recruiting process, the competition manager screens potential reviewers for conflicts of interest and often is forced to rule out many qualified individuals. NIDRR management stated that it is not uncommon for competition managers to make 50 or more recruitment calls in order to find five reviewers. Additionally, conflicts of interest can develop after the initial screening, requiring that reviewers be replaced (sometimes at the last minute). Furthermore, delays in the approval and publication of NIAs often leave NIDRR staff with shortened timelines in which to recruit peer reviewers and hold the panel discussion.
NIDRR also strives to include qualified individuals with disabilities or their authorized representatives on review panels, as well as individuals from underrepresented populations. Since the number of individuals with disabilities
4 ED has strict rules related to conflict of interest, which impact the formation of NIDRR standing panels. FIP competitions are large enough to be exempt from the particular ED rules on conflict of interest.
who have the scientific credentials to conduct reviews is quite small, it can be difficult to represent the views of the various disability constituencies. At times, NIDRR will include individuals with disabilities without scientific expertise on review panels to lend the perspective of consumers5 if particularly relevant constituencies would otherwise not be included.
NIDRR also produces a general list of all reviewers who have served in a given year (National Institute on Disability and Rehabilitation Research, 2009b). Per ED policy, the list does not identify the specific competitions in which the reviewers participated and is made available upon request (U.S. Department of Education, 2009).
Preapplication Meeting with Potential Applicants
Several weeks after an NIA is published in the Federal Register, NIDRR arranges and publicizes a conference call to provide guidance on the peer review process and technical assistance to potential applicants (National Institute on Disability and Rehabilitation Research, 2010c; also noted by NIDRR management). During the call, NIDRR staff provide guidance on the application process but do not provide advice related to the content of potential applications. NIDRR staff also generally make time for one-on-one consultation if it is requested.
Peer Reviewer Orientation
The competition manager conducts a competition-specific orientation session for all reviewers (National Institute on Disability and Rehabilitation Research, 2010b). The session is conducted via telephone within a few days of reviewers’ receipt of applications and review materials. The session is set up by the peer review contractor and generally lasts 1 hour. It includes an overview of the review process, a review of the selection criteria to be used in evaluating each application, a review of the online system, a discussion of reviewers’ responsibilities, tips for conducting a good review, and inquiries to determine whether any reviewer has developed a conflict of interest.
Prepanel Correspondence

After the training session and prior to the review, the competition manager and/or panel monitor will correspond with the reviewers (National Institute on Disability and Rehabilitation Research, 2010c; also noted by NIDRR management). The correspondence is intended to ensure that reviewers
5 Consumers are defined in this report as individuals with disabilities and their family members and/or authorized representatives.
have everything they need to complete the review, that they are progressing through their initial reading of the applications, and that they are entering their initial scores and comments into the e-Reader system.
Panel Discussion6

The technical review of applications consists of two parts: individual review of all applications, followed by panel review (National Institute on Disability and Rehabilitation Research, 2009a, 2009b). The panel review generally takes place via teleconference and e-Reader over 2-3 days. Individual written reviews from each member of the review panel and a summary of the panel review documenting an application’s strengths and weaknesses are required before a grant can be awarded.
NIDRR has conducted review meetings exclusively via teleconference for more than 5 years. NIDRR management noted that in the past there was some resistance to conducting review meetings by teleconference as opposed to in person. However, NIDRR believes that the benefits of teleconferences, including reduced cost for the agency and reduced time commitment for reviewers (which has resulted in more experienced researchers agreeing to participate), far outweigh the drawbacks, such as a loss of rapport among reviewers and between NIDRR staff and reviewers. Additionally, NIDRR has noticed that reviewers with mobility impairments benefit greatly from teleconference reviews, although reviewers with vision and hearing disabilities find the teleconference reviews more challenging. NIDRR provides additional support as necessary in the form of interpreters, communication access realtime translation (CART) services, alternative-format materials, and other personal assistance to allow reviewers with disabilities to participate fully in the review.
Grant applications are mailed to reviewers at least 3 weeks in advance of the review whenever possible (U.S. Department of Education, 2009). Reviewers independently score and comment on each application using technical review forms, which are accessed and saved electronically via e-Reader. Scores (whole numbers only) are assigned to each factor of each criterion. Peer reviewers may adjust their own scores before or immediately following the review teleconference. A score of less than the maximum point value must be accompanied by a written rationale. A maximum score does not require a written rationale, but reviewers are encouraged to include comments. As described by NIDRR management, the number of applications
6 Panel discussion procedures described here are a synthesis of information from written sources provided by NIDRR (National Institute on Disability and Rehabilitation Research, 2009a, 2009b); interviews with NIDRR management; and direct observation of panel discussions by committee members Thubi Kolobe and Pamela Loprest and co-study director Jeanne Rivard.
each panel reviews and the size of the applications vary greatly by program mechanism. At one end of the spectrum, center grant panel reviews (such as those for Rehabilitation Research and Training Center [RRTC] and Rehabilitation Engineering Research Center [RERC] competitions) generally include 2 or 3 applications with a maximum recommended length of 125 pages each (375 pages maximum in total). Although center grant competitions usually receive only a few applications, each application is highly complex and technical. Additionally, many applications are longer than the maximum recommended length. At the other end of the spectrum, FIP applications are shorter (50 pages) and not as technical as center grant applications, but a single panel is likely to review 20 applications totaling at least 1,000 pages.
In addition to the general review of all applications, each panel member is assigned to be either the primary or secondary reviewer for certain applications. The primary reviewer presents the application for discussion and writes a summary of the discussion. The secondary reviewer provides commentary on the application and assists the primary reviewer in writing the summary.
All panel members participate in the discussion of each proposal (National Institute on Disability and Rehabilitation Research, 2009a, 2009b). Each application is discussed in turn, with each reviewer, beginning with the primary reviewer, presenting the scores and rationales for each criterion. Differences in scores among reviewers are discussed. If panel members’ scores are very different, the primary reviewer submits a description, taken from the discussion, of why this is the case.
During the teleconference, the panel monitor oversees the discussion; helps the panel maintain consistency from criterion to criterion and application to application; reviews scores, comments, and summaries for adequacy and accuracy; and provides information concerning policy, regulations, selection criteria, technical review forms, conflicts of interest, and confidentiality. The panel monitor does not participate in the substantive discussion of applications or related research issues.
NIDRR provides peer reviewers an honorarium of $200 a day, generally for 1 day of preparation and 3 days of reviewing.7 NIDRR monitors the compensation for peer reviewers provided by other federal agencies and believes its rates are competitive.
Site Visits

Title II of the Rehabilitation Act requires a preaward 1-day site visit for those competitions in which an award or awards of more than $500,000 will be made. NIDRR management stated that the site visit is considered a
7 Doris Werwie, personal communication, National Institute on Disability and Rehabilitation Research, April 14, 2011.
part of the peer review process, with a visit being conducted for the highest rated applicant. Multiple site visits may be made if the highest rated applicants are within one point of each other. Site visits are conducted shortly after the review and include one member of the review panel and one NIDRR staff member (National Institute on Disability and Rehabilitation Research, 2010c). Shortly before the visit, the NIDRR staff member submits to the applicant questions developed by the peer reviewers and by NIDRR staff. Applicants respond to the questions in writing prior to and during the visit.
Prefunding Meeting

Following peer review, NIDRR holds a prefunding meeting involving the NIDRR Director, the Deputy Director, the two division Directors, the agency’s scientific advisor, the competition manager, and interested NIDRR staff to develop specific funding recommendations (National Institute on Disability and Rehabilitation Research, 2009a). At the meeting, the panel monitor and/or the competition manager presents the rank order of the applications as well as summary information on the peer review process, including information from the site visit if applicable, with emphasis on the peer reviewer comments (National Institute on Disability and Rehabilitation Research, 2010b). Additionally, applicants’ proposed project activities, budgets, and past performance are discussed. From this discussion, program staff develop specific funding recommendations. According to NIDRR management, only in rare cases do the recommendations not follow the rank order established in peer review.
Preparation and Finalization of Slate Through Award
After the prefunding meeting, the competition manager transfers the recommendations for funding into a departmental format called a slate (National Institute on Disability and Rehabilitation Research, 2010c). The slate is then reviewed by Research Division management and approved by the NIDRR Director. It then must undergo an OSERS and ED clearance process similar to that for proposed priorities (see Chapter 3). After approval of a slate by the Office of the Secretary of Education, NIDRR’s Program, Budget, and Evaluation Division obligates the funds to the new grantee. Additionally, NIDRR provides comments and suggestions for improvement to unsuccessful applicants following a review.
NIDRR Competitions from Fiscal Years 2006 to 2009
NIDRR provided the committee with general data on the competitions held from fiscal years (FY) 2006 through 2009, including the number
of competitions held for each program mechanism, applications received per competition, applications reviewed per competition, and awards made per competition (National Institute on Disability and Rehabilitation Research, 2010a). Table 4-1 summarizes these data. Each year over this 4-year time span, NIDRR held an average of 25 competitions and received an average of 492 applications. NIDRR reviewed between 48 percent (RERC 2006 competition) and 100 percent (13 different competitions) of applications received for each competition, and awarded grants to between 6 percent (Field Initiated Project-Development [FID] 2006 competition) and 83 percent (Burn Model System [BMS] 2007 competition) of applications reviewed for each competition. However, the numbers of submitted applications, reviewed applications, and awards appear to vary greatly across years within the various program mechanisms. FIPs for research or development (FIR and FID) are by far the most competitive of the mechanisms, having the smallest proportion of grants awarded relative to number of grants reviewed (6 percent to 11 percent over the 4 years). The BMS and Disability and Business Technical Assistance Center (DBTAC) mechanisms (two mechanisms for which competitions were held for only a year of the analyzed data) appear to be the least competitive. Five out of the six BMS applications that were reviewed received awards; half of the DBTAC applications reviewed received awards.
Data Collection and Analysis by the Peer Review Contractor
NIDRR’s peer review contractor collects and manages the data from and about peer reviews, including the peer review scores themselves and peer reviewer feedback on the process. In 2008, NIDRR asked the contractor to analyze the scoring data it had collected for 18 of the competitions that occurred in 2007 (Synergy Enterprises, Inc., 2008). The contractor drew three notable conclusions. First, there appeared to be no bias as to the types of individuals and organizations that received NIDRR funding, although institutions of higher education were being funded slightly more often than other types of organizations. Second, some competitions, such as those under the DRRP and RERC program mechanisms, had a notably higher rate of ineligible applications. Finally, while all funded applications received an overall score of at least 77, the contractor observed a lack of consistency in the language used for the scoring criteria for each program mechanism and no consistency in the number of points assigned to each scoring criterion within a mechanism.
NIDRR’s peer review contractor surveys peer reviewers for feedback following every panel using the OSERS Panel Review Logistics Evaluation Form (Synergy Enterprises, Inc., 2010). Peer reviewers are asked to provide feedback on the prereview and review process, logistical support provided
by the contractor, special needs (only if any special accommodation was received), and suggestions for future reviews. Reviewers use a 5-point scale from poor (1) to excellent (5) to rate dimensions of the first three areas and provide comments on all areas.
NIDRR provided the National Research Council (NRC) with data and a summary of the data collected from 147 of the 163 panel members participating in fiscal year 2008 to 2009 peer reviews. Response forms indicated the 18 specific competitions to which they related, but reviewer names were not included. Of note, 5 panels included fewer reviewers than are recommended by NIDRR procedure. To supplement NIDRR’s summary, NRC staff conducted a reanalysis of the data on the prereview and review process, special needs, and suggestions for future reviews.
Data on the prereview and review process cover five dimensions: (1) completeness of materials, (2) quality of materials, (3) time allowed for initial review, (4) assistance provided by staff, and (5) participation by staff. The average rating for all dimensions was between excellent (5) and very good (4) except for the dimension time allowed, which was rated between very good (4) and good (3). The average ratings of the process across competitions for all dimensions ranged between 3.3 and 5, again except for time allowed, for which the ratings ranged from 1.7 to 4.8 and for which six ratings were lower than the lowest rating (3.3) for any of the other four dimensions.
Comments on the prereview and review process also indicated that peer reviewers spent an average of 27 hours preparing for the reviews and an average of 20 hours participating. Combined preparation and participation time ranged from a low of an average of 15 hours to a high of an average of 60 hours. It should be noted that some peer reviewers both reported less time spent preparing and gave low ratings to time allowed for initial review, indicating they had less time to prepare than they wished.
The last question about the prereview and review process asked reviewers to indicate whether the total of preparation and participation time was more than, less than, or about as much time as they expected to spend. Fifty-five percent of reviewers indicated they spent about as much time as they expected, 42 percent that they spent more time than they expected, and 3 percent that they spent less time than they expected.
The section of the Panel Review Logistics Evaluation Form on special needs includes space to rate interpreter services, CART services, alternative-format materials, readers or scribes, and other personal assistance if any of these were requested. Only six reviewers used this section of the form; five rated alternative-format materials, readers or scribes, and other personal assistance as excellent, and one rated alternative-format materials as fair.
Finally, many reviewers provided suggestions for future reviews. The most common suggestion by far was to reduce reviewers’ time commitment—for example, by reducing the length of proposals, the number of proposals per competition, and/or the number of criteria. Additional comments included observations about logistical support and recognition of good work done by NIDRR staff.

TABLE 4-1 NIDRR Competitions from Fiscal Years 2006 to 2009

|FY 2006||FY 2007|
|Mechanism||No. of Competitions||Apps. Received||Apps. Reviewed (% of received)||Grants Awarded (% of reviewed)||No. of Competitions||Apps. Received||Reviewed (% of received)||Awarded (% of reviewed)|
|Burn Model System (BMS)||—||—||—||—||2||8||6 (75%)||5 (83%)|
|Disability and Business Technical Assistance Center (DBTAC)||2||27||22 (81%)||11 (50%)||—||—||—||—|
|Disability and Rehabilitation Research Project-General (DRRP)||7||50||44 (88%)||8 (18%)||2||15||10 (67%)||2 (20%)|
|Knowledge Translation (KT)||3||12||7 (58%)||3 (43%)||—||—||—||—|
|Traumatic Brain Injury Model System (TBIMS)||1||2||2 (100%)||1 (50%)||1||25||22 (88%)||14 (64%)|
|Rehabilitation Research and Training Center (RRTC)||1||7||7 (100%)||1 (14%)||1||3||3 (100%)||1 (33%)|
|Rehabilitation Engineering Research Center (RERC)||4||25||12 (48%)||3 (25%)||8||40||32 (80%)||5 (16%)|
|Switzer Fellowship||1||55||48 (87%)||8 (17%)||1||35||31 (89%)||8 (26%)|
|Field Initiated Project-Research (FIR)||1||137||129 (94%)||12 (9%)||1||125||118 (94%)||16 (14%)|
|Field Initiated Project-Development (FID)||1||149||143 (96%)||8 (6%)||1||100||97 (97%)||9 (9%)|
|Spinal Cord Injury Model System (SCIMS)||1||34||32 (94%)||15 (47%)||—||—||—||—|
|Advanced Rehabilitation Research Training (ARRT)||1||7||7 (100%)||1 (14%)||1||8||8 (100%)||3 (38%)|
|Small Business Innovation Research I (SBIR-I)||1||90||90 (100%)||12 (13%)||1||25||16 (64%)||6 (38%)|
|Small Business Innovation Research II (SBIR-II)||1||22||22 (100%)||7 (32%)||1||12||12 (100%)||5 (42%)|

|FY 2008||FY 2009|
|No. of Competitions||Apps. Received||Reviewed (% of received)||Awarded (% of reviewed)||No. of Competitions||Apps. Received||Reviewed (% of received)||Awarded (% of reviewed)|
|1||13||8 (62%)||3 (38%)||—||—||—||—|
|7||32||31 (97%)||7 (23%)||2||19||16 (84%)||3 (19%)|
|2||13||9 (69%)||2 (22%)||1||3||3 (100%)||1 (33%)|
|2||13||12 (92%)||4 (33%)||—||—||—||—|
|8||36||18 (50%)||8 (44%)||7||23||23 (100%)||9 (39%)|
|8||28||22 (79%)||7 (32%)||3||8||8 (100%)||4 (50%)|
|1||33||27 (82%)||8 (30%)||1||68||50 (74%)||8 (16%)|
|1||97||93 (96%)||14 (15%)||1||124||108 (87%)||14 (13%)|
|1||74||72 (97%)||8 (11%)||1||131||106 (81%)||8 (8%)|
|1||9||9 (100%)||4 (44%)||1||15||13 (87%)||5 (38%)|
|1||61||57 (93%)||16 (28%)||1||111||76 (68%)||15 (20%)|
|1||16||16 (100%)||5 (31%)||1||28||15 (54%)||5 (33%)|
RESULTS OF THE ASSESSMENT OF NIDRR’S PEER REVIEW PROCESS
This section first describes observations of peer review panels by committee members and NRC staff. It then reports on perspectives of NIDRR staff and grantees on the peer review process, including challenges and suggestions for change. Finally, it relates peer reviewers’ experiences, perspectives, and suggestions for enhancing NIDRR’s peer review process. This information was gathered from (1) observations of three peer review panels, (2) interviews with NIDRR staff members, (3) surveys of NIDRR grantees, and (4) surveys of peer reviewers identified as having reviewed in fiscal year 2008 to 2009.
In designing this assessment, NRC staff reviewed such sources as the NSF Committee of Visitors peer review model (National Science Foundation, 2011), the National Institutes of Health (NIH) 2007-2008 Peer Review Self-Study (National Institutes of Health, 2008), and the RAND report Evaluating Grant Peer Review in the Health Sciences (Ismail et al., 2009).
Observations of NIDRR Peer Reviews
Two committee members and one NRC staff member individually observed (via teleconference) three peer review panels.8 Two of the observed panels consisted of fewer members than are recommended by NIDRR procedure. Two of the panels dealt with conflict-of-interest issues, which resulted in smaller panels and one reviewer being added only a few days before the review. One panel was under pressure to complete its work before the end of the fiscal year, so all reviewers had less than 2 weeks to read the applications. ED’s grant administration handbook (U.S. Department of Education, 2009) suggests that applications be mailed at least 3 weeks in advance of the review whenever possible. One panel reviewed six applications, one panel five applications, and the third panel three applications. One panel included two consumers, one of whom was also a researcher. One panel was scheduled for 3 days and the other panels for 2 days.
Committee and NRC staff members observed that the review process had a high degree of integrity. The panel members observed were generally appropriately knowledgeable in the field of disability and rehabilitation research, and the discussions and deliberations were thorough and probing. The panel members also displayed strength in the way they directly addressed disagreements in ratings.
8 Committee members Thubi Kolobe and Pamela Loprest and co-study director Jeanne Rivard observed peer review panels during summer 2010.
Each panel was led by the competition manager/panel monitor, who ably guided the panel members through a thoughtful and consistent review following the peer review procedures outlined above. The competition manager was observed reminding panel members that they could not use information outside the proposals to rate the applications, should not deduct points from multiple criterion areas for the same flaw, should apply criteria consistently across applications, and needed to justify all scores adequately. While operating within the scope of NIDRR procedures, the panel monitors differed in how they facilitated each meeting. One panel monitor, for example, asked reviewers to share overall scores first and then proceed to discussion of the criteria, while another asked the panel to begin with the criteria and build to the overall scores. The preference of the panel monitor thus appears to determine the precise manner in which the panel discussion is carried out.
Committee and NRC staff members observed that the workload for reviewers appeared to be quite burdensome. Many reviewers reported spending considerable time reviewing and rating the applications before the meeting; one reviewer said it took 7 hours to review a single application. One of the teleconferences ran for 3 days, starting at 10 AM and ending at 4 or 5 PM each day. Discussion of each application generally took 2 hours (although one application was reviewed in as little as 45 minutes). There was a 2-hour lunch break each day, but most reviewers used much of that time to revise and finalize technical review forms and panel summary statements, and reviewers also reported spending time in the evenings between review days completing these documents. Throughout the teleconferences, the reviewers expressed some degree of frustration and fatigue in scoring the last applications, especially when those applications were poorly organized.
The applications themselves were at times a burden on reviewers. NIDRR recommends that the project narrative section of the application be no longer than 125 double-spaced pages, but some narratives are longer, and there is no page limit for other sections of the application, such as the budget, assurances and certifications, resumes, and letters of support. Applications vary considerably in organization and clarity, with the best being organized clearly by criterion. Some reviewers had trouble finding content in poorly organized applications and lowered their scores accordingly; on several occasions, a reviewer needed to point out information missed by another reviewer to prevent an unjustifiably low score. One reviewer suggested that NIDRR require applications to be organized in the order of the criteria.
The observers perceived that reviewers had a good overall understanding of the selection criteria, with some exceptions possibly due to the nature of those particular competitions. Criteria such as “importance of the problem,” “responsiveness to the priority,” and “research hypothesis” engendered some discussion between the panel members and the competition managers. Also, one panel noted some redundancy in several criteria items addressing access and diversity.
Some concern was noted about the structure of the panel summary provided as feedback to applicants. It consisted mainly of a list of strengths and weaknesses and did not appear comprehensive enough to inform future applications and build capacity.
Perspectives of NIDRR Staff and Grantees
NIDRR staff and grantees were asked open-ended questions about their perspectives on the agency’s peer review process. Sixteen NIDRR staff members were interviewed in person and shared information about their roles in the process, their perspectives on its quality, and their suggestions for its improvement. Two-thirds of the interviewees were project officers or direct supervisors of project officers; the remainder held administrative positions. In addition, 28 grantees were asked to share their perspectives on NIDRR’s peer review process through one item on the grantee questionnaire that was completed during the summative evaluation. Narrative data were analyzed using standard qualitative analysis techniques (see Chapter 2 for a description of the methods). Following are the major findings that emerged from an analysis of the data provided by NIDRR staff and grantees.
Quality and Consistency
Most of the NIDRR staff interviewed had participated in NIDRR’s peer review process as competition managers and/or panel monitors. Respondents indicated that the process is very strong and that the hard-working nature of their colleagues contributes to its quality. Some stated that ED’s grant administration handbook is an important facilitator of successful and consistent peer reviews. However, a need was identified for standard operating procedures and better training to promote greater consistency in monitoring competitions involving multiple panels, such as FIP. NIDRR is aware of such questions about consistency and has recently begun debriefing competition managers and panel monitors after reviews for multipanel competitions to help promote consistency.
Challenges for Staff
Peer reviews were described as extremely time-consuming for staff in terms of both recruiting reviewers and running the panels. The small size of the field makes it difficult to find reviewers. A searchable database assists in recruiting reviewers, but it is somewhat dated, and its content has a broad focus designed to meet larger ED needs; it was suggested that a database more tailored to NIDRR’s needs would be useful. To alleviate the recruiting burden, it was also suggested that more standing panels be created and that the peer review contractor provide more up-front support. To address challenges in achieving adequate representation of minorities and individuals with disabilities on panels, more active recruitment was suggested both within and outside the disability field to build capacity.
Several interviewees noted how burdensome reviews are for the reviewers. One remarked, “There is something like 32 items and subitems that have to be scored individually. And our applications are long and so it is a strenuous process for reviewers.” Another commented, “I think every other review I will have somebody leave the review saying I can’t review for NIDRR again. Not because they didn’t enjoy the process. Not because they didn’t enjoy reading the applications but because it was too time consuming, too burdensome.” Another interviewee suggested that peer reviewers are undercompensated, even compared with reviewers for other ED programs, and suggested that their honorarium be increased.
Staff remarked that the agency is tackling the identified problems with the peer review process through a continuous quality improvement effort. The need has been identified to improve electronic systems for assembling panels, tracking reviewers and expertise, and managing meetings and ratings. At the time of the interviews, this issue was particularly critical because a new online system for managing scores had been found to be unusable and was being replaced temporarily by an e-mail-based system.
Peer Review Scoring System
Grantees commented primarily on the scoring system. They generally described NIDRR’s peer review process as good, noting that it had improved over time and was well managed by staff members. The main element of the process the grantees suggested could be improved was the scoring system. Grantees’ suggestions for changes to the system included placing more emphasis in scoring on the effort to accelerate translation and use and on the implications for policy change and for system design or service delivery interventions; amending the process so the lowest score for a proposal would be discarded; using a scoring system similar to that used by NIH, with greater emphasis on innovation; and awarding more points to applications that explain how the project’s costs are reasonable when weighed against the likely benefit of its outputs to the nation.
Perspectives of NIDRR Peer Reviewers
Peer reviewers surveyed were asked a series of closed- and open-ended questions inquiring about (1) their experiences with and perspectives on the NIDRR peer review process, (2) how the process compares with those of other federal research agencies, and (3) their suggestions for improving the process. Invitations to participate in the survey were sent to all individuals (a total of 156) who served on NIDRR peer review panels during FY 2008-2009. NIDRR provided the reviewers’ names and contact information, but not the competitions they reviewed. Four potential respondents were deleted from the list because their e-mail addresses were invalid even after a concentrated search. Of the 152 reviewers successfully invited, 121 responded to the survey (response rate of 80 percent). Not all of the respondents elected to answer the 2 open-ended questions; 58 percent responded to the first question and 82 percent to the second.
The committee analyzed quantitative data from the closed-ended survey items descriptively to determine frequencies and measures of central tendency. The narrative data were analyzed using standard qualitative analysis techniques as described in Chapter 2. Results of the quantitative and qualitative analyses of responses to the 10 closed-ended and 2 open-ended questions follow.
Panel Participation Rates and Types of Program Mechanisms
From 2005 through 2010, respondents served on a median of 3.5 review panels, with a range of 1 to 18 panels (16 individuals served on 1 panel and 1 individual on 18 panels). The most common types of program mechanisms reviewed were FIR (by 69 percent of respondents), FID (52 percent), RRTC (37 percent), DRRP (28 percent), SBIR-I (23 percent), RERC (17 percent), and SBIR-II (13 percent). Fewer than 6 percent of peer reviewers served on panels for BMS, TBIMS, and SCIMS. Between 9 and 12 percent of reviewers served on panels for fellowship and training grants (Switzer Fellowship and ARRT).
Ratings of the Quality of NIDRR’s Peer Review Process
Peer reviewers were asked to rate key elements of the NIDRR peer review process using a scale of 1 to 5, where 1 = poor, 3 = adequate, and 5 = excellent. Table 4-2 presents the percentage of reviewers who rated the key elements along this 5-point scale, arranged in order from most favorably rated (largest percentage rated 4 and 5) to least favorably rated (largest percentage rated 1 and 2). Although the survey included 121 respondents, a few did not rate every element, so the number of respondents for each element (with the exception of thoroughness of the deliberation) is less than 121. Additionally, 2 respondents answered “don’t know” on level of expertise, 3 responded “not applicable” on guidance in writing reviewer comments, and 1 responded “don’t know” and another “not applicable” on quality of the training; these responses are also excluded from the number of respondents in the table. The key element of consistency in the overall quality of the peer reviews across panels was rated only by reviewers who had served on three or more panels, so the number of respondents for that element purposely excludes 3 “don’t know” responses, 17 “not applicable” responses, and 7 reviewers who left the item blank.
Nine of the 13 elements were rated as “more than adequate to excellent” (4 to 5) by 61 percent or more of respondents. These included support and facilitation of the review panel by NIDRR staff (78 percent), integrity of the peer review process overall (76 percent), thoroughness of the deliberation (73 percent), use of reviewers’ time during the panel meeting (72 percent), level of expertise of the peer review panel members (68 percent), guidance in writing reviewer comments (64 percent), quality of the training to prepare for the review (64 percent), consistency in the overall quality of the peer reviews across panels (61 percent), and appropriateness of the evaluation criteria to applications under review (61 percent). Peer reviewers surveyed by the peer review contractor (as summarized in a previous section) likewise gave highly favorable ratings to assistance and participation provided by staff in the review process and to reviewer orientation.
Although close to half of the respondents rated the remaining items favorably (4 to 5 on the rating scale), 25 percent of respondents rated the elements “adequacy of time for review of materials before the meeting” and “appropriateness of scoring system to applications under review” as poor to less than adequate (1 to 2 on the rating scale). Peer reviewers surveyed by the peer review contractor likewise gave their least favorable ratings to the time allowed for initial review.
An additional survey question asked the 94 reviewers who had served on multiple panels since January 1, 2005, whether, generally speaking, the quality of NIDRR’s peer review process had changed over time. Two reviewers responded “don’t know,” and 3 reviewers responded “not applicable.” Of the remaining 89 reviewers who responded to this survey question, 24 percent indicated that quality had increased, 50 percent that it had remained about the same, and 26 percent that it had decreased.
TABLE 4-2 Peer Reviewers’ Perceptions of Key Elements of NIDRR’s Peer Review Process (121 respondents)

| Key Element | Number of Respondents | Poor (1) (%) | (2) (%) | Adequate (3) (%) | (4) (%) | Excellent (5) (%) |
|---|---|---|---|---|---|---|
| Support and facilitation of the review panel by NIDRR staff | 120 | 1 | 4 | 17 | 28 | 50 |
| Integrity of the peer review process overall | 117 | 2 | 2 | 20 | 41 | 35 |
| Thoroughness of the deliberation (i.e., grant scoring and discussion) during the meeting | 121 | 0 | 3 | 24 | 32 | 41 |
| Use of reviewers’ time during the panel meeting | 118 | 1 | 8 | 19 | 41 | 31 |
| Level of expertise of the peer review panel members | 118 | 2 | 9 | 21 | 36 | 32 |
| Guidance in writing reviewer comments | 117 | 1 | 7 | 28 | 44 | 20 |
| Quality of the training to prepare for the review | 117 | 0 | 4 | 32 | 39 | 25 |
| Consistency in the overall quality of the peer reviews across panels (if you have served on three or more panels) | 94 | 2 | 16 | 21 | 42 | 19 |
| Appropriateness of the evaluation criteria to applications under review | 119 | 4 | 9 | 26 | 41 | 20 |
| Appropriateness of scoring system to applications under review | 120 | 4 | 21 | 21 | 39 | 15 |
| Clarity of the criteria when applying them to applications | 119 | 4 | 10 | 34 | 41 | 11 |
| Ease of applying scoring system to applications | 119 | 1 | 16 | 34 | 35 | 14 |
| Adequacy of time for review of materials before the meeting | 120 | 8 | 17 | 32 | 27 | 16 |

On related survey items dealing with the perceived burden of reviewers’ workload, 44 percent of respondents stated that they had been assigned more applications than they would have liked on a given panel, and 44 percent stated that they had spent more time on each panel than they would have liked. On both of these items, however, more than 50 percent responded that the amount had been about right. Additionally, reviewers were asked about the quality of face-to-face meetings versus teleconferences. Fifty-one percent responded that face-to-face meetings were of higher quality, 40 percent that the quality was the same, and 9 percent that teleconferences were of higher quality.
A final survey question about NIDRR’s peer review process was posed to the subset of peer reviewers (55 percent) who had experience with other federal agencies’ peer review processes. Although 64 respondents answered the question, a few either responded “don’t know” or did not rate every characteristic, so the number of respondents for each characteristic is less than 64. Table 4-3 shows that close to half of these respondents considered the selected characteristics of NIDRR’s peer review process to be about the same as those of other federal agencies. More than one-quarter thought NIDRR’s process was stronger to some degree, while slightly less than one-quarter considered it weaker. The committee noted that the perceptions of the key elements listed in Table 4-2 among the 55 percent of respondents with other federal agency review experience differed little from those of the other respondents; the order of the key elements from most to least favorable remained generally the same.
Perceptions of NIDRR’s Peer Review Process
Peer reviewers responded to the following open-ended questions:
• Any additional comments you may have on NIDRR’s peer review processes would be useful. Please use the space below. (This question followed a table asking respondents to rate key elements of NIDRR’s peer review process.)
• What three things would you suggest to enhance NIDRR’s peer review processes?
As noted in the discussion of analysis of process data in Chapter 2, responses to these questions were analyzed using standard qualitative methods. Two overarching themes emerged during the qualitative data analysis. Theme 1, “Increase peer reviewer role satisfaction,” focuses on the impact of the peer review process on respondents’ perceived levels of role satisfaction. Theme 2, “Develop quality improvement initiatives,” focuses on respondents’ suggestions for enhancing the process. Both themes encompass aspects of peer review that affect the quality of final results.

TABLE 4-3 Peer Reviewers’ Perceptions of How the NIDRR Peer Review Process Compares with That of Other Agencies (64 respondents)

| Characteristic | Number of Respondents | NIDRR’s Are Much Weaker Than Those of Other Agencies (%) | NIDRR’s Are Somewhat Weaker Than Those of Other Agencies (%) | NIDRR’s Are About the Same (%) | NIDRR’s Are Somewhat Stronger Than Those of Other Agencies (%) | NIDRR’s Are Much Stronger Than Those of Other Agencies (%) |
|---|---|---|---|---|---|---|
| Quality of the review process | 63 | 6 | 21 | 36 | 21 | 16 |
| Transparency of the review process | 61 | 8 | 13 | 46 | 17 | 16 |
| Quality of the proposals reviewed | 62 | 2 | 19 | 47 | 16 | 16 |
| Fairness of the review process | 62 | 7 | 16 | 47 | 15 | 15 |
| Reliability of the ratings | 60 | 8 | 20 | 44 | 15 | 13 |
| Expertise of the panel members | 62 | 7 | 19 | 50 | 6 | 18 |
Theme 1: Increase Peer Reviewer Role Satisfaction
Respondents identified six sources of role dissatisfaction during the premeeting phase of the peer review process that may negatively affect the quality of the results: (1) the time provided to read and review applications, (2) the number of applications to read and review per panel meeting, (3) the length of applications, (4) the lack of choice of format in which to view applications, (5) the compensation rate, and (6) the online software used to comment on and score applications. Comments on each of these factors are summarized below.
Increase the amount of time to review applications prior to the panel meeting Eleven respondents expressed strong concern about receiving applications too close to the meeting date and advised NIDRR to send applications more than a few weeks before the meeting; two respondents suggested 1 month before and five respondents suggested 2 months before. Four respondents contended that there was a link among three of the six sources of peer reviewer role dissatisfaction; as articulated by one, “either give reviewers more time to review, or reduce the number of applications each person has to review, and increase the compensation rate.” The quantitative data support peer reviewers’ comments on dissatisfaction with the amount of time allowed to review applications prior to a meeting. Ratings for adequacy of time for review of materials before the meeting received the largest percentage of poor to less than adequate ratings (25 percent).
Decrease the number of applications reviewed per panel meeting Eight respondents suggested that NIDRR needs to reduce the number of applications each panel reviews; four contended that excessive numbers of applications reduced the quality of the reviews, and two of these four believed the numbers discouraged busy, experienced reviewers from participating. The four suggested that decreased numbers would lead to more time spent on each application, as well as more detailed comments (cited by two of the four) and usable feedback (stated by one of the four). Potential solutions suggested included determining “the maximum number of applications to review according to the complexity of each [program] mechanism” (cited by three respondents) and enhanced prereview screening to eliminate very weak applications (also cited by three). These comments on the burdensome number of applications assigned to each reviewer echo the quantitative data, which revealed that 44 percent of respondents thought they received more applications than they would like to have reviewed.
Reduce the page length of applications Nine respondents stated that another way to reduce workload would be stricter page limits for applications. Four of the nine suggested a limit of 25 pages, and one suggested 50 pages; the remaining four did not mention a specific limit. Three of the nine respondents mentioned that NIH limits applications to 12 pages. One of the nine noted the additional need for page limits for appendices, without offering a specific limit.
Provide reviewers with a choice of formats in which to review applications Eight respondents suggested that reviewers should be able to choose the format of the applications they review based on their preferences, such as printed copies, on CD, or through a password-protected website. Respondents noted that PDF files would allow them to use the search function, and Word files would allow them to embed comments through the track changes feature.
Increase the compensation rate Eleven respondents were concerned about NIDRR’s compensation rate. One respondent said, “there were simply too many applications to review for the amount of reimbursement provided.” Four respondents were concerned that experienced reviewers might decline invitations to serve on panels because of the amount of time required and the relatively low compensation rates. Three respondents suggested that NIDRR increase compensation to match the rate paid by other federal agencies, while three suggested increasing compensation to better match the amount of time actually spent preparing and reviewing. One respondent suggested that increased compensation might encourage experienced researchers to participate.
Improve the online scoring software Fifteen respondents were concerned about the lack of user-friendly online software for completing reviews. Eleven of these respondents described the new G5 software as cumbersome, with “excessively convoluted navigations.” One stated, “The G5 is inaccessible for reviewers with visual impairments … [and] my primary task should be to lend expertise to the analysis and scoring of the applications … the logistical problems detract from the integrity of the review process.” Respondents provided specific suggestions for improving the software, such as placing one criterion and all subcomponents on a single screen (suggested by five respondents), allowing reviewers to enter critiques and scores for each application in one file (suggested by four), accepting both PDF and DOC formats (suggested by three), and adding a search/find command so reviewers can more easily edit or rescore an application (suggested by three).
Theme 2: Develop Peer Review Quality Improvement Initiatives
Respondents identified seven areas in which improvements would likely increase the integrity and quality of the peer review process and the quality of outputs: (1) expertise of peer reviewers, (2) premeeting orientation and training, (3) evaluation criteria used to score applications, (4) scoring process, (5) guidance and group facilitation provided by NIDRR staff, (6) panel discussions, and (7) feedback to applicants. Comments on each of these areas are summarized below.
Enhance the level of peer reviewer expertise Three respondents stated that the level of peer reviewer expertise has greatly improved over the years. However, six respondents held a mixed view of panel members’ expertise, reporting variation in reviewers’ knowledge of the research process, of scientific methods and statistics (cited by four of the six), of the current literature on research and practice in rehabilitation medicine (cited by two of the six), or of disability policy (cited by one of the six). Additionally, four of the six respondents mentioned instances in which insufficient expertise among primary or secondary reviewers significantly affected the review process. Three respondents commented on the urgent need for more experienced researchers with disabilities to serve on panels, but also cautioned that inviting persons with disabilities who lack research expertise to serve as peer reviewers could compromise the integrity of the review process.
Suggestions for addressing these issues included expanding peer reviewer recruitment (cited by seven respondents) to include more reviewers outside the NIDRR network (cited by three) and federally funded researchers (cited by one), increasing the responsibility of the secondary reviewer to balance out a potentially inexperienced primary reviewer (cited by one), asking reviewers to select applications for which they could serve as a primary or secondary reviewer (cited by one), and bringing in an ad hoc topic or methods expert if necessary for an application (cited by one).
While the open-ended comments related to expertise were mixed, on the quantitative item 68 percent of respondents rated level of expertise of the peer review panel members as more than adequate or excellent. Only 11 percent rated this element poor to less than adequate.
Improve the quality of premeeting orientation and training Ten respondents urged NIDRR to institute several quality improvement initiatives to increase the effectiveness of premeeting training. Seven of the 10 stated that both experienced and novice reviewers could benefit from additional training to help them better understand the qualities of excellent, average, and poor-quality applications. Suggestions included developing examples of excellent, average, and poor-quality applications (cited by five respondents); creating separate training sessions for novice reviewers and others who need additional training (cited by four); providing separate training for nonresearchers (cited by three); developing online training modules that would be available at all times for reviewers to use as needed (cited by two); providing training throughout the calendar year (cited by one); assigning mentors to first-time reviewers (cited by one); and providing brief biosketches of each panel member (cited by one).
Eight respondents suggested that training sessions and premeeting materials need to include more information about how to distinguish one criterion from another. Other suggestions included covering how to translate the scoring criteria to actual applications (cited by seven respondents), how to use the online software (cited by six), the purposes of the priority to be funded (cited by three), how the panel will be facilitated by NIDRR staff (cited by two), and the specifics of what each criterion measures (cited by one).
Despite these suggestions for improving training, the quantitative data suggest that NIDRR’s training is already of high quality: no respondents rated it as poor, only 4 percent rated it as less than adequate, 32 percent rated it as adequate, and 64 percent rated it as more than adequate to excellent.
Improve the evaluation criteria used to score applications Thirteen respondents expressed varying degrees of frustration that the criteria are “ambiguous, overlapping, and redundant.” One of the 13 captured the views of the other 12, suggesting that “the distinctions between criteria are too subtle and are very difficult for reviewers to distinguish one criterion from another.” Five of the 13 respondents suggested that the redundant and ambiguous criteria could lead to inconsistent interpretation, wider discrepancies among reviewer ratings, and decreased ability to identify the best applications, all of which would diminish the quality of the review.
Six respondents stated that the criteria do not adequately evaluate innovation, feasibility, and scientific merit. One of the six found the criteria to be “less scientific and more political” in orientation. Three respondents suggested that the “plan of evaluation” criterion was overemphasized. Finally, two respondents described frustration that participation by both ethnic minorities and persons with disabilities was not adequately being measured by the “diversity” criterion.
To address the repetitive, ambiguous nature of the criteria, nine respondents advised NIDRR to “combine duplicative criteria” and to “specify what each criterion measures.” To address the scientific inadequacy of the criteria, four respondents suggested better matching criteria and program mechanisms, two suggested developing new and innovative
ways for applicants to demonstrate that diversity is considered in staffing configurations and reflected in contributions to diverse communities, and one suggested adding a new criterion—“impact on the field.” One respondent suggested that reviewers should not evaluate budgets.
While the peer reviewers’ comments highlighted flaws in the criteria, the quantitative data suggest the evaluation criteria are generally seen as adequate. Sixty-one percent of reviewers rated appropriateness of the evaluation criteria to applications under review as more than adequate to excellent, and 34 percent rated clarity of the criteria when applying them to applications as adequate.
Improve the scoring process used to rate applications on the criteria Ten respondents suggested there is considerable variation in how reviewers apply the scoring system to evaluate applications. Two of these respondents noted extreme cases of reviewers scoring all applications much higher or lower than all other panel members. One respondent attributed this discrepancy to the steep learning curve reviewers experience during their first meeting.
Additionally, eight respondents were critical of NIDRR’s weighting of the criteria. One suggested that in some cases, primary issues are not given enough weight, one that minor details are given too much weight, and one that the weighting approach is not flexible enough to be adjusted across competitions. One respondent also expressed concern that “there are some criteria that are allotted a total of one point—that is really splitting hairs and is not significant.”
To achieve greater scoring consistency among reviewers, five respondents advised NIDRR to “develop a standardized set of scoring procedures” and a set of training “materials using concrete examples of how to score applications appropriately”; one respondent suggested that these materials would help reviewers translate the scoring criteria to the actual applications. Four respondents offered ways to improve the user-friendliness of the scoring system, such as adopting a 0-10 scale for each criterion and beginning “with the lowest scoring applications to practice scoring as a group” (suggested by one respondent). Two respondents suggested that more weight be given to the criterion “implementation and outcomes,” while another advised NIDRR to give more weight to the “significance of the project and the track record of the applicant” and less to researcher qualifications, as “most applications have qualified personnel.”
Finally, two respondents expressed diametrically opposed views about the weight assigned to the “diversity” criterion. One stated that this criterion has too much weight such that the application’s scientific rigor could be diluted, while the other stated that this criterion has far too little weight such that applications could be approved with only token representation of relevant communities. Common ground could possibly be reached by
following the suggestion, described in the previous section, that NIDRR develop new and innovative ways to demonstrate diversity.
Address the inconsistent quality of the guidance and group facilitation provided by NIDRR staff
Six respondents noted the high quality of guidance provided by NIDRR staff. However, 10 respondents thought the quality of staff guidance varied, with some staff being more knowledgeable about the process (cited by four respondents), more organized (cited by one), and more efficient (cited by one) than others. Nine respondents stated that the NIDRR panel monitors with whom they interacted lacked sufficient skills in guidance and facilitation. Two of these nine also noted confusing inconsistency in the direction given by different NIDRR staff members. For example, “some staff advocate for reviewers to start from zero and add points for positive features, whereas others say one should start from the maximum number of points and subtract for specific weaknesses or problems.” Also, five of the nine respondents were concerned that staff lacked the knowledge base or skill set to recognize and manage reviewer bias.
Suggestions for improving the quality of guidance and facilitation provided by NIDRR staff included developing new training and materials on “managing situations where one reviewer dominates the discussion” (cited by four respondents), “including all reviewers in the discussion” (cited by two), “insisting on courteous behavior at all times” (cited by two), and encouraging discussion and allowing for disagreements (cited by two). While these are reasonable suggestions, it must be noted that support and facilitation of the review panel by NIDRR staff was the element rated most favorably by peer reviewers.
Reduce variation in the quality of the panel discussions
Reviewers noted several factors that affect the quality of discussions, including the discussion venue, unprepared reviewers, weak applications, and reviewer bias. The issue raised most frequently by respondents was the choice between face-to-face meetings and teleconferences (cited by 19 respondents). Fourteen respondents noted only the benefits of their preferred venue, while 5 others described the tradeoffs. Three respondents were mindful of the current and future cost savings derived from using teleconferences but were against their exclusive use. One of the 3 advised NIDRR to use face-to-face meetings for “large Center applications and all 5-year award programs.” Two respondents hoped that “video-conference calls would help to bridge the gap between the pros and cons of each venue.”
Among these 19 respondents, many held differing views on the effectiveness of meeting by teleconference. Opinions ranged from “teleconferencing is very adequate” (cited by 6); to “it’s possible to have an engaged teleconference, it’s just more difficult” (cited by 1); to “teleconference reviews
are a failed experiment” (cited by 6). The 19 respondents identified three pros of teleconferences, such as “The quality of the reviewers increase [sic] because there is no need for people to travel and significantly disrupt work and family schedules” (cited by 4). They also identified nine cons, such as “The quality and depth of the panel discussions has [sic] deteriorated since teleconferences have been substituted for in-person meetings” (cited by 9). Respondents also identified seven pros of face-to-face meetings, such as “Discussions are of a higher quality” (cited by 12). Respondents did not specifically mention any cons of face-to-face meetings, but they are implicit in several of the cited pros of teleconference meetings.
Additionally, four respondents reported frustration with unprepared reviewers. Three respondents expressed frustration at discussing very weak applications, which two suggested interfered with discussion of more worthy applications in greater depth. Three respondents were concerned about reviewer bias, one noting that “individual biases and not having the ‘right’ people at the table for the type of review being conducted unfortunately tend to influence the final outcome.”
Five respondents offered ideas for better ensuring reviewer preparedness, including requiring all reviewers to write summaries of each application (cited by two) and requiring all reviewers to submit their comments and scores before the meeting (cited by one). One respondent advised NIDRR to disqualify weak applications, leaving reviewers to evaluate “only those applications that have potential for good scores.” Two reviewers shared the idea that expanding the recruitment pool of peer reviewers would help combat bias and inject objectivity into the process. Three respondents suggested that forming more standing panels could increase the consistency and quality of individual competition reviews as panel members would have more experience. One of the three noted that standing panels could also improve quality across competition years as the same reviewers would evaluate resubmitted applications. Two respondents suggested limiting the number of discussion hours per day or scheduling more days but fewer hours, or possibly even holding panel meetings only during academic breaks (cited by one) as ways to improve quality and reduce reviewer fatigue.
Concerns about the quality of panel discussions were not as evident in the quantitative data. Seventy-three percent of respondents rated thoroughness of the deliberation as more than adequate or excellent, and 72 percent rated use of reviewers’ time during the panel meeting as more than adequate or excellent. Only 3 percent of respondents rated thoroughness of the deliberation and 9 percent use of reviewers’ time during the panel meeting as poor or less than adequate. Fifty-one percent of respondents thought panels generally took the right amount of time. Fifty-two percent suggested the quality of face-to-face reviews is better, 40 percent said quality is similar for face-to-face reviews and teleconferences, and 9 percent replied
that teleconferences are of better quality than face-to-face reviews. As noted earlier in the chapter in the section on panel discussions, NIDRR is aware of the mixed opinions concerning the quality of reviews held by teleconference but believes the benefits outweigh the costs.
Improve the quality of feedback provided to applicants
While one respondent stated that “reviewers’ comments to applicants seem useful,” four others shared the view that the “format and quality of the feedback to applicants lack depth and specificity.” Two respondents suggested that improving the quality and consistency of feedback to applicants could contribute to building capacity in the field. Five respondents suggested that NIDRR develop strategies to increase the quality of the feedback provided to applicants. Two suggested providing comments from all or most of the panel members and a discussion summary, similar to the NIH procedure; one suggested that reviewers take turns being note takers; and one suggested standardizing the format of the feedback. On a related quantitative item, the majority of peer reviewers (64 percent) rated the guidance they received from NIDRR in writing review comments as good to excellent.
CONCLUSIONS AND RECOMMENDATIONS
The sources consulted about peer review in this evaluation, including NIDRR staff, grantees, and peer reviewers, consistently described NIDRR’s peer review process as generally good, although still in need of some improvement. Additionally, half of the NIDRR staff members interviewed thought the peer review process was very strong overall, although time-consuming and burdensome for both staff and peer reviewers. More than half of the grantees who commented on peer review noted significant recent improvement to the process, although certain aspects, such as the scoring system, could still be improved. Finally, of the 64 NIDRR peer reviewers surveyed who had experience with reviews for other federal research agencies, close to half considered the selected characteristics of NIDRR’s peer review process to be about the same as those of other federal agency processes; more than one-quarter thought NIDRR’s process was stronger to some degree; and slightly less than one-quarter considered NIDRR’s process to be weaker than those of other agencies. The narrative comments of peer reviewers provide examples of areas in which these respondents suggested improvements could be made to increase the role satisfaction of reviewers and improve the quality of the process.
The committee also recognizes that NIDRR’s peer review process operates within the bounds of ED. As a result, some aspects of the process identified as potential weaknesses during the course of the review are controlled by ED, such as exclusion of grantees’ past performance as a criterion in peer
review, rules regarding the formation of standing panels, public identification of the competitions in which reviewers participate, and the ability of the ED-level database of potential reviewers to meet NIDRR’s needs.
The evidence presented indicates that NIDRR’s peer review process is generally good; nonetheless, there are significant opportunities for enhancements that would likely improve the quality of final project results. To address these concerns and strengthen NIDRR’s peer review process, the committee offers recommendations for enhancing the peer review infrastructure, reducing reviewer burden, and using consumers on review panels.
Enhancements to the Peer Review Infrastructure
While recognizing the care with which NIDRR’s competition managers assemble and facilitate review panels, the committee feels NIDRR’s peer review process is hampered by a limited pool of potential reviewers. NIDRR staff spend considerable time recruiting and screening potential reviewers. Competition managers regularly must manage potential conflicts of interest and rule out qualified reviewers. Despite staff efforts to recruit adequate numbers of reviewers, some panels are smaller than NIDRR’s recommended size; reviewers sometimes are added so close to the meeting date that they have inadequate time to prepare; and primary, secondary, and general reviewers lacking necessary scientific expertise may be participating in the reviews.
The committee concluded that improvements in the following areas of NIDRR’s peer review process would likely enhance the quality of project outputs: use of standing panels or formal cohorts of peer reviewers with specialized knowledge and expertise as appropriate for the program mechanisms, reviewer training, consistency in facilitating panel meetings, and the quality of feedback provided to applicants. The formation of more standing panels or cohorts would reduce the recruiting burden on NIDRR staff and provide a pool of reviewers with more experience with the review process, both of which may lead to more consistent and higher-quality reviews. While some reviewers surveyed for this study reported receiving high-quality training, the committee believes that enhancing this training would be a simple and effective way to improve the quality of the review process. Finally, because panel monitors have different preferences as to how panels should be run and varying levels of experience in guiding panels, considerable variation exists across competitions. The committee believes that, even if all competition managers adhere to NIDRR rules, such inconsistency results in confusion and negatively influences the overall quality of the process.
Recommendation 4-1: NIDRR should further strengthen the peer review infrastructure by expanding the pool of high-quality reviewers;
establishing standing panels, or formal cohorts of peer reviewers with specialized knowledge and expertise as appropriate for the program mechanisms; enhancing reviewer training; and improving the consistency of NIDRR staff facilitation of panel meetings and the quality of feedback provided to grantees.
Expanding the pool of peer reviewers could be pivotal in helping to prevent conflicts of interest, a challenge that NIDRR consistently faces during the recruitment of peer reviewers. Examples of potential ways to increase the reviewer pool include formally reaching out to new groups of researchers, such as individuals who review for the National Center for Medical Rehabilitation Research (NCMRR), NIH, and the Agency for Healthcare Research and Quality (AHRQ). The committee recognizes that, in accordance with Title II of the Rehabilitation Act, federal employees are not allowed to be peer reviewers. In other peer review settings, however, federal employees are not necessarily prevented from serving as reviewers, depending on their agency’s regulations. For example, with supervisor approval, NIH employees can serve as peer reviewers for other federal agencies as an official-duty activity, provided the competition involves no NIH funds (see http://www.niehs.nih.gov/about/od/ethics/duties/oda/index.cfm [November 21, 2011]). NIDRR should consider investigating whether, with similar restrictions, Title II could be amended to allow for federal peer reviewers.
NIDRR also should monitor how peer reviewers perform so that ineffective reviewers can be counseled on review procedures and/or not invited to serve on subsequent panels. The committee urges NIDRR to consult with other federal agencies that have similar peer review panels, such as NIH and NSF, for guidance on actions they have taken to ensure that reviewers are of high quality. Additionally, NIDRR should consider requesting that ED allow publication of reviewer names by competition, as is common practice in other federal agencies. This would improve the transparency of the process.
NIDRR has means for recruiting informal groups of peer reviewers based on their areas of expertise and experience with ARRT, SBIR, and Switzer grants. While the use of more formalized cohorts of these types of reviewers would be more challenging and require careful planning for other mechanisms, the committee encourages NIDRR to consider expanding the use of such cohorts to ease burdens on both reviewers and staff.
Peer reviewer training enhancements could include sharing reviews of successful grant applications, providing concrete examples of how to translate scoring criteria to applications, and requiring trainees to observe panels before they become official reviewers. Training enhancements should also take into account the different needs of inexperienced and experienced reviewers.
While support and facilitation of the review panel by NIDRR staff was
one of the highest rated elements of the peer review process, comments from peer reviewers pointed to the need for greater consistency across panel managers. As indicated in staff interviews, NIDRR is aware of this need and has been focusing on improving the consistency in the manner in which peer review meetings are facilitated. However, it is the viewpoint of the committee that a more formal quality improvement initiative is needed to improve the consistency of managing the panel meetings.
The guidance provided to peer reviewers by NIDRR in writing review comments was rated quite highly by peer reviewers, but several commented that the quality of the feedback actually provided to grantees was lacking in depth and specificity, and was inconsistent. NIDRR should consider other approaches to consolidating comments from reviewers in order to provide applicants with comprehensive feedback that will inform future applications.
Finally, the standard calendar proposed in Recommendation 3-3 in Chapter 3 might also enhance the peer review process by providing staff with a longer and more regular period within which to recruit reviewers. A standard calendar could also benefit applicants, who would know when the peer review process was to take place and when decisions on awards would be likely.
Reducing Reviewer Burden
Participating in NIDRR’s peer review process clearly is a significant burden for a large percentage of reviewers. Many reviewers spend more time than they wish in preparing, and review days are long and intense. This significant time commitment makes it less likely that qualified and experienced reviewers will participate. Indeed, the committee found the review process is so burdensome to peer reviewers as to threaten its quality. Reviewers surveyed also reported sometimes having insufficient time to review proposals, which could affect the quality of the review discussions.
Recommendation 4-2: NIDRR should streamline the review process in order to reduce the burden on peer reviewers.
NIDRR could reduce the burden on reviewers by implementing page length restrictions for applications (NSF and NIH use substantially shorter applications with strict page limits); simplifying the application format, scoring criteria, and software; and limiting the number of proposals to be reviewed by a single panel. Formats for applications should be standardized where possible. Additionally, the committee thinks NIDRR’s requirement that reviewers write a rationale only when giving submaximum scores is a potential disincentive for reviewers to give such scores. NIDRR should
consider options for addressing this issue, as well as for reducing the complexity of scoring. NIDRR is already taking actions to improve the software used to score proposals; these efforts should continue. In addition, NIDRR should consider improving the quality of reviews by giving reviewers more time to review proposals. Also, establishing standing panels and more formal cohorts, as well as enhancing training, as detailed in Recommendation 4-1, should reduce the burden perceived by reviewers. Finally, NIDRR may want to explore a blended model of in-person and teleconference meetings to reduce the burden imposed by teleconferences for some reviewers, as expressed in the survey.
Use of Consumer Peer Reviewers
To address its mission, NIDRR makes concerted efforts to include both scientists with disabilities and consumers without scientific expertise in the peer review process. Consumers can represent the experiences and views of their particular disability communities and can evaluate applications for relevance to their communities’ needs and concerns (although it is important to note that one consumer cannot necessarily represent the views of consumers from a different community). Peer reviewers who are not consumers can learn from the people the research is intended to benefit. Additionally, consumers may gain a better understanding of NIDRR’s research and peer review process through their participation and thus be able to inform their communities about NIDRR’s work.
All reviewers, including researchers and consumers, should have the appropriate expertise to review those elements of proposals to which they are assigned. If consumers are to review scientific aspects of proposals, they should have the relevant expertise, or NIDRR should provide them with relevant methodological training suitable to their background and qualifications. NIDRR should review and monitor the role of consumers and researchers in peer review to ensure that quality is not compromised.
Recommendation 4-3: NIDRR should continue to have consumer representation in the peer review process and establish procedures to guide the participation of those without scientific expertise.
While the involvement of consumers without scientific expertise in conducting peer review and helping to shape the research agenda is critical, there is currently no scientific consensus as to how this involvement is best accomplished. Therefore, NIDRR should assess the participation of consumers without scientific expertise in its peer review process. A model of such an assessment was conducted by Andejeski and colleagues (2002), who examined the impact of nonscientist consumer participation in peer
reviews of breast cancer research proposals on review panels that included 11-17 scientists and 2 lay consumers. The authors found little difference in proposal scores of the nonscientist consumers and the scientists. Pre- and post-panel opinion questionnaires concerning consumer involvement in the scientific review process showed significantly greater positive post-panel opinions of consumer involvement than negative opinions.
Furthermore, the use of consumers in peer review processes is extensive in many other agencies. Following are examples of models used by other agencies to involve consumers in peer review, which NIDRR might wish to review and consider for future use. (These examples are not intended to be exhaustive.)
The Office of Congressionally Directed Medical Research Programs (CDMRP), located in the Department of Defense, fully integrates consumers and scientists on peer review panels. According to CDMRP (2011), consumers “add perspective, passion, and a sense of urgency that ensures the human dimension is incorporated in the program policy, investment strategy, and research focus.” CDMRP employs a two-tiered system of review, involving first a scientific review by a peer review panel and then a programmatic review by an integration panel. Consumers are fully integrated in both panels.
Additionally, the National Institute of Mental Health (NIMH) at times includes consumers without scientific expertise in peer review. NIMH also uses a two-tiered peer review process. The first tier involves assessment of grant applications by review committees, which are composed of scientist reviewers and sometimes reviewers who are members of the general public, including consumers (National Institute of Mental Health, 2011a). The NIMH website (2011b) states that, “The role of public reviewers is to bring critical perspectives from individuals and family members who have been directly affected … and to enhance the capability of the review committee to evaluate the ‘real world’ relevance and practicality of each research application.” Public reviewers are instructed to focus their review on particular aspects of the grant applications, such as public health significance, feasibility, outreach, and protection of human subjects (National Institute of Mental Health, 2011a). Similarly, NIDRR might identify which of its review criteria are most relevant to consumers without scientific expertise, and then ask consumer reviewers to rate only these criteria. The second tier in the NIMH process involves review by the NIMH Advisory Council, which is also composed of both scientist and lay members.
Finally, the Juvenile Diabetes Research Foundation (2011) also utilizes a two-tiered system of review. The first tier is scientific review, during which “each individual project should be evaluated for its standalone scientific merit as well as its potential contribution to the whole program.” This phase of the process involves panels made up only of scientists. The second tier is lay review, during which a lay review committee uses its consumer experience
and the results of the scientific review to determine which applications are likely to have the greatest impact.
Andejeski, Y., Bisceglio, I.T., Dickersin, K., Johnson, J.E., Robinson, S.I., Smith, H.S., Visco, F.M., and Rich, I.M. (2002). Quantitative impact of including consumers in the scientific review of breast cancer research proposals. Journal of Women’s Health and Gender-Based Medicine, 11(4), 379-388.
Disability and Rehabilitation Research Projects and Centers Program, 34 C.F.R. pt. 350 (2009).
Education Department General Administrative Regulations, 34 C.F.R. pts. 74-86 and 97-99 (2008). Available: http://www2.ed.gov/policy/fund/reg/edgarReg/edgar.html [November 22, 2011].
Institute of Medicine. (2007). The future of disability in America. Washington, DC: The National Academies Press.
Ismail, S., Farrands, A., and Wooding, S. (2009). Evaluating grant peer review in the health sciences. Cambridge, England: RAND Europe.
Juvenile Diabetes Research Foundation. (2011). Information for reviewers. Available: http://www.jdrf.org/index.cfm?page_id=103243 [June 4, 2011].
National Institute of Mental Health. (2011a). Review process. Available: http://www.nimh.nih.gov/research-funding/grants/review-process.shtml [October 12, 2011].
National Institute of Mental Health. (2011b). Role of public participants in NIMH grant reviews. Available: http://www.nimh.nih.gov/research-funding/grants/role-of-public-participants-in-nimh-grant-reviews.shtml [October 12, 2011].
National Institute on Disability and Rehabilitation Research. (2006). Department of Education: National Institute on Disability and Rehabilitation Research—Notice of Final Long-Range Plan for Fiscal Years 2005–2009. Federal Register, 71(31), 8166-8200.
National Institute on Disability and Rehabilitation Research. (2009a). Briefing book for The National Academies. Unpublished document. Washington, DC: National Institute on Disability and Rehabilitation Research.
National Institute on Disability and Rehabilitation Research. (2009b). Peer reviewer instructions. Unpublished document. Washington, DC: National Institute on Disability and Rehabilitation Research.
National Institute on Disability and Rehabilitation Research. (2010a). 2006-2009 applications received, applications reviewed, and applications funded. Unpublished document. Washington, DC: National Institute on Disability and Rehabilitation Research.
National Institute on Disability and Rehabilitation Research. (2010b). Application technical review plan. Washington, DC: National Institute on Disability and Rehabilitation Research.
National Institute on Disability and Rehabilitation Research. (2010c). Overview of the NIDRR’s peer review process (what happens between OPP’s priority review and OPP’s slate review). Unpublished document. Washington, DC: National Institute on Disability and Rehabilitation Research.
National Institutes of Health. (2008). 2007-2008 peer review self-study. Washington, DC: National Institutes of Health.
National Science Foundation. (2011). Committee of Visitors (COV). Available: http://www.nsf.gov/od/oia/activities/cov/ [April 5, 2011].
Office of Congressionally Directed Medical Research Programs. (2011). Consumer involvement. Available: http://cdmrp.army.mil/cwg/default.shtml [June 4, 2011].
The Rehabilitation Act of 1973, as amended. Pub. L. No. 93-112. Available: http://www2.ed.gov/policy/speced/reg/narrative.html [January 21, 2011].
Synergy Enterprises, Inc. (2008). Draft task 4 analysis. Washington, DC: National Institute on Disability and Rehabilitation Research.
Synergy Enterprises, Inc. (2010). Analysis of peer review process—Synergy survey. Unpublished document. Washington, DC: National Institute on Disability and Rehabilitation Research.
U.S. Department of Education. (2009). Handbook for the discretionary grant process. Washington, DC: U.S. Department of Education.
Selection criteria are used by peer reviewers in assessing and rating applications submitted by researchers for funding. Title 34 of the Code of Federal Regulations (CFR)9 provides guidance for NIDRR’s peer review process, as well as selection criteria. Part 350 of the CFR outlines the selection criteria for the competitions administered through the DRRP primary mechanisms, including DRRP-General, DBTAC, KT, Section 21, BMS, and TBIMS, as well as for the program mechanisms ARRT, FIP, RERC, and RRTC. Part 356 provides selection criteria for Switzer Fellowship. Part 359 provides selection criteria for SCIMS. Part 75 provides selection criteria for SBIR. Each competition includes 100 possible points allocated across criteria and subcriteria. With the exception of Part 359, governing SCIMS, where the points are prespecified, the distribution of points across the selected criteria is determined by NIDRR staff. All criteria are displayed in Table A4-1.
The term “absolute priority” refers to those requirements that applicants must address to demonstrate their responsiveness to the requirements of the program mechanism (e.g., DRRP) or to the specific topic (e.g., telerehabilitation). The term “competitive priority” refers to requirements that can result in competitive preference, either by awarding extra points based on the extent to which the application meets the priority or by selecting an application that meets the priority over a similarly reviewed application that does not. An example is additional points being awarded to an application that includes effective strategies for employing and advancing in employment qualified individuals with disabilities.
Competitions under Parts 350 and 75 are not required to use all of the criteria, as certain criteria are not relevant to some competitions. NIDRR staff select the relevant criteria from the list provided in the CFR. As defined in the CFR, each criterion in Parts 350 and 75 contains subcriteria. As part
9 The electronic Code of Federal Regulations can be accessed at: http://ecfr.gpoaccess.gov/cgi/t/text/text-idx?c=ecfr&tpl=%2Findex.tpl [January 4, 2012].
TABLE A4-1 Selection Criteria from Title 34, Code of Federal Regulations
Title 34, Part 350:
Importance of the problem
Responsiveness to an absolute or competitive priority
Design of research activities
Design of development activities
Design of demonstration activities
Design of training activities
Design of dissemination activities
Design of utilization activities
Design of technical assistance activities
Plan of operation
Adequacy and reasonableness of the budget
Plan of evaluation
Adequacy and reasonableness of resources

Title 34, Part 356:
Quality and level of formal education
Previous work experience
Quality of a research proposal
The research hypothesis, methodology, and design
Resources, equipment, institutional support

Title 34, Part 359:
Project design (20 points)
Service comprehensiveness (20 points)
Plan of operation (15 points)
Quality of key personnel (10 points)
Adequacy of resources (10 points)
Budget/cost effectiveness (10 points)
Dissemination/utilization (5 points)
Evaluation plan (10 points)

Title 34, Part 75:
Need for project
Quality of the project design
Quality of project services
Quality of project personnel
Adequacy of resources
Quality of the management plan
Quality of the project evaluation
of recommending criteria for a competition, NIDRR staff also recommend which subcriteria are relevant. For each competition, points out of 100 are distributed across the chosen criteria. The points assigned to each criterion are then divided among the subcriteria for purposes of scoring. Box A4-1 contains an example of the selection criteria for a DRRP competition.
Part 350 also establishes additional considerations for FIP. Before funding is awarded, the Secretary of Education considers the extent to which applications that have been awarded 80 percent or more of the maximum possible points meet one or both of the following conditions: represent a unique opportunity to advance rehabilitation knowledge and/or complement current research or address such research in a promising new way. Part 75 does not include any additional considerations.
The criteria in Part 356 governing Switzer do not contain subcriteria. Based on peer review scores, the Secretary grades applicants as outstanding (5), superior (4), satisfactory (3), marginal (2), or poor (1). The Secretary
funds some or all of the applications that have been awarded a rating of superior or better (4-5). In making a final selection, the Secretary considers the extent to which outstanding or superior applicants present a unique opportunity to effect a major advance in knowledge, address critical problems in innovative ways, present proposals that are consistent with NIDRR’s Long-Range Plan, build research capacity within the field, or complement and significantly increase the potential value of already planned research and related activities.
Unlike the criteria in the other parts, Part 359 criteria governing SCIMS include point values (as can be seen in Table A4-1). The criteria in Part 359 do contain subcriteria for reviewers to consider, but the subcriteria are not scored; only the main criteria receive a score. In determining which applications to fund under this program, the Secretary also considers the proposed location of any project in order to achieve, to the extent possible, a geographic distribution of projects.
Example of Selection Criteria for Disability and Rehabilitation Research Project:
Center on the Effective Delivery of Rehabilitation Technology by State Vocational Rehabilitation Agencies to Improve Employment Outcomes (CFDA Number 84.133A-4)
Requirement for DRRP Projects:
To meet this priority, the Disability and Rehabilitation Research Projects (DRRP) must—
(a) Coordinate on research projects of mutual interest with relevant NIDRR-funded projects, as identified through consultation with the NIDRR project officer;
(b) Involve individuals with disabilities in planning and implementing the DRRP’s research, training, and dissemination activities, and in evaluating its work; and
(c) Identify anticipated outcomes (i.e., advances in knowledge or changes and improvements in policy, practice, behavior, and system capacity) that are linked to the applicant’s stated grant objectives.
Specific Criteria for This Competition:
The following selection criteria are used to evaluate applications under the DRRP program. The maximum score for all of these criteria is 100 points. The maximum score for each criterion is indicated in parentheses.
(a) Importance of the problem (8 points total).
(1) The Secretary considers the importance of the problem.
(2) In determining the importance of the problem, the Secretary considers the following factors:
(i) The extent to which the applicant clearly describes the need and target population (4 points).
(ii) The extent to which the proposed project will have a beneficial impact on the target population (4 points).
(b) Responsiveness to an absolute or competitive priority (8 points total).
(1) The Secretary considers the responsiveness of the application to an absolute or competitive priority published in the Federal Register.
(2) In determining the application’s responsiveness to the absolute or competitive priority, the Secretary considers the following factors:
(i) The extent to which the applicant addresses all requirements of the absolute or competitive priority (4 points).
(ii) The extent to which the applicant’s proposed activities are likely to achieve the purposes of the absolute or competitive priority (4 points).
(c) Design of research activities (40 points total).
(1) The Secretary considers the extent to which the design of research activities is likely to be effective in accomplishing the objectives of the project.
(2) In determining the extent to which the design is likely to be effective in accomplishing the objectives of the project, the Secretary considers the following factors:
(i) The extent to which the research activities constitute a coherent, sustained approach to research in the field, including a substantial addition to the state-of-the-art (6 points).
(ii) The extent to which the methodology of each proposed research activity is meritorious, including consideration of the extent to which—
(A) The proposed design includes a comprehensive and informed review of the current literature, demonstrating knowledge of the state-of-the-art (5 points).
(B) Each research hypothesis is theoretically sound and based on current knowledge (5 points).
(C) Each sample population is appropriate and of sufficient size (8 points).
(D) The data collection and measurement techniques are appropriate and likely to be effective (8 points).
(E) The data analysis methods are appropriate (8 points).
(d) Design of dissemination activities (8 points total).
(1) The Secretary considers the extent to which the design of dissemination activities is likely to be effective in accomplishing the objectives of the project.
(2) In determining the extent to which the design is likely to be effective in accomplishing the objectives of the project, the Secretary considers the following factors:
(i) The extent to which the methods for dissemination are of sufficient quality, intensity, and duration (4 points).
(ii) The extent to which the information to be disseminated will be accessible to individuals with disabilities (4 points).
(e) Plan of operation (6 points total).
(1) The Secretary considers the quality of the plan of operation.
(2) In determining the quality of the plan of operation, the Secretary considers the following factor:
(i) The adequacy of the plan of operation to achieve the objectives of the proposed project on time and within budget, including clearly defined responsibilities, and timelines for accomplishing project tasks (6 points).
(f) Collaboration (4 points total).
(1) The Secretary considers the quality of collaboration.
(2) In determining the quality of collaboration, the Secretary considers the following factor:
(i) The extent to which the applicant’s proposed collaboration with one or more agencies, organizations, or institutions is likely to be effective in achieving the relevant proposed activities of the project (4 points).
(g) Adequacy and reasonableness of the budget (4 points total).
(1) The Secretary considers the adequacy and the reasonableness of the proposed budget.
(2) In determining the adequacy and the reasonableness of the proposed budget, the Secretary considers the following factors:
(i) The extent to which the costs are reasonable in relation to the proposed project activities (2 points).
(ii) The extent to which the budget for the project, including any subcontracts, is adequately justified to support the proposed project activities (2 points).
(h) Plan of evaluation (8 points total).
(1) The Secretary considers the quality of the plan of evaluation.
(2) In determining the quality of the plan of evaluation, the Secretary considers the following factors:
(i) The extent to which the plan of evaluation provides for periodic assessment of progress toward—
(A) Implementing the plan of operation (4 points); and
(B) Achieving the project’s intended outcomes and expected impacts (4 points).
(i) Project staff (10 points total).
(1) The Secretary considers the quality of the project staff.
(2) In determining the quality of the project staff, the Secretary considers the extent to which the applicant encourages applications for employment from persons who are members of groups that have traditionally been underrepresented based on race, color, national origin, gender, age, or disability (4 points).
(3) In addition, the Secretary considers the following factors:
(i) The extent to which the key personnel and other key staff have appropriate training and experience in disciplines required to conduct all proposed activities (3 points).
(ii) The extent to which the commitment of staff time is adequate to accomplish all the proposed activities of the project (3 points).
(j) Adequacy and accessibility of resources (4 points).
(1) The Secretary considers the adequacy and accessibility of the applicant’s resources to implement the proposed project.
(2) In determining the adequacy and accessibility of resources, the Secretary considers the following factors:
(i) The extent to which the applicant is committed to provide adequate facilities, equipment, other resources, including administrative support, and laboratories, if appropriate (2 points).
(ii) The extent to which the facilities, equipment, and other resources are appropriately accessible to individuals with disabilities who may use the facilities, equipment, and other resources of the project (2 points).
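As a check on the point-allocation scheme described above, the criterion weights in Box A4-1 can be tallied to confirm that they exhaust the 100 available points. The sketch below is purely illustrative (the variable names are ours); the criterion labels and point totals are taken directly from the box:

```python
# Criterion point totals as listed in Box A4-1 (DRRP competition,
# CFDA Number 84.133A-4). Each value is the "points total" for
# criteria (a) through (j).
box_a4_1_points = {
    "Importance of the problem": 8,
    "Responsiveness to an absolute or competitive priority": 8,
    "Design of research activities": 40,
    "Design of dissemination activities": 8,
    "Plan of operation": 6,
    "Collaboration": 4,
    "Adequacy and reasonableness of the budget": 4,
    "Plan of evaluation": 8,
    "Project staff": 10,
    "Adequacy and accessibility of resources": 4,
}

# The subcriteria under each criterion partition that criterion's
# points; the criteria themselves must partition the 100-point maximum.
total = sum(box_a4_1_points.values())
print(total)  # 100
```

The same kind of tally applies within each criterion: for example, under criterion (c), the 6 points for subcriterion (i) plus the 5 + 5 + 8 + 8 + 8 points under subcriterion (ii) account for the full 40 points.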