For almost 25 years the Congressionally Directed Medical Research Programs (CDMRP) has been funding medical research for health conditions that affect military service members and veterans, their families, and the general public. Congress, in response to advocacy groups and other interested parties, determines which programs will be funded and at what level each year. Beginning with the congressional appropriation for a Breast Cancer Research Program in 1992, CDMRP has expanded to encompass 29 research programs on diverse health conditions ranging from autism to prostate cancer (see Box 2-2). The research programs also encompass health conditions of particular concern to service members and veterans, such as Gulf War illness and military burns. CDMRP has grown to become the second largest government funding source for medical research after the National Institutes of Health (NIH). CDMRP manages more than $1 billion annually in congressional appropriations across its research programs, and since its inception it has managed almost $10 billion.
To ensure that the best research applications are funded, CDMRP has implemented a two-tiered review process. The National Academies of Sciences, Engineering, and Medicine’s committee was asked by Congress to evaluate that review process, to assess how CDMRP coordinates its research priorities with NIH and the Department of Veterans Affairs (VA), and to make recommendations to improve the review and selection of research applications. This chapter presents the committee’s findings, based on the evidence it gathered from the CDMRP leadership, staff,
and website; from its public sessions; and from the responses it received to its solicitation of input from a sample of the members of the 2014 programmatic panels and peer review panels. On the basis of these findings, the committee makes recommendations to improve the CDMRP process for reviewing and selecting research applications for funding and to enhance the coordination of research priorities with other organizations, particularly NIH and VA. Importantly, the committee was not asked to assess the quality of the applications that were funded, nor whether their outcomes meet the CDMRP goal of transforming health care through innovative and impactful research.
For some health conditions, such as breast cancer, prostate cancer, and neurofibromatosis, CDMRP has been a substantial funder of research for many years. CDMRP also has dedicated funding for health conditions that primarily affect service members, such as military burns and orthotics and prosthetics; however, the research results from the military-health focused programs are often applicable to and benefit the general public as well.
Based on the 1993 Institute of Medicine (IOM) recommendation that the U.S. Army Medical Research and Development Command (USAMRDC) should implement a two-tiered review process similar to that of NIH (IOM, 1993), CDMRP established a review process that has the following components: the development of an annual investment strategy, the solicitation of applications, the review of applications for scientific merit and program relevance, recommendations of applications for funding, and award negotiation and implementation within 2 years of receipt of the congressional appropriation.
On the basis of the information provided by CDMRP, discussions at the public sessions, and responses to the solicitation of input, the committee finds that, in general, the CDMRP review process has been effective in awarding research funding across its programs and is not in need of extensive revision. The comments received through the solicitation of input from both peer and programmatic reviewers, whether scientists or consumers, also indicated that the CDMRP review process worked well, and only minor modifications were suggested to enhance it. The committee generally concurs with these comments, particularly the positive responses regarding the engagement of consumers throughout the CDMRP review process. Nevertheless, the committee has recommendations in four areas to improve the functioning of CDMRP. Specifically, these areas are as follows:
- the development of a strategic plan for each research program
- more formal coordination between CDMRP and NIH and VA
- greater transparency of the CDMRP review process
  - stakeholders meetings
  - contractor support activities and policies
  - the use of ad hoc and specialty reviewers
  - feedback from programmatic reviewers
- improved standardization of CDMRP’s business practices
  - scoring criteria
  - term limits
These opportunities for improvement are interconnected. For example, the development of a strategic plan may call for and result in more coordination with other funding organizations, and more transparent stakeholders meetings could also encourage coordination with other agencies and organizations. In the sections below, the committee presents its findings and recommendations with regard to the four major areas recommended for improvement as well as its endorsement of the use of consumers in the CDMRP review process. There are other government agencies, such as NIH, that have developed best practices for improving transparency in their processes. CDMRP may wish to consider those practices as it addresses the following recommendations for developing a strategic plan and improving both the transparency and the standardization of its business processes.
The CDMRP process is innovative in that it includes consumer reviewers on both the peer review panels and the programmatic panels. Consumers are engaged at all levels of the CDMRP process, from obtaining appropriations from Congress for new and existing programs to determining the programmatic relevance of individual applications. This level of consumer engagement is distinctive among government research funding agencies. Other organizations such as NIH are moving toward greater involvement of consumers in their funding processes, including setting research priorities, but CDMRP has been doing this since its beginning.
CDMRP uses mentors for consumer reviewers on peer review panels to supplement their training, explain the review process, and assist them with their initial scoring and critiquing of applications. The committee finds this to be a good approach for new consumer peer reviewers, but it notes that mentors are provided neither to consumer reviewers on programmatic panels nor to scientist reviewers on either panel. Although CDMRP explained that these other reviewers are considered to be experts in their fields, may have served as reviewers for other organizations, and receive training, the committee suggests that CDMRP consider offering mentors to these other reviewers as well.
On the basis of information heard at its public sessions and from its solicitation of input, the committee finds that the inclusion of consumers in both tiers of the review is a positive aspect of the CDMRP review process that can benefit scientists and consumers alike.
Several CDMRP research programs have been in existence for many years. For example, the Breast Cancer Research Program has been funded since 1992, the Neurofibromatosis Research Program since 1996, and the Ovarian Cancer Research Program and the Prostate Cancer Research Program since 1997. In spite of the longevity of these and other CDMRP research programs, there is a continuing concern among stakeholders and the CDMRP leadership that Congress may not appropriate money for a particular program in any given year. Although the funding allocations are short term, few CDMRP programs have been discontinued, and, for the most part, the current programs have received relatively consistent appropriations throughout their existence. The CDMRP leadership and several stakeholders informed the committee that funding uncertainty precludes developing a long-term strategic plan that establishes research goals and promotes coordinated research efforts across other organizations. The committee disagrees with this reasoning.
Two CDMRP programs—the Breast Cancer Research Program and the Ovarian Cancer Research Program—have developed landscape documents that provide a snapshot of the epidemiology of these diseases and describe selected research initiatives and outcomes. Although each CDMRP research program has a vision setting booklet for its programmatic panel that presents a synopsis of research currently funded by CDMRP and other organizations, research gaps, and, less consistently, new research outcomes, the committee does not consider these landscape documents or the vision setting booklets to be equivalent to a strategic plan (see Chapter 4).
The committee finds that a program’s vision should not be confined to only 1 year and must look further into the future. A strategic plan is one way to develop a comprehensive vision for a program. Such a plan should identify research priorities for a program over the course of 3–5 years and specify the research initiatives, including award mechanisms, to achieve those priorities. Strategic plans should be based on an evaluation of the program’s current and past research portfolio that identifies successes, failures, and gaps. The plan should provide long-term, flexible approaches to build on past successes and address failures and gaps. It should also describe the resources (e.g., human, institutional, technological, and financial) needed to implement the initiatives. Finally, the
strategic plan could incorporate a systems approach to define program inputs and research outputs as well as the metrics to evaluate them. With the development of a strategic plan, the annual vision setting meeting might then become an opportunity to assess whether the outcomes of the previous year’s investment strategy align with the strategic plan and what modifications to the strategy might be necessary for the upcoming year’s funding to help meet the plan’s goals. The committee understands that having annual funding makes it difficult to guarantee that long-term plans will necessarily be implemented, but it also finds that this uncertain funding should not preclude the development of a strategic plan for a research program. Furthermore, the lack of a long-term strategic plan means that each year a research program establishes its priorities anew, making it difficult to track program progress and increasing reliance on the institutional memory of the programmatic panel members who serve for more than 1 year.
Developing a strategic plan is not a simple task; however, the committee suggests that one opportunity for establishing such a plan might be to leverage the knowledge and expertise available at the (suggested) periodic stakeholders meetings discussed below under transparency. A stakeholder or convening meeting held every 3–5 years that includes scientific experts, consumers, and advocates from a variety of governmental and nongovernmental organizations could support the development of a strategic plan, with subsequent stakeholders meetings to update the plan. Meeting participants could review the funding landscape across diverse agencies and organizations, identify short-term and long-term needs and opportunities (such as the need to develop new researchers in the field), and provide advice on whether and how to coordinate research priorities with other relevant organizations. Such a meeting would be an opportunity for new and small organizations and foundations, entrepreneurs, and start-up organizations to participate with the acknowledged leaders in the field and potentially provide a different perspective on the innovative and high-risk research that may be incorporated into the strategic plan. Having a plan for each program that emphasizes leveraging past and current research efforts and outcomes to address critical areas for future research may improve the likelihood that Congress will see the value of continued funding for a given research program.
Recommendation: Each CDMRP research program should develop a strategic plan that identifies and evaluates research foci, benchmarks for success, and investment opportunities for 3–5 years into the future. The plan should be re-evaluated and updated as necessary at the end of that interval. Each strategic plan should specify the mission of the program, coordination
activities with other organizations, research priorities, how those priorities will be addressed by future award mechanisms, how research outcomes will be tracked, and how the outcomes will inform future research initiatives.
The committee was specifically tasked with evaluating the coordination of CDMRP’s research priorities with those of NIH and VA. CDMRP’s approach to coordinating the research priorities for each of its programs with NIH, VA, and other organizations is discussed in Chapter 7, which describes the various formal and informal mechanisms that CDMRP program managers use to reduce the potential overlap of research funding with other organizations. The chapter also discusses the efforts of CDMRP program managers to interact with colleagues at other research institutions to identify research gaps and possibly duplicative research. The committee notes that coordination is best accomplished when all involved parties work together. In other words, while the focus of this review is on CDMRP’s activities, NIH and VA need to coordinate with CDMRP, and currently there is no requirement or tangible incentive for them to do so. Not only have health care organizations called on the federal government to be more active in coordinating medical research efforts (AcademyHealth, 2005; IOM, 2002), but there are substantial benefits to be gained by doing so. These benefits include the conduct of critical research within each organization’s area of expertise or focus, reduced administrative costs, less unnecessary duplication of research, and improved transparency in the processes for selecting the best medical research.
With regard to formal coordination, CDMRP has published guidelines for reducing the duplication of research, which focus on the use of the NIH RePORTER database to screen applications recommended for funding. The inclusion of CDMRP awards in the Federal RePORTER database is another mechanism that may be used to identify duplicative efforts. CDMRP science officers consult both databases during and after award negotiation and monitor them to identify redundant research. CDMRP is also a member of the International Cancer Research Partnership, which maintains a searchable database of cancer research worldwide. However, unless these databases are kept up to date by their respective users, it will be difficult to accurately identify duplicative research.
According to CDMRP, the coordination of its research priorities with NIH and VA generally consists of (1) informal discussions between program managers and scientific colleagues at NIH and VA, (2) having NIH and VA staff frequently serve as members of CDMRP programmatic pan-
els, (3) inviting presentations from NIH and VA researchers at the annual vision setting meetings, and (4) participation of the CDMRP program manager on a variety of interagency groups with other governmental and nongovernmental members. CDMRP program managers may also find information on other organizations’ research priorities through the organization’s website, literature searches, and through conversations with or surveys of programmatic panel members. As noted in Chapter 4, an overview of research funded by other organizations is often (but not always) included in the annual vision setting booklet for each CDMRP research program.
The committee finds that these coordination efforts by CDMRP program managers are helpful but insufficient for providing consistent and current information on what other government and nongovernmental agencies are doing with regard to funding research on a particular health condition. The variability in the mission, funding, and public awareness of each CDMRP research program may preclude a specific, one-size-fits-all approach for coordinating research priorities, but the committee finds that a more formal and systematic approach to the coordination of research priorities with other funding organizations, and particularly NIH and VA, is possible.
One way in which programmatic panel members may be systematically informed about research funded by other organizations is to require that all vision setting booklets include a section on research initiatives from other major organizations, including but not limited to NIH and VA. While this frequently occurs, it is not done for every program, and requiring it would also improve standardization. When no information on research initiatives or funding is available from those organizations, this should be indicated. The results of searches of databases such as grants.gov and Federal RePORTER could also be included or provided as an appendix or link. The vision setting booklet could also list the individuals consulted on behalf of an organization by the program manager in developing the booklet, along with a description of the outcomes of those consultations. Publishing the vision setting booklets on the CDMRP website in advance of the vision setting meeting would give the public, including other government organizations and researchers, an opportunity to provide input to the programmatic panel as well. The committee sees no reason why these booklets should not be publicly available and open for constructive comments from the public.
Other mechanisms might also help disseminate the results of CDMRP’s review processes. Formally notifying interested parties (NIH, VA, peer reviewers) of the applications that have been recommended for funding would address a frequent concern expressed by peer reviewers
(see the section Feedback from Programmatic Reviewers) and directly apprise NIH and VA of research to be funded by CDMRP.
Coordination efforts are not cost neutral. Additional CDMRP staff and contractor time and expertise may be required to implement many of these efforts, although the committee notes that some CDMRP research programs already undertake some of these efforts, such as consistently having representatives from NIH or VA on their programmatic panels. However, the advantages of developing a long-term strategic plan for each CDMRP program that includes input from the major funding organizations—not limited to just NIH and VA—could reduce the risk of funding duplicative research and enhance opportunities for identifying complementary or collaborative research.
Recommendation: Where there is a commonality in substantial research efforts by other organizations, whether federal or nongovernmental, CDMRP should have a formal mechanism to coordinate with these entities in a predictable, consistent, and standardized manner each year to learn of substantial or new areas of research on the health condition being funded or considered for funding by those other organizations.
CDMRP has made many aspects of its review process publicly available on its website (cdmrp.army.mil). For example, the website specifies the qualifications necessary to serve as a consumer reviewer, along with information on how to apply to become one. The website also lists available funding opportunities for all the research programs, provides a mechanism for receiving electronic alerts when program announcements are released, identifies peer and programmatic reviewers, posts the names and affiliations of applicants who have been recommended for funding, and publishes many program documents, including annual reports and research program booklets. The committee notes that CDMRP frequently updates its website to provide additional information to users; it commends these efforts at transparency and encourages their continuation.
There are four notable areas of the CDMRP review process, however, where transparency is lacking: stakeholders meetings, contractor support activities and policies, the use of ad hoc and specialty reviewers, and feedback from programmatic reviewers. On the basis of the information discussed in the following sections, the committee concludes that improving transparency in these areas would increase public knowledge of and engagement with the review process, beginning with the goals of the program, help
CDMRP obtain the best scientists, researchers, and consumers as reviewers, and possibly improve the quality of the applications it receives. It may be necessary for CDMRP’s contract and legal staff to advise the CDMRP leadership and program staff with regard to the committee’s recommendations for improving CDMRP’s internal processes and those of its contractors, particularly for issues such as the need to publicly disclose the ad hoc or specialty reviewers, the conflict of interest (COI) criteria for reviewers, and the availability of training and other materials currently claimed as proprietary by the CDMRP contractors. The committee did not include a member who was a lawyer and thus was unable to address whether there are any specific legal concerns pertaining to the public availability of CDMRP’s internal processes and documentation or the contractors’ activities and materials.
When a new CDMRP research program is funded by Congress, CDMRP assigns it to a program manager who begins the process of establishing a formal program (see Chapter 4). The program manager identifies stakeholders (e.g., expert scientists, clinicians, funders, consumers, and advocates) and convenes a meeting with them to explore the current state of research on the health condition and how the new CDMRP program might best address congressional intent and research gaps. The CDMRP process for identifying stakeholders and engaging them in a discussion of the health condition is not publicly available. Stakeholders meetings are not publicly announced either before or after they take place; the public at large is not invited to attend the meeting, either in person or otherwise, or asked to submit information; and the outcomes of the stakeholders meeting are not made public at any time. Although the CDMRP program managers stated that they try to obtain input from all relevant stakeholders, the committee is concerned that a meeting that is by invitation only and is closed to the public may restrict the number of interested parties, particularly those with singular or dissenting viewpoints. This restriction could preclude an open, wide-ranging, and even spirited discussion of opportunities and goals for the new program. In particular, the committee did not see the need to limit public access to the stakeholders meeting to only invited attendees.
There are a variety of ways to facilitate public access and input for stakeholders meetings. For example, the meetings could be announced on the CDMRP website, Web-based opportunities for input could be allowed, and stakeholders could electronically submit information, comments, and questions. Some of the research programs are for health conditions with highly engaged advocates, consumers, and researchers. To reach such a
broad and diverse audience, CDMRP should make an additional effort to involve and hear from the public as well as diverse stakeholders. Given the call for CDMRP to coordinate its research priorities with NIH and VA, it is particularly important that representatives of these organizations, if appropriate, be included in the stakeholders meeting. This type of meeting would help ensure that the program has strong support and a mission statement that reflects the broad stakeholder community. Furthermore, given the ever-changing medical research landscape and the concern about continuing congressional funding for a health condition, having a stakeholders meeting on a periodic basis (e.g., every 3 to 5 years) would allow new information to be incorporated into the process, identify new stakeholders, and provide an opportunity to refocus the program as necessary. A periodic stakeholders meeting may also be used to develop and later adjust a strategic plan for the research program, as was discussed in a previous section.
Recommendation: Stakeholders meetings for each research program should include an opportunity for public engagement prior to, during, and after the meeting, using a variety of mechanisms (e.g., Web-based). Furthermore, such meetings should be held about every 3–5 years, and notices about such meetings should be broadly announced in advance.
Contractor Support Activities and Policies
External contractors provide support to CDMRP program managers for many peer review and programmatic review activities. Many government agencies, such as the U.S. Environmental Protection Agency and the Agency for Toxic Substances and Disease Registry, use contractors for administrative and logistical support for review activities. What the committee did find unusual with regard to CDMRP contractor support was the lack of transparency about what the contractors do and about the roles and responsibilities of their respective staff. Both contractors declined to provide substantial input to the committee (see Chapter 1). The committee notes that for the 1997 IOM report on the Breast Cancer Research Program, the CDMRP support contractor at that time was helpful, providing written materials and answering questions from the 1997 committee. Although CDMRP contractors may need to maintain some proprietary information in order to remain competitive, there are many critical CDMRP contractor activities and materials (e.g., reviewer recruitment, training, and compensation) that the committee could not review because the contractors, rather than CDMRP, controlled or owned these data and materials. These materials include the handbooks for peer reviewers,
training materials, conflict of interest requirements, compensation policies, reviewer qualifications, and post-review surveys.
The committee has concerns about the contractors’ lack of transparency and the unavailability of their materials for review. First, in response to the committee’s solicitation of input, several scientist peer reviewers questioned the qualifications of some of their fellow scientist reviewers to evaluate their assigned applications. As noted in Chapter 3, CDMRP describes on its website in some detail the qualifications of and selection process for the consumer reviewers; these individuals are ultimately identified and recruited by the contractors. However, this transparency does not extend to the qualifications of and selection process for the scientist reviewers, who are also selected by the contractors. CDMRP was unable to provide the committee with specifics on how scientist reviewers are identified. The committee finds that the minimum qualifications for scientist reviewers on both the peer review and programmatic panels should be publicly available.
Second, by not controlling or owning its training materials (e.g., handbooks and videos) or the peer and programmatic reviewer survey forms and evaluation results, CDMRP has ceded these key responsibilities to its contractors. Written criteria for determining what constitutes a COI are developed by the peer and programmatic review contractors and are not available publicly, such as on the CDMRP website. Although CDMRP reviews and approves the training, COI criteria, and the reviewer evaluation materials, they are claimed as business products of the contractors and thus do not appear to be subject to any external review.
Increasing the transparency of its review process would offer CDMRP several advantages. Other government organizations, such as NIH and VA, and nongovernmental organizations, such as the Patient-Centered Outcomes Research Institute (PCORI), make information such as COI criteria, minimum qualifications to serve as a scientific reviewer, and compensation policies publicly available. By controlling or owning its review processes and materials and by making those processes and materials publicly available, CDMRP would be better aligned with other leading research funding organizations and would demonstrate that it is complying with the President’s call for more transparency and public participation in federal government operations (Executive Office of the President, 2009). Finally, by controlling or owning its training and other materials, CDMRP could maintain continuity in its processes should contractors change over time or administrative modifications become necessary. The committee notes that both of the CDMRP support contracts were recently re-awarded to the incumbents; it may therefore be some time before CDMRP is able to obtain control or ownership of the training and other materials.
Recommendation: To improve its transparency and business practices, CDMRP should determine how best to obtain the training and evaluation materials used by its contractors so that the materials may be periodically reviewed and revised as needed. Furthermore, at a minimum, CDMRP should establish and make publicly available its qualification criteria for serving as a scientist reviewer—both peer and programmatic—and its conflict of interest policies.
Use of Ad Hoc and Specialty Reviewers
Given the scope of several of the CDMRP research programs and the number of pre-applications and applications that need to be reviewed in a short period of time, the use of ad hoc and specialty reviewers is necessary to complete the review process in a timely and effective manner. In examining the CDMRP website for lists of peer and programmatic reviewers, the committee found only a few programs that identified these additional reviewers. For example, the committee requested that CDMRP provide a table of the total number of peer reviewers by research program for 2014. The CDMRP table was compared to the lists of peer reviewers posted on the CDMRP website by program for 2014; this comparison showed that 639 peer reviewers in the CDMRP table were not accounted for on its website.
As ad hoc and specialty peer reviewers may provide an overall score for an application and thus have a substantial impact on its rating, the committee finds the lack of acknowledgment of these reviewers to be puzzling. Providing the names of the ad hoc reviewers would increase public confidence in the uniformity of reporting reviewers in the peer and programmatic review process, and it would be consistent with the committee’s earlier recommendations for greater transparency in the review process. CDMRP may also consider including more information on its website about how and when ad hoc and specialty reviewers are used and selected. For consistency with its practice for peer and programmatic reviewers, CDMRP should also indicate the basic qualifications of ad hoc and specialty reviewers.
Recommendation: CDMRP should consistently and publicly identify ad hoc and specialty reviewers as is currently done for peer and programmatic reviewers.
Feedback from Programmatic Reviewers
There are three application review steps, conducted by two separate panels, in the CDMRP process: (1) pre-applications received in response
to a program announcement are screened by the programmatic panel; (2) full applications received in response to letters of invitation are scored and critiqued by peer review panels; and (3) full applications are evaluated by programmatic panels following peer review. As discussed in Chapter 4, applicants whose pre-applications did not result in an invitation to submit a full application receive no feedback on the reasons for that decision. Providing applicants with a standardized statement as to why their pre-application was rejected, such as an indication of which pre-application criteria were not met, would improve the transparency of the program.
All applicants who are invited to and do submit a full application, whether recommended for funding or not, receive scores and summary statements from the peer review. The committee finds this to be an appropriate level of feedback from the peer review stage and in line with the practices of other agencies that provide such feedback.
However, the committee does have concerns about the lack of feedback to applicants from the programmatic review. CDMRP has stated that after programmatic review some applicants may receive brief “snippets” indicating why they were not recommended for funding. Not all applicants receive such snippets, and there appear to be no criteria governing who receives them or what information they contain. Consequently, the provision of snippets appears to be entirely at the discretion of the CDMRP program manager.
Several peer reviewers stated in the solicitation of input that their assessments appeared to be ignored by the programmatic panel because applications they had deemed of lesser scientific merit were recommended for funding. These reviewers commented that they did not understand how the programmatic review worked, how peer review scores and summaries were used by the programmatic panels, or why peer reviewers were not informed as to which applications were ultimately recommended for funding. Some felt that their input was given insufficient weight in the programmatic review, particularly when a highly scored application was not funded.
For applications that are recommended for funding, CDMRP lists on its website the names of the applicants and their affiliations. Further information on an award may eventually appear on the CDMRP website after the award has been made, but this may take up to 1 year after the application is recommended for funding, and the information must be located through a database search of funded applications.
The transparency of the programmatic review process could be improved in two ways. First, the list of applications recommended for funding, including the names of the applicants (and co-applicants), their affiliations, and the project titles, should be made available electronically on eBRAP to all peer reviewers who reviewed those applications once the programmatic review is complete and the applicants have been notified.
Second, and perhaps more important, the transparency of the review process would be enhanced if applicants and peer reviewers were consistently given more feedback about the results of the programmatic review. A better understanding of what programmatic reviewers found lacking in an application would help applicants revise their applications for possible resubmission, should the proposed research fit one of the next year’s award mechanisms, and it would give peer reviewers more insight into the programmatic review process. Given that CDMRP has programmatic review guidelines for each award mechanism that detail how the programmatic review should be conducted for that award, the committee finds that the programmatic review overall score and panel discussion could be captured, summarized, and provided to the applicant along with the peer review scores and summaries that are already provided. Peer reviewers could receive the programmatic review summary as well.
Recommendation: Applicants whose pre-applications are rejected should receive a standardized statement explaining why. A programmatic review summary that includes more than simply the panel’s funding decision should be provided to applicants along with the peer review scores and summary statements. Furthermore, peer reviewers should be informed of which applications were recommended for funding by the programmatic panel.
CDMRP has attempted to standardize many of its practices and processes across its many research programs. These efforts include the use of vision-setting documents for each program; consistent terminology, scoring criteria, and formats for program announcements; the listing of reviewers’ names and affiliations on its website; and the identification of applicants recommended for funding. It is not an easy task to standardize practices and processes across programs that vary so widely in funding, public concern, longevity, and research interests. Moreover, rigid adherence to standardized processes, without the flexibility to deal with new or changing situations, may negate the benefits of having well-thought-out standardized aspects of the program. Nevertheless, the committee finds that CDMRP could modify two of its current practices, scoring criteria and term limits, to provide greater transparency and to standardize them with those used by other organizations, such as NIH.
Each CDMRP program announcement contains pre-application screening criteria if applicable, the evaluation criteria for peer review, and criteria to be used for the programmatic review. Peer review typically consists of scores for the individual evaluation criteria and an overall score (see Chapter 5).
In cases where peer and programmatic reviewers may not have sufficient expertise to comment competently on all aspects of an application, CDMRP may use specialty reviewers, who review portions or all of an application in such specialized areas as biostatistics and clinical trials. Specialty reviewers and consumer reviewers are tasked with scoring and critiquing only portions of an application, although they may choose to review and score the entire submission, that is, to assign an overall score to an application before the peer review meeting. Although these scores are preliminary and will be discussed and possibly revised at the peer review panel meeting, the committee is concerned that overall scores from reviewers who may not have read the entire application could have undue influence on the discussion and scoring of the full application at the plenary peer review meeting.
The scales for the individual evaluation scores and the overall score are inverted to prevent reviewers from simply averaging the evaluation criteria scores to produce an overall score. CDMRP encourages peer reviewers to be thoughtful in their scoring of applications, but several peer reviewers reported in their responses to the solicitation of input that the scoring system was confusing. For programmatic review, ratings appear to depend on the award mechanism and the specific research program, with some award mechanisms using narrative ratings (e.g., outstanding, good, poor) and others using a numerical scale. This variability makes it difficult to generalize across programmatic ratings and contributes to the concern about transparency at the programmatic level.
Other organizations that review research applications have developed and validated scoring systems that might be considered by CDMRP. For example, NIH uses the same 9-point rating scale (with 1 being exceptional and 9 being poor) for both criterion scores and overall impact scores (NIH, 2016d). PCORI uses a similar scale to score its applications, and it provides a description of what the scores mean and the range of applications to which they correspond (PCORI, 2016).
These approaches to scoring research applications might be preferred by peer and programmatic reviewers as well as by applicants. Furthermore, the use of a wider, whole-number scale might reduce the bunching of scores that occurs with the current CDMRP scales, as some peer and programmatic reviewers noted in the solicitation of input.
Recommendation: CDMRP should consider updating and standardizing its scoring system to reflect current review practices and to reduce confusion among reviewers and applicants.
Like CDMRP, many organizations that fund research, such as NIH, the National Science Foundation, EPA, and VA, use panels of experts to review applications. In some cases the panels are ad hoc, and each review session (annual or otherwise) begins with a new roster of experts. For standing panels, advisory boards, and other groups, the lengths of members’ terms may vary, and memberships may be renewed. For example, temporary members of NIH study sections serve for only one session, whereas members of NIH standing panels may serve for 4 to 6 years and must then rotate off the panel for at least 2 years before becoming eligible to serve again.
CDMRP provided data on the turnover of panel members across its research programs (see Chapter 3). The committee examined several of the CDMRP research programs specifically to determine whether any programmatic panel members had served for more than 6 years. A few had, and for at least two programs some programmatic panel members had served since the program’s inception, but such long service was rare.
For programmatic panels, the average annual turnover rates were low (4.3%) for consumer reviewers and higher (21.4%) for scientist reviewers (Salzer, 2016c). For peer review panels, the average proportion of new reviewers for the past 5 years was 43.3% for consumer reviewers and 47.8% for scientist reviewers (Salzer, 2016c). Although scientist and consumer reviewers are recruited on an annual basis if their expertise is needed, some have served as peer reviewers for several years.
Nevertheless, the committee suggests that CDMRP consider standardizing its practices (such as rotation off the panels) with those established by organizations such as the National Health Council. For example, peer or programmatic reviewers might serve for a specific term (e.g., 1–3 years) with multiple, but finite, renewals, such as serving for two 3-year
terms with a 1- to 2-year rotation off the panel before being eligible to serve again. Term limits would not preclude other representatives of the same organization from serving on the panel. Although long-serving panel members provide institutional memory, new perspectives, knowledge, and diversity would improve the review process.
Recommendation: CDMRP should have standardized limits on both the terms of service and the number of consecutive terms that peer and programmatic panel members, including chairs, may serve.
As described in Chapter 1, this is not the first time that CDMRP has been reviewed by an IOM committee. In 1993, an IOM committee provided recommendations on how to structure and focus the newly funded USAMRDC Breast Cancer Research Program. Then, in 1997, Congress asked the IOM to assess the progress that USAMRDC had made toward implementing the 1993 committee’s recommendations. The 1997 committee found that USAMRDC had established a working Breast Cancer Research Program, but it made a number of recommendations that it believed would improve the program’s organizational structure and review processes. The recommendations of both committees are summarized in Box 8-1.
In an effort to be comprehensive, the current committee reviewed the recommendations of both earlier committees. In general, it found that many of the relevant recommendations had been implemented by CDMRP, including the establishment of a two-tiered review process for scientific merit and then program relevance, the provision of summary statements and peer reviews to applicants, communication to the scientific community about the role of consumer reviewers, and the inclusion of programmatic evaluation criteria and exclusionary parameters in program announcements. The committee notes that, with the addition of pre-application submission and screening, the review process is now three-tiered and is conducted by two separate panels.
The committee also found that some of the earlier recommendations were beyond its scope of work, such as those pertaining to award negotiations and the flexibility of awardees to change aspects of their awards. Nevertheless, this committee determined that there were still a few recommendations from the 1997 committee that had not been addressed by CDMRP and that merit attention. Specifically, the 1997 recommendation that CDMRP “develop and implement a plan with benchmarks and
appropriate tools to measure achievements and progress toward goals of the Cancer Research Program annually and over time” echoes the committee’s current recommendation (discussed above) that each CDMRP research program develop a long-term strategic plan, with input from the stakeholders meeting, that is periodically updated. The other recommendation made by the 1997 committee that is endorsed by this committee is the need for improved feedback to applicants whose applications were not recommended for funding. The recommendation for greater reviewer feedback to applicants in the context of improved transparency is discussed earlier in this chapter.
Some of the earlier recommendations appear to have been overlooked by USAMRDC and CDMRP or to have been adapted over time. For example, the 1993 recommendation that the “fundamental criterion for awarding grants must be scientific excellence” (IOM, 1993) has evolved into the present practice, in which scientific excellence is one factor in rating an application but other considerations during programmatic review may result in the funding of an application that does not have the highest scientific score.
The 1993 report also recommended that the recruitment of scientific experts be an open process, as noted in the earlier section on Contractor Support Activities. The committee finds that CDMRP and its contractors do not make publicly available the minimum qualifications for serving as a peer or programmatic scientist reviewer.
CDMRP was established in response to grassroots advocacy efforts aimed at accelerating the search for a cure for breast cancer. It is now a long-standing medical research funding organization that manages an annual budget of more than $1 billion and encompasses 29 health conditions of concern to military service members and veterans, their families, and the general public. Although CDMRP has grown substantially in the breadth and complexity of its research programs, its management practices do not appear to have evolved to keep pace with that growth. The committee was tasked by Congress to evaluate CDMRP’s review process and its coordination of research priorities with NIH and VA. In general, the committee found the CDMRP process for reviewing and selecting applications for funding to be effective and not dissimilar to the processes used by NIH, PCORI, and other funding organizations. Nevertheless, improvements could be made in the review process, primarily in the areas of transparency and standardization, that would help align CDMRP more closely with NIH and other organizations. The committee’s recommendations are interconnected: improving standardization, for example, can also result in better coordination and transparency. The development of strategic plans that include an evaluation of each research program would likewise improve transparency and coordination with other research organizations. Although most of the coordination of research priorities among CDMRP, NIH, and VA is informal, there are formal mechanisms that could improve communication among the three organizations.
In closing, the committee would be remiss not to note that it was not asked to examine the results of more than two decades of medical research funding by CDMRP. The committee finds that there has been no known concerted effort by an external entity to determine whether the research funded by the CDMRP research programs has in fact helped those programs accomplish their individual missions. Thus, a more fundamental question remains unanswered: Has CDMRP, even with a well-conducted review process, produced the innovative and impactful research it strives to fund?