field.1 A fair process also ensures that grant applications are solicited from as wide a variety of applicants as possible.

The remaining two attributes of a successful competitive grants program are relevance and flexibility. A relevant grants program provides funding for research that will most effectively further the goals of the program and meet national needs. Flexibility refers to the program’s capacity to shift in response to emerging fields of research. Almost by definition, emerging fields are highly relevant. However, flexibility also should be intrinsic to the research enterprise as a whole. Achieving flexibility can be difficult because of institutional inertia—the addition of individual programs adversely affects the resources remaining for other programs. Thus, a mechanism for periodically evaluating and revising programmatic areas is crucial in a successful competitive grants program.

Other attributes of a successful program are related to specific practical aspects of the program’s implementation. For example, the program must give awards of sufficient size, duration, and number to attract high-quality scientists and support important research. If the awards are too small or too short, many highly qualified scientists are likely to ignore the program in favor of other funding sources. Similarly, grants must be numerous enough to attract high-quality scientists, especially those at the beginning of their research careers. Grant acceptance rates below 10% suggest low chances of success and discourage many scientists from participating as either grant writers or reviewers. At very low funding rates, the collective effort that scientists expend in writing unsuccessful applications exceeds the effort expended by the scientists whose research is supported. Some have argued that such a program is a net burden rather than an asset to the scientific community as a whole (Chubin, 1998). Clearly, there are tradeoffs in the management of any research program (Chubin, 1994; Baldwin and McCardle, 1996). Implementation issues are analyzed in more detail in chapter 6.


Quality and value are terms commonly used to rank types of activities, and research is no exception. Specific metrics can be used to assess quality; alternatively, testimonials can be obtained from various sources to tap perceptions of quality. The latter approach generally was used by this committee to assess the quality and value of NRI-supported research. The former approach is addressed later in a committee finding on evaluation of quality and program accountability.

Evaluation of research has been a long-term challenge for the scientific community (NRC, 1998). In assessing the value of fundamental research, the private sector largely avoids such standard tools as return on investment and


Government science agencies use peer review in many ways. For additional information on the use of peer review, see Peer Review Practices at Federal Science Agencies Vary, General Accounting Office, 1999.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.