Suggested Citation:"5 Program Management at NIH." National Research Council. 2009. An Assessment of the SBIR Program at the National Institutes of Health. Washington, DC: The National Academies Press. doi: 10.17226/11964.

5
Program Management at NIH

5.1 INTRODUCTION

The congressional charge to the National Academies was to assess the SBIR program at NIH, and to suggest possible areas for improvement.

In this chapter, we focus primarily on the latter: areas where NIH might make improvements to its SBIR program. In doing so, we draw on case studies, interviews with NIH staff and other stakeholders, and secondary materials, as well as data from the NRC surveys and other statistical sources.

The focus of the chapter is to provide an objective review of the management of the NIH SBIR program, with a view to providing recommendations for improvement. The latter are described in a separate chapter. The structure of this chapter follows the logic of the awards cycle at NIH, starting with outreach activities to attract the best applicants, moving through topic development, selection, and funding, and concluding with commercialization support and a discussion of metrics and data.

5.2 BACKGROUND

The NIH SBIR program began in 1983, soon after the SBIR program itself was launched. It has expanded steadily with the growth of extramural research at NIH, and effectively doubled over the past four years as NIH funding doubled. The program is now the second-largest SBIR program, after DoD's, and funded approximately $552 million in SBIR awards in FY2006.

Most of these awards are made as grants; about 5 percent are contracts focused on specific NIH needs. Unlike awards under the procurement-oriented programs at DoD and NASA, almost all NIH SBIR awards are not designed to generate results that are purchased by NIH.

The NIH program has a number of defining characteristics, some of which are addressed in more detail in the remainder of this chapter.

  • Investigator-initiated research. NIH is the only agency where the topic areas in the program solicitation (request for applications) are guidelines, not mandatory limitations on research topics.

  • Larger awards. NIH now consistently exceeds the SBA award size guidelines for Phase I and Phase II, utilizing a blanket SBA waiver to do so.

  • Peer-driven selection procedures. NIH appears to depend more than most other SBIR programs on external peer review for advice on award selection, although final decisions remain the responsibility of NIH staff.

  • Regulatory concerns. NIH is the only agency whose research often requires approval from the FDA before it can reach the market. This creates an important barrier to commercialization.

  • Multiple awarding components. Twenty-three Institutes and Centers (ICs) at NIH fund their own SBIR awards, using a range of procedures and with different degrees of integration with other programs.

Together, these characteristics give the NIH program a unique character, and have informed management of the program in a number of important ways.

5.3 OUTREACH

Outreach activities at NIH are extensive, compared to some other agencies, and have received significant attention from the NIH SBIR/STTR Program Office in recent years.

The activities appear in general to have had three primary objectives:

  • To ensure that SBIR attracts the most qualified applicants;

  • To reach geographical areas often perceived to be underserved; and

  • To reach specific demographic groups that are perceived to be underserved (e.g., businesses owned by women and minorities).

Mechanisms for achieving these objectives include:

  • National SBIR conferences, which twice a year bring together representatives from all of the agencies with SBIR programs, usually at locations far from the biggest R&D hubs (e.g., the spring 2005 national conference was in Omaha, Nebraska).

  • The national NIH SBIR conference, held annually in Bethesda, MD.

  • The annual Program Administrators’ bus tour. A swing through several “under-represented” states, with stops at numerous cities along the way. Participants always include the NIH Program Coordinators.

  • Web sites and listservs. NIH maintains an extensive Web site1 containing application information and other support information. A number of explanatory presentations are available online. NIH also allows users to sign up for a news listserv.

  • Agency publications and presentations. NIH does not appear to use print publications to any significant degree to publicize SBIR (except as NIH events are reported in other publications, for example at the state level). NIH does use electronic publications, such as the NIH Guide for Grants and Contracts, to publicize Funding Opportunity Announcements as well as the Commercialization Assistance Program and the Niche Assessment Program.

  • Demographic-focused outreach. NIH regularly participates in several conferences designed to reach specific demographics.

Overall, there are currently no metrics in place to determine whether the above three objectives have been met in the past or are now being met. Interviews at NIH suggest that the staff believes more outreach is required, and that raising the size of awards has been the most important recent NIH outreach initiative. Some staff members suggest that bigger awards attract better applicants.

NIH has strongly supported the SWIFT bus tour, and the NIH SBIR/STTR Program Coordinator has gone on all recent tours personally.2 Staff members claim to have noticed a spike in applications from visited states and regions, but have no empirical evidence matching bus tours with increased applications.

A review of IC Web sites indicates that they provide a range of online information, from the very basic to “fancy bells and whistles.” The ICs vary greatly in the resources and talent available to produce attractive and informative Web pages. It could therefore be helpful if the NIH SBIR/STTR Program Office developed a standard information package that the Institutes could then adapt for their particular programs, e.g., to display their own list of initiatives.

5.3.1 Attracting the Best Applicants

The NIH staff notes that average scores for SBIR awards have trended upward. (NIH scores range from 100 (best) to 500 (worst), so an upward trend indicates relatively weaker applications.) Some staff members have stated that the

1. Accessed at: <http://www.nih.gov/grants/funding/sbir.htm>.

2. SWIFT is a multistate bus tour periodically undertaken by SBIR Program Administrators from different agencies to fuel technology growth and development across different regions by promoting awareness of the SBIR programs.


rapid expansion of funding in the program, together with the upward trend in the scores of marginally funded applications, means that relatively weak applications are being funded.

This observation raises two questions:

  • Is this perception accurate—is the quality of funded SBIR applications low relative to that of applications receiving other NIH funding?

  • If so, does this mean that there are other, better-qualified companies that are not applying for SBIR?

Low relative scores. From discussions with staff, it appears that the paylines3 for SBIR awards at the different ICs are substantially higher than those for RO1 awards,4 and that these gaps have grown recently. This implies that projects funded through SBIR are receiving worse peer-review scores than projects funded through other mechanisms.

NIH management decided not to share scoring data with the research team, so it is difficult to determine whether or to what extent reality matches perceptions in this area. However, it seems likely that these different scores may well be the result of using a selection process that is primarily aimed at selecting academic applications for basic research and adapting it for use with SBIR, which has different objectives and indeed different selection characteristics. For example, commercialization plans are supposed to play an important role in selection for SBIR, but not for other NIH awards. It does not appear that program staff has undertaken research either to substantiate this perception or to investigate possible alternative explanations for differential scores between RO1 and SBIR applications.
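The payline comparison described above can be made concrete with a small sketch (the function name and the sample score lists are hypothetical illustrations, not NIH data; recall that NIH scores run from 100, best, to 500, worst):

```python
def payline(funded_scores):
    """Return the payline: the score of the worst-scoring application
    that is still funded. With 100 = best and 500 = worst, the 'worst
    funded' score is the maximum of the funded scores."""
    return max(funded_scores)

# Hypothetical score lists for funded projects under two mechanisms:
sbir_funded = [145, 180, 210, 228]
r01_funded = [120, 140, 155, 170]

# A higher SBIR payline means the marginal SBIR project scored worse
# in peer review than the marginal RO1 project.
print(payline(sbir_funded) - payline(r01_funded))  # 58
```

A positive gap here is exactly the pattern staff describe: the last-funded SBIR application sits further down the score scale than the last-funded RO1 application.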

New companies are applying. More than 30 percent of winning applications are from companies not previously funded by the NIH SBIR program.5 New companies participate in the annual conferences, and hits on the Web site continue to increase. The new entrants in the program illustrate the attractiveness of SBIR awards but do not address the qualifications of the applicant companies.

Burden on staff. There are “cultural” issues that may affect perceptions of project or company quality. In interviews and responses to the NRC Program Manager Survey, many NIH staff noted that SBIR applicants and awardees placed a disproportionately high burden on agency staff, compared to similar applicants and awardees in other programs. Michael-David Kerns of NIA may have expressed this issue most clearly, observing that “We spend a disproportionately large amount of time with program administrators interacting with both

3. The payline is defined as the score for the worst-scoring application that is still funded.

4. RO1 awards are grants made to individual researchers. They constitute the most common form of NIH award, and are also sometimes used as an informal comparison group for SBIR awards. However, as explained below, they are different, and comparisons between these groups are invalid.

5. See Section 3.2.3.2: New Winners.


potential and actual SBIR-STTR applicants.6 These potential and actual SBIR-STTR applicants send emails and telephone much more than other categories of applicants (for basic research grant programs at NIH-NIA), making tremendous demands upon the time of program administrators and the grants management specialists…. Some of the reluctance and the comparatively low regard for the SBIR-STTR Programs, is the amount of time that would-be applicants attempt to and actually engage program administrators in marketing and selling their project and product idea. Even after having explained, usually more than once, that program administrators at NIH-NIA are not in the position of “buying” any project and/or product, the SBIR-STTR potential applicants persist in marketing and selling their projects and products. NIH-NIA program administrators are not accustomed to and do not welcome attempts by individuals to “sell” anything.”7

5.3.2 Applications and Awards from Underserved States

Chapter 3 on program awards illustrated the extent to which awards have been concentrated geographically. A single zip code in San Diego has received more than twice as many awards as any other zip code in the country. Massachusetts and California alone account for 36 percent of Phase I awards 1992-2005.

Even though there has been some increase in awards to underserved states, data for FY2005 shows that six states received zero Phase I awards, and a further four states received one or two.8

A better approach to the issue of underrepresentation is to look at applications per scientist and engineer. This measure normalizes for the distribution of scientific and engineering talent, which should itself tend to predict applications and awards.

As Table 5-1 shows, there is wide variation in the number of applications per 1,000 scientists and engineers, indicating that scientists and engineers in some states use the SBIR program much more—in fact up to twenty times more—than those in other states.
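The rate reported in Table 5-1 is a simple normalization, which can be sketched as follows (the function name and the application and workforce counts are illustrative assumptions, not the table's underlying data):

```python
def applications_per_1000(applications, scientists_and_engineers):
    """Phase I applications per 1,000 scientists and engineers,
    the rate reported in Table 5-1."""
    return 1000 * applications / scientists_and_engineers

# Illustrative inputs: 120 applications from a state with a
# 24,000-person science and engineering workforce.
print(round(applications_per_1000(120, 24_000), 1))  # 5.0
```

Because the denominator differs so much across states, a state can rank low on raw application counts yet near the middle on this rate, which is why the two measures identify different sets of "underserved" states.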

This does raise some important practical questions for the NIH program. To begin with, it points to a somewhat different set of “underserved” states. While

6. The Small Business Technology Transfer Program (STTR) reserves 0.3 percent of federal extramural R&D funding (vs. 2.5 percent for the SBIR program) for competitive awards to facilitate cooperative R&D between small business concerns and U.S. universities and research institutions, with potential for commercialization. STTR was established as a companion program to the SBIR program, and is executed in essentially the same manner. There are, however, distinct differences. Most notably, each STTR proposal must be submitted by a team that includes a small business (as the prime contractor for contracting purposes) and at least one research institution. The project must be divided such that the small business performs at least 40 percent of the work and the research institution(s) performs at least 30 percent of the work. The remainder of the work may be performed by either party or a third party.

7. Response to NRC Program Manager Survey, April 2006.

8. See Section 3.2.4: Phase I—Distribution Among the States and Within Them.


TABLE 5-1 NIH SBIR Phase I Applications per 1,000 Scientists and Engineers

MA  50.5    NJ  13.7    MN  10.1    AL  7.4    NV  5.0
MD  33.4    HI  13.5    OH   9.9    TX  7.0    LA  4.8
UT  23.2    CO  13.5    IL   9.7    ND  6.8    NE  4.7
NH  23.1    CT  13.0    MT   9.7    FL  6.5    KS  4.7
CA  19.1    WA  12.7    DC   9.7    KY  6.4    AR  4.4
VT  18.7    NY  12.6    WY   9.3    IA  6.3    OK  3.9
RI  16.2    SD  12.3    NM   9.0    MO  6.1    ID  3.4
DE  16.0    PA  11.5    WI   8.0    GA  6.0    SC  3.1
VA  15.2    NC  11.0    TN   7.9    IN  5.7    MS  2.5
OR  14.5    ME  11.0    AZ   7.4    MI  5.7    WV  2.2
                                               AK  1.4

SOURCE: U.S. Census; National Institutes of Health.

states with low numbers of applications per scientist and engineer tend to have low numbers of applications overall and hence low numbers of awards, only five of the bottom ten states in Table 5-1 are also among the bottom ten states in overall awards.

Some underserved states have made substantial efforts to win more awards in recent years. This approach has been partly supported by the FAST program.9 While a comprehensive analysis of the FAST program is not available, interviews with state agency staff and program participants suggest that, despite its limited funding, the program has been successful in helping to generate additional applications.

Additional applications do not, however, always translate into increased awards. For example, the state of Louisiana has made significant outreach efforts that increased the number of Phase I applications to NIH from six in 1998 to twenty in 2001. During that period, however, the number of awards rose only from zero to two. More experience with the application process may generate a more positive outcome over time.

5.3.3 New Applicants

Awards and applications data from NIH (described in detail in Chapter 3) suggest that about 40 percent of applicants for Phase I have not previously won an NIH SBIR award, and that about 30 percent of Phase I awards go to these companies.

9. The Federal and State Technology Partnership (FAST) Program is operated by the SBA, and provides states with a limited amount of matching funds to be used to strengthen the technological competitiveness of small business concerns in states. See <http://www.sba.gov/sbir/indexfast.html>.

FIGURE 5-1 Percentage of winning companies new to the NIH SBIR program, 2000-2005.

SOURCE: National Institutes of Health.

Figure 5-1 shows that the percentage of winning companies new to the program has fallen slowly but steadily in recent years. However, this is likely explained by the fact that there are many more previous winners in the potential applicant pool each year.

5.3.4 Conclusions

In general, the data above support the hypothesis that the NIH SBIR program is open to new companies and continues to attract them, and that it is open to companies from outside the major biomedical research hubs in states such as California, Massachusetts, and Maryland. However, it is also worth noting that some at NIH, including NCI in its institutional response to the NRC Program Manager Survey, suggested that funding for this outreach was severely constrained:

We need to have annually committed funds to support a reasonable number of HSA and Grants Management staff to travel to the two national meetings as well as the annual NIH SBIR/STTR Conference which is now being held offsite. If the NIH Conference is held in Bethesda, then logistics funds are needed to support the Conference. Either funds should be made available from the SBIR/STTR set-aside for outreach, or Institutes should make a standing commitment to support these activities.10

10. NCI response to NRC Program Manager Survey, April 2006.


5.4 TOPICS

Like other agencies, NIH publishes areas in which it is interested in funding research, known as “topics.” These topics are published in the annual NIH Omnibus Solicitation. But unlike the other SBIR agencies, where technical topic descriptions tightly limit awards, NIH topics are guidelines, not boundaries. The agency is proud of this “investigator-initiated” approach. Researchers are encouraged to submit applications on any topic that falls within the broad mandate of the funding ICs—which together cover the entire universe of biomedical research.

This description of SBIR funding as “investigator-initiated” is broadly accurate. However, in recent years, an increasing percentage of awards have been made through alternative mechanisms. The Program Announcement (PA) mechanism operates through the regular selection procedure but marks certain areas as being of special interest to NIH; the Request for Applications (RFA) mechanism goes further and earmarks dollars within the SBIR set-aside specifically for selected topic areas. PAs now account for about 20 percent of Phase I awards, and RFAs for a further 5 percent. Both are discussed briefly below, and in more detail in Chapter 3.

5.4.1 Standard Procedure at NIH—The Omnibus Annual Solicitation

The Annual Omnibus Solicitation lists all the topics from all of the ICs at NIH (and from two other HHS agencies participating in SBIR, CDC and FDA, which use NIH to manage their SBIR programs). The Solicitation describes areas in which research applications are encouraged, but applications outside these topic areas are also welcomed. The topics listed in the annual solicitation are broad guides to the current research interests of the ICs.

These topics are developed by individual ICs for inclusion in the annual Omnibus Solicitation. Typically, the NIH SBIR/STTR Program Office sends a request to the individual Program Administrators, the SBIR points of contact at each IC. These Program Administrators in turn meet with division directors and determine the focus of SBIR topics within the IC.

Division directors review the most recent Omnibus Solicitation (with their staff), and suggest changes and new topics based on recent developments in the areas of particular interest to the IC, or agency-wide initiatives with implications within the IC. The revised topics are then resubmitted for publication by the SBIR office at the Office of Extramural Research (OER), which provides a further review.

5.4.2 Procedures for Program Announcements (PAs) and Requests for Applications (RFAs)

PAs and RFAs are NIH’s version of the mission-driven approach to topics used in particular by the procurement agencies—DoD and NASA. Essentially,


they are tools through which the Institutes and Centers (ICs) can encourage firms to propose projects that meet IC research priorities.

The two types of announcement indicate different levels of IC interest. RFAs announce high-priority research areas for which the IC has set aside SBIR funding; in effect, they operate much like the more rigid topics at other agencies.

PAs are simply announcements of interest—applications received in response go through the same SBIR application process as other applications. However, as described in Chapter 3, ICs may announce that awards made under a PA can run for a longer period (several additional years) and for more money than the standard guidelines, or even than the average award at NIH. While PA applications go through the same selection process as other SBIR applications, ICs may exercise discretion and decide to fund an application under a PA over other better-scoring applications. Discussions with agency staff suggest that this occurs only at the margin (i.e., a decision between two projects both close to the payline).

RFAs indicate greater IC interest in two respects. First, applications in response to an RFA compete for a separate pool of SBIR funding that the IC carves out of its general SBIR pool specifically to serve the RFA. Second, these applications are not selected through the normal Center for Scientific Review (CSR)11 process. Instead, RFA applications go through a separate review process, normally internal to the relevant IC.

Both PA and RFA announcements are published by one or more ICs and reflect the top research priorities at the ICs. NIH tries to ensure that while PAs and RFAs define a particular problem, they are written broadly enough to encompass multiple technical solutions to the defined problem.

PAs and RFAs appear to be the result of efforts to develop a middle ground between topic-driven and investigator-initiated research. Essentially, by layering PA/RFA announcements on top of the broad, standard solicitation, NIH seeks to focus some resources on problems that it believes to be of pressing concern, while retaining the flexible investigator-initiated approach that has served the agency well. In a recent interview, the NIH SBIR/STTR Program Coordinator indicated that NIH plans to increase the percentage of SBIR funds allocated to more targeted research through these mechanisms.

5.5 SELECTION

The peer review process at NIH is by far the most elaborate of any SBIR agency. It is operated primarily through the Center for Scientific Review

11. The Center for Scientific Review manages the review process for all NIH awards, except the small number managed in-house by individual ICs (such as the SBIR RFAs).


(CSR). CSR is a separate IC which serves only the other ICs—it has no direct funding responsibilities of its own.

The system has been criticized on a number of fronts, most notably for being inhospitable to innovation,12 and because tests of peer review processes elsewhere in biomedical research have identified a significant degree of randomness in results.13 Nonetheless, peer review is deeply entrenched at NIH, and the selection of SBIR awards operates through the peer review process that has been implemented agency-wide.

5.5.1 Study Sections

Applications for NIH SBIR awards are received at CSR and assigned to a particular study section (as review panels are known at NIH) based on the technology and science involved in the proposed research. Panels can be either permanent panels legally chartered (established and defined) by Congress, or temporary panels convened by NIH, called Special Emphasis Panels (SEPs). Most SBIR applications are assigned to temporary panels, many of which specialize in SBIR applications only.

Specialized panels at NIH are increasingly used because the requirements for assessing SBIR applications—notably the commercialization component—are quite different from the analysis required to assess the basic research conducted under other NIH grant programs. However, several respondents to the NRC Program Manager Survey at NIH noted that some study sections did consider all kinds of applications, and they did not believe this was the optimal way to review SBIR applications. A program manager at NCI observed that “More and more mixing of mechanisms is occurring in study sections once devoted to SBIRs, thus diluting the focus.”14

CSR is organized into four divisions, each of which is divided into Integrated Review Groups (IRGs) by science/technology area (e.g., infectious diseases, immunology). Each IRG manages a number of study sections.15 Neither CSR nor the study sections are organized by disease or by IC—they reflect scientific distinctions only.

Special Emphasis Panels (SEPs) are reconstituted for each funding cycle. Almost all SBIR applications are reviewed by SEPs, which have a broader technology focus than the permanent chartered panels. Members can attend no more than 12 SEP study sections in 6 years. Section membership shifts with scientific trends.

12. D. F. Horrobin, “The philosophical basis of peer review and the suppression of innovation,” Journal of the American Medical Association, 263:1438-1441, 1990.

13. T. Jefferson, et al., “Measuring the Quality of Editorial Peer Review,” Journal of the American Medical Association, 287:2786-2790, 2002.

14. Response to NRC Program Manager Survey, April 2006.

15. For example, the immunology IRG has seven permanent and two temporary study sections.


The second kind of study section, the chartered (permanent) study section, usually has a narrow technical focus (e.g., host defenses, innate immunity). Most sections are chartered, and their members are semi-permanent, sitting for 4-8 years out of every 12.

Most SEPs draw the majority of their applications from a subset of ICs. For example, the immunology IRG covers applications that refer to about 15 ICs, but 50 percent of its work comes from NIAID, with a further 33 percent from NCI, reflecting the technical specialization of the SEP.

NIH guidelines specify that at least one panelist (member of the study section) should have a small business background. However, some Scientific Review Administrators (SRAs) appear to be making a greater effort to recruit panelists with entrepreneurship experience. One recent panel, for example, had 13 small business representatives out of 25 panelists.16 That constituted a change for that panel: previous panels in that technical area had been dominated by academics. NIH guidelines also mandate 35 percent female and 25 percent minority panelists on each panel.17

There were numerous comments from agency staff and awardees about the difficulties of getting study sections with an appropriate mix of expertise. Some respondents to the NRC Program Manager Survey also focused on the need for more training for reviewers. Connie Dresser at NCI, for example, noted that “SBIR training needs to be mandatory for all SBIR reviewers in that they need to know what they should not be focusing on or why they should not be comparing SBIR content with R01 content. Also, we need people with marketing training and experience in review. The university types know text book information about marketing, not real-world marketing.”18 Other comments were more trenchant: “One basic flaw, in addition to the fundamental methodological deficiencies, is the reliance upon academic scientists to conduct reviews of SBIR-STTR applications. To put it simply: They are not qualified.”19

One additional point on this subject was made by an NIH staff member, who noted that the selection process would be improved by adding professional consumers of medical products (e.g., users of MRI technology) as well as experts in their development.20

16. NIH staff interview.

17. See Center for Scientific Review, “Overview of Peer Review Process,” for a detailed discussion of the peer review process at NIH. <http://cms.csr.nih.gov/ResourcesforApplicants/PolicyProcedureReview+Guidelines/OverviewofPeerReviewProcess/>.

18. Response to NRC Program Manager Survey, April 2006.

19. Michael-David Kerns, NIA. Response to the NRC Program Manager Survey, April 2006.

20. Amy Swain, NCRR. Response to NRC Program Manager Survey, April 2006.


BOX 5-1

“Competitive pressures have pushed researchers to submit more conservative applications, and we must find ways to encourage greater risk-taking and innovation and to ensure that our study sections are more receptive to innovative applications.”

Dr. Toni Scarpa, Director, CSR.

“Research Funding: Peer Review at NIH” Science, 311(5757):41, January 6, 2006.

5.5.2 Selection Procedures

Each application is assigned to a subset of outside reviewers on the relevant panel—two lead reviewers and one discussant.

These three panelists begin by separating out the bottom half of all applications. These applications are not formally scored, though the applicants do receive a written review explaining why they were not selected.

At the review panel meeting, the three reviewers provide their scores on the remaining half of the applications before there is any discussion. Following a panel discussion, the three reviewers make changes to their scores if they wish. The entire review panel then scores the application.

Scoring is based on five core criteria:

  • Significance of the proposed research.

  • Effectiveness of the proposed approach.

  • Degree of innovation.

  • Principal Investigator’s reputation.

  • Environment and facilities.

There is no set point value assigned to each of these criteria. The scores of individual reviewers are averaged (no effort is made to smooth results, for example by eliminating the highest and lowest scores), and the average is multiplied by 100 to generate the reported score, between 100 (best) and 500 (worst). Fundable scores are usually in the 210-230 range or better, although this varies widely by IC and by funding year. Scores are computed and become final immediately.
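The scoring arithmetic described above can be sketched as follows (a minimal illustration; the function name and the sample reviewer scores are hypothetical, assuming reviewers score from 1.0, best, to 5.0, worst):

```python
def panel_score(reviewer_scores):
    """Convert individual reviewer scores (1.0 = best, 5.0 = worst)
    into the reported NIH priority score (100 = best, 500 = worst).

    A plain mean is used: no trimming of the highest and lowest scores.
    """
    mean = sum(reviewer_scores) / len(reviewer_scores)
    return round(mean * 100)

# A hypothetical panel of five reviewers:
print(panel_score([1.8, 2.1, 2.4, 2.0, 2.2]))  # 210
```

A score of 210 would sit at the better edge of the typical fundable range noted above, though actual paylines vary by IC and year.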

According to an experienced NIH SBIR program manager, Gregory Milman, “most reviewers feel that NIH funds should be used for research and not for development.”21 This reflects the view that reviewers are generally biased toward the kind of basic research funded by more standard NIH programs, such as R01. Currently, there are no data to substantiate this view, but it is held by several senior staff members. For example, the NIDA response to the NRC Program Manager Survey noted that “Grants are currently reviewed mostly from a research perspective (which reflects the characteristics of the review group and NIH priorities) with minimal emphasis on commercialization potential.”22

21

Gregory Milman, “Advice on NIH SBIR and STTR Applications,” April 2005, Slide 10. Accessed at: <http://www.niaid.nih.gov/ncn/sbir/advice/advice.pdf>.

Milman further notes that “Academic reviewers are most comfortable with hypothesis-driven research … the collection and analyses of data necessary for your product. Research is not developing something, building something, or discovering something. You can use grant funds to develop, build, and discover but only as necessary to collect and analyze data.”

Reviewers are instructed not to base their evaluation of applications on the size of the funding requested. They are required to note if the funding requested is appropriate for the work proposed. As a result, reviewers do not consider possible trade-offs between different size applications (i.e., whether one large, high-scoring project is “worth” giving up for two or three similarly meritorious smaller projects). This is increasingly important as the size of applications varies from the standard SBA and NIH guidelines. These trade-offs are supposed to occur within the IC as it makes funding decisions, but interviews with IC staff suggest that the degree to which this happens is highly variable and nontransparent.

Reviewers are also instructed not to consider in their evaluation the number of SBIR awards previously given to the applicant. The application form asks proposing companies to note if they have received more than 15 Phase II awards, but this question is for administrative purposes only. Otherwise, the application forms have no place to list previous awards. While companies with strong track records seek to ensure that these previous successes are reflected in the text of their application, there is no formal mechanism for indicating the existence or outcomes of past awards. Reviewers also do not know the minority or gender status of the PI or of the company.

5.5.3
Post-meeting Procedures

Once the study section has completed its meeting, scores are tallied immediately. These scores are then sent to the funding IC, which receives scores for all other SBIR applications that have been assigned to it.

Budget officers then work through procedures designed to establish the payline—the score at or below which applications will be funded for this funding cycle. These procedures include identifying the overall size of the funding pool for SBIR (2.5 percent of the total budget for extramural research), identifying and tallying all noncompeting SBIR awards (e.g., Phase II, year-two awards) to which the NIH is already committed, setting aside funds needed for RFAs, and finally calculating the amount of available funds. These funds are then allocated to applications by the IC (ICs appear to use different procedures for doing this), primarily according to their scores. The payline is established at the point at which all available funds have been expended.

22

NRC Program Manager Survey, April 2006.

Typically, the payline for each IC's SBIR awards is in the range of 210-230, but it can be considerably higher or lower depending on the specific IC and the specific application cycle.
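
The payline procedure described above amounts to a greedy allocation by score alone, with no size trade-offs, as the text notes. The following sketch uses entirely hypothetical budget and application figures to illustrate the arithmetic:

```python
# Illustrative sketch of the payline calculation described above.
# All figures are hypothetical; lower scores are better (100 = best).

def compute_payline(extramural_budget, noncompeting_commitments,
                    rfa_set_aside, applications):
    """applications: list of (score, requested_amount) pairs.
    Fund in score order until the SBIR pool (2.5 percent of the
    extramural budget, less prior commitments and RFA set-asides)
    is exhausted; return the resulting payline and funded list."""
    pool = extramural_budget * 0.025 - noncompeting_commitments - rfa_set_aside
    funded, payline = [], None
    for score, amount in sorted(applications):
        if amount > pool:
            break
        pool -= amount
        funded.append((score, amount))
        payline = score
    return payline, funded

# Hypothetical cycle: $200M extramural budget, $2.5M in noncompeting
# commitments, $0.5M set aside for RFAs (all amounts in $ millions).
apps = [(185, 1.0), (204, 0.8), (212, 1.2), (226, 0.9), (241, 1.1)]
payline, funded = compute_payline(200.0, 2.5, 0.5, apps)  # payline 204; two applications funded
```

Note that in this sketch a large application just past the remaining pool blocks funding, which mirrors the trade-off question the chapter raises: a smaller, slightly lower-scoring application could otherwise have been funded.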

At this point, IC staff may intervene to make marginal adjustments to the list, perhaps moving one or two nonfunded applications up above the payline and, consequently, defunding marginal applications with better scores. Staff at NIH report that these adjustments are minimal, but there are no available data on this important point.

The funding procedure at NIH does not appear to have changed even though the size of awards has increased substantially. A new element—trade-offs—has been added into the funding equation. Applications asking for relatively large funding amounts can potentially preclude multiple smaller awards of similar merit. It does not appear that any IC staffers are explicitly charged with assessing these possible trade-offs within the SBIR program, nor is there any additional formal layer of review for unusually large SBIR awards. Extra-large R01 applications, by contrast, must receive special approval.

5.5.4
Positive and Negative Elements of NIH Peer Review Process

On the positive side, outside review results in:

  • Strong endorsements within the agency for applications derived from formal peer review;

  • Alignment of the program with other programs at NIH, which operate primarily via peer review;

  • Perceptions of fairness related to outside review in general;

  • Absence of claims that awards are prewired for particular companies; and

  • Access to reviewers with specialized expertise.

On the negative side, difficulties with the outside review process expressed by staff, awardees, and other experts in interviews appear to have been exacerbated by recent efforts to infuse commercialization assessment. Problems include:

  • Deteriorating quality of reviews as workload increases, and difficulties in recruiting peer reviewers with appropriate expertise; NIH now handles 80,000 applications annually, and recruits more than 15,000 peer reviewers.23

  • Significant perceptions that scoring has a large random component (a view presented by many case study interviewees, and also by a number of NIH SBIR program officers).

  • Conflict of interest problems related to commercialization (an issue raised forcefully by several interviewees and by other stakeholders knowledgeable about the program, but not accepted in the course of NIH agency interviews).

  • Substantial delays in processing (accepted by NIH as a problem).

  • Questions about the trade-offs between different size awards (see above). These questions are likely to grow as the number and diversity of extra-sized awards continues to expand (see Chapter 3 for details).

Overall, outside review appears to add fairness and legitimacy but also complexity and delay. Companies interviewed and NIH program officers both pointed out that in many ways, the NIH process had not been adjusted to address the needs of companies trying to work fast in an increasingly competitive environment. Delays that might be acceptable at an academic institution focused on basic research with multiyear timeframes may have a more harmful effect on smaller businesses working within a much shorter development cycle. These issues are to a considerable extent understood at NIH, and the agency has started to initiate changes to address these problems. (See Section 5.5.8.)

5.5.5
Confidentiality and IP Issues

Applications are, in theory, strongly protected. They are not made public and reviewers sign confidentiality agreements before seeing the applications. Only the summaries of awards are published.

Nonetheless, confidentiality remains an important issue at NIH. Several case study interviewees (e.g., those at Neurocrine, Advanced Brain Research) were concerned that competitors are able to act as reviewers—in some cases despite written appeals for their removal to the Scientific Review Administrator (SRA), the NIH health scientist administrator in charge of review and advisory groups.

These concerns were reflected in some of the responses to the NRC Program Manager Survey (although others specifically saw no problems with conflict of interest). Connie Dresser of NCI, for example, noted that “Conflict of interest is a major concern in my review sessions. While the SRA is very good about reminding reviewers to excuse themselves from the room, I have had reports from grant applicants about reviewers who presented similar information or projects to theirs at conferences.”24

23

Dr. Toni Scarpa, “Research Funding: Peer Review at NIH,” Science, 311(5757):41, January 6, 2006.

There appears—from interviews—to be some evidence that peer review panels are requiring more detailed data from applicants, especially at Phase II, and that these demands present further difficulties: Neurocrine noted that this raised problems because the data requested were confidential, commercially critical, and not yet legally protected because patenting every advance at the earliest stage was simply not economically feasible. This left an “IP gap” between the initial identification of a promising compound or molecule, and the date at which testing results were sufficiently promising to justify the time and expense of patenting. Conversely, CSR officials noted that review panels had every right to require sufficient data on which to make a reasoned judgment about the viability of a particular technical approach, and that with increasing numbers of applications, more attention was focused on the technical details of each proposal.

These concerns are reflected in the advice from Gregory Milman, SBIR Program Manager for NIAID, who warns applicants in advance: “I strongly recommend that you protect your intellectual property before you describe it in a grant application. I would not depend upon confidentiality agreements signed by reviewers or the fact that grant applications are not public documents.”25

5.5.6
Metrics for Assessing Selection Procedures

Assessment of the SBIR selection process is complicated because the program serves many objectives and hence must meet multiple distinct criteria. Discussions with agency staff, award winners, and other stakeholders (such as bio-oriented venture firms and congressional staff) suggest that the following criteria best reflect a “successful” selection process:26

  • Fair. Award programs must be fair and be seen as fair; the selection process is a key component in establishing fairness.

  • Open. The program should be open and accessible to new applicants.

  • Efficient. The selection process must be efficient, using the time of applicants, reviewers, and agency staff efficiently.

  • Effective. The selection process must select the applications that show the most promise for meeting congressionally mandated goals, as interpreted by NIH.

  • Mission-oriented. The selection process must help the program to meet the agency mission.

24

Response to NRC Program Manager Survey, April 2006.

25

Gregory Milman, “Advice on NIH SBIR and STTR Applications,” op. cit., Slide 16.

26

While these are the criteria against which all SBIR agencies develop their selection procedures, the criteria are not explicitly recognized or articulated in any agency, and the agencies balance them quite differently.


The last two criteria are best considered in light of outcomes (see Chapter 4). The remaining components of the selection process are discussed below.

5.5.6.1
Fairness

Discussions with case study interviewees and agency staff indicate that the perceived fairness of selection procedures is a function of several factors. These may include:

  • Transparency—is the process well known and understood?

  • Implementation—are procedures implemented consistently?

  • Checks and balances—are outcomes effectively reviewed by staff with the knowledge and the authority to correct mistakes?

  • Conflicts of interest—are there procedures in place and effectively implemented to ensure that such conflicts are recognized and eliminated?

  • Appeals and resubmissions—are there effective appeals and/or resubmission procedures in place?

  • Debriefings—is there a debriefing procedure that increases the perception of fairness among unsuccessful applicants?

Both agency staff and applicants noted that the considerable degree of apparent randomness in the process to some extent undercut perceptions of fairness. Karen Peterson, of NIAAA, noted that “This is the weakest point of all in the program. While scores have been improving for applicants to our institute, the quality of reviews especially in the behavioral sciences is widely variable.”27

Transparency. At NIH, the selection process is almost the same process that is used for all other NIH awards. The process is explained on the Web, and in written materials sent to applicants. However, NIH staff report that they spend considerably more effort supporting SBIR applicants and awardees than they do applicants from universities, where the NIH application process is often supported by more experienced staff.

Implementation. The NIH review procedures are formalized, and are implemented under the supervision of professional and independent review staff at CSR; procedures appear to be followed consistently and predictably.

Checks and balances. Scores are highly influenced by the three core reviewers of each proposal and, among them, by the lead reviewer. Once the study section has scored and reviewed the applications, IC staff may decide to fund or not fund “across the payline,”28 essentially reversing decisions by the study section. Interviews with NIH staff suggest that this is rare, though NIH has provided no data on this subject. Decisions by IC staff are reviewed and usually approved by the IC’s advisory council, which usually meets three times annually.

27

Response to NRC Program Manager Survey, April 2006.

28

See discussion of Payline below.

Appeals. The appeals process is largely moribund. NIH staff and interviewees agreed that the resubmission process was much faster, simpler, and likely to be more effective. NIH does provide a written response to every application, with detailed information about why applications were not accepted. Applicants indicated in interviews that this debriefing was critical to the resubmission of applications—although some noted that changes in the composition of review sections meant that fixing criticisms was often not enough to ensure selection next time around.

Conflicts of interest. NIH has clear conflict of interest regulations in place for reviewers, and also has procedures in place that would allow applicants to seek to exclude an individual panel member from reviewing their proposal.

However, a number of interviewees among the companies and other stakeholders such as VC firms noted that these regulations largely operate in the context of an honor system: CSR undertakes no systematic or random checks on reviewers. Their experiences had been mixed, and several noted that as NIH seeks to introduce more commercial expertise into the review process for SBIR awards, the potential for conflict of interest problems may increase (although others noted that academics may also have conflicts of interest). The extent to which this works in practice is not clear, and it may depend on individual CSR officers. Interviewed awardees have repeatedly mentioned potential conflicts of interest as a problem with the SBIR review system.

Resubmissions are the standard mechanism for appeal at NIH, and about 33 percent of all awards are eventually made after at least one resubmission. This ability to resubmit enhances perceptions of fairness.

Finally, respondents to the NRC Program Manager Survey from NHLBI noted that there were inequities between the larger and smaller ICs with regard to paylines: “it seems unfair for smaller Institutes to have to forego paying outstanding applications when the larger Institutes fund at much higher (i.e., lower quality) scores.”29

5.5.6.2
Openness

Some useful metrics for assessing the degree of openness relate to new companies entering the program; others relate to the concentration of awards going to certain companies within the program.

5.5.6.2.1
New Winners

Figure 5-2 shows the annual percentage of previous nonwinners at NIH (who may, however, have received SBIR awards from other agencies) applying for and winning Phase I awards from 2000-2005.

29

NHLBI composite responses to the NRC Program Manager Survey, April 2006.

FIGURE 5-2 Percentage of all Phase I applications and awards at NIH from previous nonwinners at NIH, 2000-2005.

SOURCE: National Institutes of Health.

The data show that the Phase I share of previous nonwinners has remained above 35 percent, although it has declined since 2000. The decline is possibly due to the increasing number of previous winners in the pool of potential applicants.

The fact that one-third of all applications and awards involve companies who have not previously won an NIH SBIR grant strongly suggests that the program is reasonably open. These levels are comparable to those at other agencies.30

5.5.6.2.2
Award Concentration and Multiple-Award Winners

Another view of openness might consider the extent to which awards are concentrated among the top award winners. Table 5-2 shows the distribution of Phase II awards to the “top 20” winners at NIH—the 20 companies receiving the most Phase II awards at NIH.

30

See NRC Reports on the SBIR programs at DoD, NSF, DoE, and NASA: National Research Council, An Assessment of the Small Business Innovation Research Program at the Department of Defense, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2009; National Research Council, An Assessment of the Small Business Innovation Research Program at the National Science Foundation, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2008; National Research Council, An Assessment of the Small Business Innovation Research Program at the Department of Energy, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2008; National Research Council, An Assessment of the Small Business Innovation Research Program at the National Aeronautics and Space Administration, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2009.

TABLE 5-2 Top 20 Companies—Phase II Awards at NIH, 1992-2005

Name of Organization                       Number of Phase II Awards
RADIATION MONITORING DEVICES, INC.                 45
NEW ENGLAND RESEARCH INSTITUTES, INC.              38
OREGON CENTER FOR APPLIED SCIENCE, INC.            37
INFLEXXION, INC.                                   37
SURMODICS, INC.                                    28
INSIGHTFUL CORPORATION                             22
LYNNTECH, INC.                                     21
CREARE, INC.                                       21
INOTEK PHARMACEUTICALS CORPORATION                 17
BIOTEK, INC.                                       17
CLEVELAND MEDICAL DEVICES, INC.                    16
ABIOMED, INC.                                      16
OSI PHARMACEUTICALS, INC.                          15
PHYSICAL SCIENCES, INC.                            15
GINER, INC.                                        15
PANORAMA RESEARCH, INC.                            14
SOCIOMETRICS CORPORATION                           14
WESTERN RESEARCH COMPANY, INC.                     14
CANDELA CORPORATION                                13
PERSONAL IMPROVEMENT COMPUTER SYSTEMS              13
Total                                             428
Percent of all Phase II awards                   10.4

SOURCE: National Institutes of Health.

The data set above shows that Phase II awards at NIH are not highly concentrated. The most frequent recipient of Phase II awards received 45 over 14 years—just over three per year. In all, these top 20 winners account for 428 Phase II awards, 10.4 percent of the total awarded.
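
These concentration figures can be checked directly from Table 5-2; the sketch below recomputes them, and the implied program-wide total is a back-of-envelope inference from the stated 10.4 percent share rather than a figure reported by NIH.

```python
# Recomputing the concentration figures cited above from Table 5-2.
top20 = [45, 38, 37, 37, 28, 22, 21, 21, 17, 17,
         16, 16, 15, 15, 15, 14, 14, 14, 13, 13]

total_top20 = sum(top20)                    # 428 awards to the top 20 companies
per_year_top = max(top20) / 14              # ~3.2 awards/year for the top recipient (45 over 1992-2005)
implied_total = round(total_top20 / 0.104)  # ~4,115 total Phase II awards implied by the 10.4% share
```

Even the most frequent winner averages only about three Phase II awards per year, supporting the text's conclusion that awards are not highly concentrated.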

5.5.6.3
Efficiency

Efficiency can be defined in many ways. Box 5-2 includes several possible external and internal efficiency goals towards which the NIH SBIR program should strive.

5.5.6.3.1
Efficiency for Applicants

There are a number of positive components of the current system from the perspective of applicants. These include:

  • The possibility of resubmission;

  • Broad topic design, which ensures that highly promising research applications are not arbitrarily excluded;

  • Widespread support for the notion of peer review; and

  • The existence of multiple annual application windows, which effectively shorten the time from idea to funding.

BOX 5-2

Possible Efficiency Indicators for SBIR Selection Process

External: Efficiency for the Applicant

  • Shorten time from application to award.

  • Reduce effort involved in application.

  • Reduce red tape involved in applying.

  • Increase useful output from the application process (apart from the award itself).

  • Enable reuse of applications.

Internal: Efficiency for the Agency

  • Move the grant money quickly to the right recipients.

  • Minimize use of staff resources.

  • Maximize agency staff buy-in.

  • Reduce appeals and bad feelings.

At the same time, interviews with NIH staff and SBIR awardees indicate considerable areas for possible improvement; these include:

  • Random outcomes. Many interviewees and NIH staff asserted that there is a substantial element of randomness in the selection process. While this clearly impacts fairness, it also impacts efficiency: Firms contribute time and resources in the form of applications, without a belief that these will generate a return commensurate with their quality.

  • Reliance on resubmissions. While the availability of resubmissions does promote fairness, its extensive use within the NIH SBIR application process is inefficient: It imposes significant additional costs and substantial delays on applicants, the latter almost always amounting to at least 8 months between applications. From a small business’s perspective, this delay could be disastrous. A second resubmission—which is not uncommon—results in a further 8-month delay.

  • Review procedures at NIH remain largely nonelectronic. While NIH has now moved to all-electronic submission of applications, the study section process remains based on in-person meetings and written documentation, and there appears to be room for considerable improvement and experimentation, as noted by Dr. Scarpa, Director of CSR, in a recent article in Science.31

  • Delays. The delays imposed by the current process, again as accepted by Dr. Scarpa, are substantial and could clearly be reduced. Eighty percent of NRC Phase II Survey respondents reported a gap between Phase I and Phase II. The median length of the gap was 13 months, and 11 percent of respondents reported a gap of 2 years or more. NIH is now beginning to experiment with a number of pilot changes to the selection process, focused on this issue.

5.5.6.3.2
Efficiency for NIH

Program efficiency can be measured in a number of ways, and—based on interviews with staff and awardees—these provide a mixed picture for NIH:

  • Moving the money. The process is 100 percent successful in moving SBIR funds from NIH to awardees.

  • Low overhead. Program costs appear to be low; NIH has simply imposed additional work on existing staff as grant applications have increased.

  • Return on Investment (ROI). NIH has only a limited knowledge of the ROI from its SBIR investment, partly because efforts to minimize overhead have led to insufficient investment in monitoring and evaluation.

  • Staff buy-in. The SBIR process is not designed to encourage staff buy-in (see staffing issues section). Nevertheless, some SBIR Program Administrators are enthusiastic and effective.

  • Minimizing appeals. Resubmission effectively replaces appeals within the NIH framework. Appeals are unusual.

Overall, it is fair to say that NIH has little idea whether the SBIR program is efficient for the institution, or whether efficiency varies by IC. SBIR has generated more data on outcomes than other NIH research funding programs, but not enough to make those kinds of determinations. It is, however, true that some NIH staff strongly believe that SBIR programs place a significant additional burden on NIH administrators, compared to other programs, largely because the applicants are working in an environment with which they are not familiar: “Grants management specialists also report hugely disproportionate (vis-à-vis other principal investigators, organizations, & research-grant mechanisms) demands from SBIR-STTR potential & actual applicants (& applicant organizations) (vis-à-vis other research-grant program applicants). The vast majority of problems, including violations worthy of formal investigation, encountered by our grants management specialists within NIA’s entire research portfolio, derive from SBIR-STTR research grants & the small-business organizations. The grants management specialists have indicated that they spend anywhere from 40-60 percent more time and effort working up and administrating SBIR-STTR grant projects. The frequency and persistence of problems with SBIR-STTR projects are such that within NIA’s GCMO (contracts office), there is a trenchant lack of enthusiasm for the SBIR-STTR programs.”32

31

Dr. Toni Scarpa, “Research Funding: Peer Review at NIH,” op. cit.

5.5.7
Funding Cycles and Timelines: The NIH Gap-reduction Model

Many SBIR awardees rely heavily on SBIR funding to pay for their operations. Gaps in funding can be deadly to small businesses without other stable sources of revenues.

NIH has recognized this issue, and several characteristics of the NIH SBIR program fall within what the Summary Report describes as the “gap reduction model” for managing funding cycles and timelines.33 This model is distinguished by its emphasis on supporting applicants using a range of features designed to reduce gaps in funding and decrease the time from initial conception to final product deliverable. Elements in use at NIH include:

Multiple annual submission dates. NIH provides three annual submission dates for awards, in April, August, and December. This is a substantial improvement on the one annual date in effect at some other agencies because it potentially reduces time lags related to these deadlines by 8 months. Dr. Scarpa has indicated that CSR will experiment in 2007 with open submission—submissions throughout the year with no set deadline.

Topic flexibility. Topics are discussed extensively above, but they have important implications for the gap reduction model. Narrow, topic-bounded application processes can harm small businesses because they have to wait for an appropriate topic to show up in a solicitation before they can apply for an SBIR grant. NIH is in this respect highly flexible, with its investigator-initiated research approach, which largely precludes “topics-based” delays. This should therefore be seen as an important component of the overall gap-reduction model at NIH.

Phase I - Phase II gap funding. Two mechanisms have been developed at NIH to bridge the funding gap between the conclusion of Phase I and the start of Phase II funding: “work-at-risk” and the NIH Fast Track.

  • “Work at risk.” Companies that anticipate winning an NIH Phase II award can work for up to three months at their own risk, and the cost of that work will be covered if the Phase II award eventually comes through. If it does not, the company must swallow the cost.

  • Fast Track. Fast Track efforts are designed primarily to reduce the amount of time between the end of Phase I and the start of Phase II. At NIH (unlike DoD34), applicants must apply for Fast Track status during the Phase I application, as it is in effect an application for a joint Phase I—Phase II award. The advantage of Fast Track is that acceptance should—at least in theory—mean that the funding gap is dramatically reduced. (See Section 5.6 for more details.)

32

Michael-David Kerns, NIA, Response to the NRC Program Manager Survey, April 2006.

33

Described in more detail in National Research Council, An Assessment of the Small Business Innovation Research Program, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2008.

Phase II plus programs. Phase II plus programs are designed to help bridge the gap between the end of Phase II and commercialization (sometimes known as “Phase III”).

NIH has implemented a new initiative targeted at helping to fund companies through the first stages of the clinical trials process, with funding for up to three years, at up to $1 million per year.

5.5.8
NIH Selection Initiatives

NIH is well aware of complaints about cycle times, and about the burden placed on companies and other grant applicants. As Dr. Toni Scarpa, Director of CSR, notes, “Our system can be particularly frustrating for those who may need to make only minor revisions, because results from our reviews typically come too late for them to reapply for the next review round.”35

CSR is now working to reduce cycle time. In particular,

  • As of October 2005, NIH now posts summary statements of most reviews within 1 month after the study section meeting, instead of 2-3 months after the meeting. This gives important guidance to applicants.

  • In February 2006, NIH began a pilot study to cut 1½ months from the review process. Forty CSR study sections will participate in this pilot, which will speed the reviews of R01 applications submitted by new investigators. Resubmission deadlines will be extended to allow these new investigators to resubmit immediately if only minor revisions are necessary. Specifically, CSR will: (i) schedule study section meetings up to a month earlier; (ii) provide scientists their study section scores, critiques, and panel discussion summaries within a week after the section meeting; (iii) shave days from the internal steps involved in assigning applications to study sections; and (iv) extend resubmission deadlines by 3 weeks.

Dr. Scarpa notes that “we are experimenting with new electronic technologies that permit reviewers to have discussions with greater convenience and to spend less of their precious time in traveling. For example, asynchronous Internet-assisted discussions—secure chat rooms—allow reviewers to “meet” and to comment independently of time as well as place.”36

34

The DoD Fast Track program is completely different from the NIH Fast Track effort; the only operational similarity is the name.

35

Dr. Toni Scarpa, “Research Funding: Peer Review at NIH,” op. cit.

TABLE 5-3 Fast Track Applications and Success Rates, 1997-2004

Fiscal Year   Number of Applications   Number of Awards   Fast Track Success Rate (%)   Phase I Success Rate (%)
1997                  41                      13                    31.7                        26.6
1998                  63                      11                    17.5                        26.8
1999                 129                      45                    34.9                        26.5
2000                 120                      34                    28.3                        25.1
2001                 129                      38                    29.5                        28.6
2002                 183                      50                    27.3                        25.8
2003                 273                      61                    22.3                        15.1
2004                 329                      58                    17.6                        17.9

SOURCE: National Institutes of Health.

5.6
FAST TRACK AT NIH

Fast Track at NIH is a completely different program from Fast Track at DoD. At NIH, Fast Track offers the promise of accelerated flow of funds by eliminating the reselection process at Phase II. Instead, companies with approved Fast Track awards simply provide an approved final report for Phase I, and Phase II begins automatically.

Fast Track has attracted a growing number of companies in recent years (as shown in Table 5-3). To be eligible for Fast Track, an applicant must submit complete Phase I and Phase II applications at the same time, along with:

  • Clear, measurable milestones for Phase I, used to judge whether Phase I objectives have been met;

  • A full Phase II Product Development Plan; and

  • Evidence of commitment from a commercial partner.

In theory, Fast Track should reduce funding gaps and application time by up to seven months, as the diagram in Figure 5-3 shows.

Milman notes, however, that in many cases Fast Track is not an appropriate route, particularly where the specific milestones are unclear. For example, he contrasts a drug company with a drug candidate already selected, now planning small mammal trials in Phase I and primate trials in Phase II, with a drug company whose candidate drug has not yet been identified and which will rely on Phase I results in designing its Phase II research plan. The latter case is, according to Milman, better suited to the standard Phase I-Phase II progression.

36. Ibid.

FIGURE 5-3 Fast track and normal timelines at NIH.

SOURCE: Gregory Milman, NIAID.

Karen Peterson of NIAAA also notes that “Fast Track is not very useful in its current incarnation.” She goes on to say that “Most reviewers are very reluctant to give these applications good scores because of the time and money commitment they feel they are making.”37 This view is also reflected in comments from NIDA: “For some reason, reviewers do not like fast track and almost always give them worse scores than they would normally receive. We now recommend, even to the best of companies, not to submit using a fast track because it definitely reduces their chances of funding.”38

Other reasons for avoiding Fast Track include:

  • Difficulties in attracting a commercial partner on appropriate terms, which is likely if the product is early in the development cycle.

  • The proposal work required, which Milman estimates at four times the work of a standard Phase I.

  • The existence of alternative paths across the funding gap which may be less risky and resource-intensive.

  • Reluctance, according to other NIH staff, among reviewers to accept Fast Track applications. Study sections can recommend that Fast Track applications be approved for Phase I only, returning the application to the standard format.

37. Response to NRC Program Manager Survey, April 2006.

38. Ibid.

These points and the data above suggest several observations:

  • Fast Track is rapidly growing in importance, expanding from 41 applications and 13 awards in 1997 to more than 300 applications and almost 60 awards in 2004, or from 1.4 percent to 5.7 percent of all applications during that period.

  • Success rates for Fast Track are on average close to those for Phase I (26.1 percent for Fast Track, 24.0 percent for Phase I).

  • Fast Track appears to be working well enough that companies are applying in growing numbers.

  • Fast Track is still an uncommon choice for applicants—95 percent of awardees use the standard progression. Milman’s analysis suggests that relatively few additional companies will qualify for this approach in the future.

  • Projects for which the experimental design is known and accepted are good candidates for Fast Track.

  • NIH has undertaken no outcomes analysis to assess whether Fast Track awards generate more positive outcomes than standard awards.
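As a sanity check, the success-rate figures quoted above can be recomputed from Table 5-3. The sketch below (values transcribed from the table; nothing beyond the published numbers is assumed) confirms both that each year's Fast Track rate equals awards divided by applications and that the 26.1 percent and 24.0 percent averages follow:

```python
# Recompute the Fast Track statistics cited in the text from Table 5-3.
data = {  # fiscal year: (applications, awards, FT rate %, Phase I rate %)
    1997: (41, 13, 31.7, 26.6),
    1998: (63, 11, 17.5, 26.8),
    1999: (129, 45, 34.9, 26.5),
    2000: (120, 34, 28.3, 25.1),
    2001: (129, 38, 29.5, 28.6),
    2002: (183, 50, 27.3, 25.8),
    2003: (273, 61, 22.3, 15.1),
    2004: (329, 58, 17.6, 17.9),
}

# Each year's stated Fast Track success rate matches awards/applications
# to within rounding.
for year, (apps, awards, ft_rate, _) in data.items():
    assert abs(100 * awards / apps - ft_rate) < 0.1, year

# Simple (unweighted) averages across the eight fiscal years.
avg_ft = sum(row[2] for row in data.values()) / len(data)
avg_p1 = sum(row[3] for row in data.values()) / len(data)
print(f"Fast Track average: {avg_ft:.2f}%")  # quoted in the text as 26.1 percent
print(f"Phase I average:    {avg_p1:.2f}%")  # quoted in the text as 24.0 percent
```

Note that these are unweighted averages across years; weighting by application volume would pull the Fast Track average lower, since success rates fell as application counts grew.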

5.7
FUNDING: AWARD SIZE AND BEYOND

NIH’s SBIR program gives out awards that differ from those of other agencies in three ways:

  • In some cases, NIH has made much larger awards than are given out by other agencies (see Chapter 3).

  • NIH has begun to offer additional years of support, including, in some cases, a second year of Phase I support, in contrast to the 6-month limit imposed by most other agencies.

  • NIH provides administrative supplements that boost Phase I awards when additional resources are needed to complete the proposed research.

5.7.1
Larger Awards at NIH

Figure 5-4 shows that, starting in 1999, NIH began to provide an increasing number of Phase I awards of more than $250,000. There have been similar increases in the number of awards between $100,000 and $250,000. NIH has also, in a small but growing number of cases, provided Phase I funding of more than $1 million.

An extensive discussion of larger awards can be found in Chapter 3. Here, we simply note that the trend toward larger awards has continued, and that awards beyond the size of the SBA guidelines are rare except at NIH.

FIGURE 5-4 Extra-large Phase I awards at NIH, 1992-2005.

SOURCE: National Institutes of Health.

It is also worth noting that views among the Program Administrators responding to the NRC Program Manager Survey varied widely on this issue. Many recommended increased funding and extended time for awards; others indicated that they would prefer to see the limits more strictly enforced. However, this appears to depend on the kind of research being pursued. For example, Melissa Raccioppo of NIDA noted “Since our SBIR/STTR grants tend to involve a clinical trial of some sort, the limits on budget seem too restrictive for our investigators’ purposes.”39 These comments applied to the new Competing Continuation Awards as well; Program Administrators with few likely recipients of these awards were concerned that they might take a disproportionate amount of SBIR program funding.

Finally, one of the respondents to the NRC survey noted that “these larger awards further point to a dire need for a solid outcomes tracking and evaluation capability.”

5.7.2
Supplementary Funding

NIH officials have observed that the availability of supplementary awards adds further flexibility in helping companies to handle the unexpected costs that can easily arise in high-risk research.

In principle, program officers can add limited additional funds to an award

39. Ibid.

FIGURE 5-5 Supplementary Phase I awards at NIH, 1993-2003.

SOURCE: National Institutes of Health.

in order to help a recipient pay for unexpected costs. While practices vary at individual ICs, it appears that up to 25 percent (or up to $50,000) of current annual funding for an individual grant can be awarded by the program manager without further IC or NIH review (budget permitting). More substantial supplements must be more extensively reviewed, but are not unknown.

All supplemental requests require documentation: full applications for competing supplements, and at least a budget page and a letter of justification for administrative supplements.

For Phase I, supplements remain relatively rare, averaging fewer than 20 annually in recent years. They are also not especially large; in no fiscal year have NIH Phase I supplements totaled more than $1 million. Still, the data indicate that the size of Phase I supplementary awards is growing at NIH (see Figure 5-5).

Supplementary awards are also available for Phase II, where they are more significant. As shown in Figure 5-6, the number of Phase II supplement awards has hovered around 30. Thus about 10 percent of all Phase II awards receive supplementary funding.

5.7.3
Duration of Awards

Just as the size of awards has grown, NIH has extended the period of support as well. In FY2002 and FY2003, more than 5 percent of all Phase I awards received a second year of support, with a median value of about $200,000.

Year one and year two awards cannot be easily aggregated into a single “Phase I award” at NIH owing to characteristics of the NIH awards database. However, the rapidly growing number of year two awards—which in FY2003 were equal to 6.3 percent of all 2002 Phase I, year one awards—as well as the jump in median size in 2000, suggests that this mechanism is of growing importance at NIH.

FIGURE 5-6 Supplementary Phase II, Year One awards at NIH, 1992-2003.

SOURCE: National Institutes of Health.

NIH staff and recipients alike agree that 6 months is too short to complete Phase I work in many biomedical disciplines. NIH usually approves requests for “no-cost” extensions to one year or even longer. No-cost extensions simply extend the term of the award without providing additional funding. No other agency offers such a liberal extension program.

FIGURE 5-7 Phase I, Year Two awards at NIH, 1992-2003.

SOURCE: National Institutes of Health.

FIGURE 5-8 Third year of support for Phase II awards, 1992-2003.

SOURCE: National Institutes of Health.

For Phase II, NIH also offers extended funding beyond the standard 24 months of support. Figure 5-8 contains estimates of Phase II, year three support calculated on the basis of NIH data (see Chapter 3 for detailed calculations).

The steadily rising numbers of Phase II, year three grants in recent years suggest that third year support is becoming an important component of NIH SBIR activity. In FY2002 and FY2003, more than 10 percent of awards received a third year of support.

In a few cases, NIH goes further. Ten grantees have received a fifth overall year of SBIR support, and a few for even longer periods.

5.7.4
Award Size: Conclusions

The data shown in Chapter 3 indicate that the size of awards at NIH is rising, that additional administrative support is of increasing importance, and that the duration of awards (and support) is expanding as well.

One important question might be why NIH is making these large awards. A second question might concern the growing number of extended awards. Both are discussed in Chapter 3, but conclusive answers are not available partly because neither question has been directly addressed by NIH, at least in materials that are publicly available.

One final point should be noted, drawn from conversations with agency staff and from responses to the NRC Program Manager Survey: NIH has repeatedly sought to convert Phase I STTRs to Phase II SBIRs and vice versa, as the circumstances related to the research change. SBA has denied these appeals, for reasons that are not clear to NIH staff. Unless SBA can find convincing justifications for this position, it would appear that a change of policy here could be warranted.

5.8
COMMERCIALIZATION SUPPORT

5.8.1
Background

Since its inception in 1982, the SBIR program has aimed to increase the “commercialization of innovations derived from Federal research and development” (Public Law 97-219). After reauthorization in 1992, agencies were required to consider commercialization potential as part of their review process. The reauthorization also included a provision for technical assistance services to help grantees “develop and commercialize new commercial products and processes.” SBA then issued a rule stating that assistance efforts focused on bringing products to market could be supported by up to $4,000 per Phase I award and up to $4,000 per year for each Phase II award. Subsequent interpretations of the rule by SBA supported aggregation of these funds into an SBIR technical assistance program.

5.8.2
Overview

NIH has recognized that many SBIR Phase II winners struggle to survive the period between the end of SBIR Phase II and market entry, and in June 2002, the Office of Extramural Programs at NIH (OEP) began to provide commercialization assistance to SBIR winners.

This assistance is now rendered through the Technical Assistance Program (TAP). Thus far, OEP has initiated three pilot assistance programs and two follow-on, full-scale assistance programs under the TAP:

  • The Pilot NCI Commercialization Assistance Program (PCAP) supported 47 SBIR Phase II winners (related to NCI only) and concluded in March 2003.

  • The Pilot Niche Assessment Program (PNAP) was made available to a maximum of 100 SBIR Phase I winners on a first-come, first-served basis. The pilot program had finished assisting 45 projects as of February 16, 2005, and ended in August 2005.

  • Pilot Manufacturing Assistance Program. In FY2007, NIH plans to pilot an additional assistance program targeting the many manufacturing issues small companies face when trying to commercialize their SBIR-funded products. In partnership with the NIST Manufacturing Extension Partnership (MEP) program, the pilot is aimed at providing transitional support as Phase II awardees move to a manufacturing stage. The goal is to help companies make better decisions when developing their operational transition strategies (method of scale-up, cost estimation, quality control, prototyping, design for manufacturability, facility design, process development/improvement, vendor identification and selection, plant layout, etc.). NIH has engaged Dawnbreaker of Rochester, NY, to operate this program. Twenty-five NIH SBIR Phase II awardees are expected to participate.

  • The Commercialization Assistance Program (CAP) was launched in July 2004 as the first full-scale, ongoing commercialization assistance program. Two cohorts of 114 firms each have completed the program as of January 2007.

5.8.3
The Commercialization Assistance Program (CAP)

The perceived success of PCAP prompted OEP to launch the Commercialization Assistance Program (CAP) as its first full-fledged, ongoing TAP “menu” item. It is open to companies funded by all NIH ICs.

Larta Institute (Larta) of Los Angeles, CA,40 was selected by a competitive process to be the contractor for this program.41 The Larta contract began in July 2004, and will run for five years. During the first three years, three cohorts of SBIR Phase II winners will receive assistance. Years four and five will cover follow-up work, as each cohort is tracked for 18 months after completion of the assistance effort.


CAP Program details. The assistance process for each group typically includes:

  • Provision of consultant time for business planning and development.

  • Business presentation training.

  • Development of presentation materials.

  • Participation in a public investment event organized by Larta.

  • Eighteen months for follow-up and tracking.

Participants. Based on interviews with NIH staff and Larta, the typical CAP participant is:

  • A small technology-oriented business;

  • Founded by an engineer or physician turned entrepreneur;

  • In operation for 5 to 10 years; and

40. Larta Web site, accessed at: <http://www.larta.org>.

41. Larta was founded by Rohit Shukla, who remains its Chief Executive Officer. It assists technology-oriented companies by bringing together management, technologies, and capital to accelerate the transition of technologies to the marketplace.

FIGURE 5-9 CAP participants, by industry sector.

SOURCE: National Institutes of Health.

  • Substantially reliant on government grants because of limited outside funding.

These companies have typically not yet generated meaningful sales, but appear to have significant commercial upside.

As of January 1, 2004, NIH had 634 active SBIR Phase II projects from 455 companies across 23 Institutes and Centers. All of these companies were invited to participate in the CAP program42 and a total of 114 companies participated. Approximately 75 chose to participate in a series of investment workshops offered in Orange County, CA; San Francisco, CA; Washington, DC; Chicago, IL; and Boston, MA, which allowed participants to present their respective business opportunities to a group of investors, and to receive feedback on the effectiveness of their presentations.

Participation by industry. The two largest industry sectors in CAP are Medical Devices (37, or 29 percent of total participants) and Biotech (29, or 23 percent of total participants). The Northeast region accounts for 35 percent of total participants and the West 32 percent.43

42. NIH SBIR Technical Assistance Program, Office of Extramural Programs, Enrollment Criteria.

Areas of focused assistance. Three primary “Tracks,” areas of focused assistance, were added by NIH after the pilot based on participant feedback. The three tracks are:

  • The Regulatory Track, for participants in need of a strategy for FDA approval.

  • The Licensing Track, for participants in need of documentation for establishing relationships with potential licensees.

  • The Strategic Alliance Track, for participants in need of documentation for establishing joint ventures, collaborative agreements, or other similar partnerships.

Each Track is further adapted to the special needs of two industry sectors: Biomedical Devices (includes all medical devices and device-based products) and Biotechnology (includes all drugs and biologic-based products). The distribution of the current CAP participants by “Track” is represented in Figure 5-10.44

5.8.4
Niche Assessment Program (NAP) (for Phase I Winners)

Sometimes scientific researchers do not have the entrepreneurial skills to assess other applications or niches for their SBIR-developed technology. As a result, they may underestimate its true market value. The program assesses market opportunities and the needs and concerns of end-users, and helps discover new markets for possible entry.

The NAP aims to assist SBIR Phase I winners in identifying and evaluating various market opportunities for commercialization (e.g., licensing, sales, partnering). This effort is operated by Foresight Science and Technology, Inc. (Foresight) of New Bedford, MA.45 It has three phases:

  1. Foresight gathers relevant information on the technology from the participant and begins to identify potential commercial applications.

  2. Foresight and the participant determine the technology application that warrants detailed analysis. This application is analyzed by Foresight to determine end-user needs, current and emerging competing technologies, market dynamics, socioeconomic trends, market drivers, market size, the potential technology’s possible market share, the potential technology’s current competitive advantages, and strategies for improving the technology’s competitiveness.

43. NIH CAP Participants by State, March 1, 2005.

44. Update, SBIR Technical Assistance Program, February 16, 2005.

45. Foresight is a scientific consulting firm offering market research, technology assessment, and valuation and licensing services to the medical, pharmaceutical, and biotechnology industries. It focuses on helping move technology from the laboratory to the marketplace and assesses approximately 300 new technologies annually.

FIGURE 5-10 CAP distribution by track.

SOURCE: National Institutes of Health.

  3. Foresight develops a market entry strategy, including how to market the technology to end-users and attract Phase III partners. The strategy also projects revenues from the sale or licensing of the technology, and identifies possible “launch” customers, testing centers, suppliers, manufacturers, and other parties potentially interested in the technology (e.g., beta testers). Foresight may also make introductions to potential partners.

Each step concludes with an electronic report plus follow-on discussions.

5.8.5
Outcomes and Metrics

5.8.5.1
Pilot NCI Commercialization Assistance Program

This pilot program culminated in March 2003, when 32 participants presented at an investor/partner Forum; evaluations were completed at 6, 12, and 18 months thereafter.


Participants were not obligated to provide feedback. However, 13 (40 percent) of the 32 companies reported that they had received additional private sector investment and/or sales related to the technology opportunity they presented at the Forum. Cumulative private sector funding and sales received within 18 months following completion of this program totaled almost $38 million.46 Unsurprisingly, these results were highly skewed: A majority of these funds were received by five of the companies—Computer Science Innovations, Focus Surgery, High Throughput Genomics, Phoenix Pharmacologics, and Vaccinex. Approximately $18 million—or about 47 percent of the total—was generated through the sale of one of these companies.47

Of course, this minimal assessment does not provide or even suggest grounds for a causal link between the program and these results.

5.8.5.2
Commercialization Assistance Program

Two cohorts (2004/2005 and 2005/2006) have completed the CAP training program, and results have been very encouraging though not yet definitive. Evaluation data are collected from the companies at the conclusion of the program, and at 6, 12, and 18 months afterwards. These data indicate that firms going through the CAP program are attracting funding, as Table 5-4 illustrates.

NIH has also developed some intermediate metrics that indicate project impact. However, as these metrics are not compared with other groups of companies that have not gone through the CAP program, it is difficult to draw conclusions from them.

Data collected six months after the CAP showed a strong increase in commercialization, and in particular in the conclusion of commercialization agreements, which increased for the 2004/2005 cohort by 87 percent (from 23 at baseline to 43 six months later).

These data are encouraging, and are bolstered by discussions with individual participants that indicate that participants find this program to be of considerable value. Development of a control group of some kind would add considerably to the power of this analysis.
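The percentage figures in this subsection are simple arithmetic on the reported counts. A minimal check, using only the values given in the text and in Table 5-4:

```python
# Verify the CAP outcome arithmetic reported in this section.

# 2004/2005 cohort: commercialization agreements rose from 23 at
# baseline to 43 six months after the CAP.
baseline, followup = 23, 43
pct_increase = 100 * (followup - baseline) / baseline
print(round(pct_increase))  # prints 87, matching the "87 percent" in the text

# Table 5-4: share of each cohort reporting outside investment.
cohorts = {"2004/2005": (114, 24), "2005/2006": (114, 13)}
for name, (enrolled, invested) in cohorts.items():
    print(name, round(100 * invested / enrolled, 1))  # 21.1 and 11.4 percent
```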

5.9
EVALUATION AND ASSESSMENT

Traditionally, NIH has not conducted outcomes assessment on its SBIR and STTR programs, or indeed on its other programs. More recently, the NIH SBIR/STTR Program Office has initiated a number of activities aimed at infusing more data into the operation of the program. Most notably, in 2003 NIH followed on from its agreement to fund the NRC study with a separate NIH Survey of Phase II

46. NIH Office of Extramural Programs.

47. OER would not disclose the exact details of these outcomes, citing confidentiality restrictions.

TABLE 5-4 Funding for CAP Firms

Year                            2004/2005      2005/2006
Number of companies in CAP      114            114
Number receiving investments    24             13
Percent of total                21.1           11.4
Total investment to date        $22,414,078    $45,636,520

SOURCE: National Institutes of Health.

recipients. Using somewhat different methodologies from the NRC Phase II Survey, with concomitantly different strengths and weaknesses, the NIH Survey broke important ground, and provided results that have been used throughout this analysis.

Discussions with agency staff and responses to the NRC Program Manager Survey indicate widespread views that the program does not have the resources needed to develop an evaluation and assessment capability sufficient for a program of this size and scope. Phil Daschner, from NCI, for example, noted that “More resources should be available to program staff that track and evaluate objective benchmarks for past institutional and investigator productivity.” In its institutional response to the NRC survey, NCI observed that “we still do not have reliable tools to capture in an ongoing way success stories from our grantees. It is a considerable undertaking to get evaluation funds and go through the OMB process. Methods have been identified to capture outcomes, but funds are not available to support a sustainable effort to track SBIR/STTR outcomes. This is a critical and long-term need.”48

FIGURE 5-11 Aggregate number of partnership- and deal-related activities by category.

SOURCE: National Institutes of Health.

More specifically, as NIDA noted, “More time should be spent following up on grants near their end and after they no longer received NIH funding. We know little about Phase III and whether or not it actually occurs. Most time is spent funding the grant and administering it, but little or no time is spent on follow-up and evaluation.”49

Currently, the NIH SBIR/STTR Program Office must seek one-time funding for any significant assessment activity; this largely precludes the longitudinal approaches needed for effective evaluation and assessment.

48. NRC Program Manager Survey, April 2006.

49. Ibid.

Suggested Citation:"5 Program Management at NIH." National Research Council. 2009. An Assessment of the SBIR Program at the National Institutes of Health. Washington, DC: The National Academies Press. doi: 10.17226/11964.
×
Page 130
Suggested Citation:"5 Program Management at NIH." National Research Council. 2009. An Assessment of the SBIR Program at the National Institutes of Health. Washington, DC: The National Academies Press. doi: 10.17226/11964.
×
Page 131
Suggested Citation:"5 Program Management at NIH." National Research Council. 2009. An Assessment of the SBIR Program at the National Institutes of Health. Washington, DC: The National Academies Press. doi: 10.17226/11964.
×
Page 132
Suggested Citation:"5 Program Management at NIH." National Research Council. 2009. An Assessment of the SBIR Program at the National Institutes of Health. Washington, DC: The National Academies Press. doi: 10.17226/11964.
×
Page 133
Suggested Citation:"5 Program Management at NIH." National Research Council. 2009. An Assessment of the SBIR Program at the National Institutes of Health. Washington, DC: The National Academies Press. doi: 10.17226/11964.
The SBIR program allocates 2.5 percent of 11 federal agencies' extramural R&D budgets to fund R&D projects by small businesses, providing approximately $2 billion annually in competitive awards. At the request of Congress, the National Academies conducted a comprehensive study of how the SBIR program has stimulated technological innovation and used small businesses to meet federal research and development needs.

Drawing substantially on new data collection, this book examines the SBIR program at the National Institutes of Health and makes recommendations for improvements. Separate reports will assess the SBIR program at DOD, NSF, DOE, and NASA, respectively, along with a comprehensive report on the entire program.