Suggested Citation:"1 Introduction." National Research Council. 2009. An Assessment of the SBIR Program at the National Institutes of Health. Washington, DC: The National Academies Press. doi: 10.17226/11964.

1
Introduction

1.1
SMALL BUSINESS INNOVATION RESEARCH PROGRAM CREATION AND ASSESSMENT

Created in 1982 by the Small Business Innovation Development Act, the Small Business Innovation Research (SBIR) program was designed to stimulate technological innovation among small private-sector businesses while providing the government cost-effective new technical and scientific solutions to challenging mission problems. SBIR was also designed to help to stimulate the U.S. economy by encouraging small businesses to market innovative technologies in the private sector.1

As the SBIR program approached its twentieth year of existence, the U.S. Congress requested that the National Research Council (NRC) of the National Academies conduct a “comprehensive study of how the SBIR program has stimulated technological innovation and used small businesses to meet Federal research and development needs,” and make recommendations on improvements to the program.2 Mandated as a part of SBIR’s renewal in 2000, the NRC study has assessed the SBIR program as administered at the five federal agencies that together make up 96 percent of SBIR program expenditures. The agencies are, in decreasing order of program size: the Department of Defense (DoD), the National Institutes of Health (NIH), the National Aeronautics and Space Administration (NASA), the Department of Energy (DoE), and the National Science Foundation (NSF).

1

The SBIR legislation drew from a growing body of evidence, starting in the late 1970s and accelerating in the 1980s, which indicated that small businesses were assuming an increasingly important role in both innovation and job creation. This evidence gained new credibility with the empirical analysis by Zoltan Acs and David Audretsch of the U.S. Small Business Innovation Database, which confirmed the increased importance of small firms in generating technological innovations and their growing contribution to the U.S. economy. See Zoltan Acs and David Audretsch, Innovation and Small Firms, Cambridge, MA: The MIT Press, 1990.

2

See Public Law 106-554, Appendix I—H.R. 5667, Section 108.

The NRC Committee assessing the SBIR program was not asked to consider whether SBIR should exist—Congress has affirmatively decided that question on three occasions.3 Rather, the Committee was charged with providing assessment-based findings to improve public understanding of the program, as well as recommendations to improve the program’s effectiveness.

1.2
SBIR PROGRAM STRUCTURE

Eleven federal agencies are currently required to set aside 2.5 percent of their extramural research and development budget exclusively for SBIR awards. Each year these agencies identify various R&D topics, representing scientific and technical problems requiring innovative solutions, for pursuit by small businesses under the SBIR program. These topics are bundled together into individual agency “solicitations”—publicly announced requests for SBIR proposals from interested small businesses. A small business can identify an appropriate topic it wants to pursue from these solicitations and, in response, propose a project for an SBIR award. The required format for submitting a proposal differs from agency to agency. Proposal selection also varies, though peer review of proposals on a competitive basis by experts in the field is typical. Each agency then selects the proposals that best meet its program selection criteria and awards contracts or grants to the proposing small businesses.

As conceived in the 1982 Act, SBIR’s award-making process is structured in three phases at all agencies:

  • Phase I awards essentially fund feasibility studies in which award winners undertake a limited amount of research aimed at establishing an idea’s scientific and commercial promise. Today, the legislation anticipates Phase I awards as high as $100,000.4

  • Phase II awards are larger—typically about $750,000—and fund more extensive R&D to further develop the scientific and commercial promise of research ideas.

  • Phase III. During this phase, companies do not receive additional funding from the SBIR program. Instead, award recipients should be obtaining additional funds from a procurement program at the agency that made the award, from private investors, or from the capital markets. The objective of this phase is to move the technology from the prototype stage to the marketplace.

3

These are the 1982 Small Business Innovation Development Act and the subsequent multi-year reauthorizations of the SBIR program in 1992 and 2000.

4

With the agreement of the Small Business Administration, which plays an oversight role for the program, this amount can be substantially higher in certain circumstances, e.g., drug development at NIH, and is often lower with smaller SBIR programs, e.g., EPA or the Department of Agriculture.

Obtaining Phase III support is often the most difficult challenge for new firms to overcome. In practice, agencies have developed different approaches to facilitate SBIR grantees’ transition to commercial viability; not least among them are additional SBIR awards.

Previous NRC research has shown that firms have different objectives in applying to the program. Some want to demonstrate the potential of promising research but may not seek to commercialize it themselves. Others think they can fulfill agency research requirements more cost-effectively through the SBIR program than through the traditional procurement process. Still others seek a certification of quality (and the investments that can come from such recognition) as they push science-based products towards commercialization.5

1.3
SBIR REAUTHORIZATIONS

The SBIR program approached reauthorization in 1992 amidst continued concerns about the U.S. economy’s capacity to commercialize inventions. Finding that “U.S. technological performance is challenged less in the creation of new technologies than in their commercialization and adoption,” the National Academy of Sciences at the time recommended an increase in SBIR funding as a means to improve the economy’s ability to adopt and commercialize new technologies.6

Following this report, the Small Business Research and Development Enhancement Act (P.L. 102-564), which reauthorized the SBIR program until September 30, 2000, doubled the set-aside rate to 2.5 percent.7 This increase in the percentage of R&D funds allocated to the program was accompanied by a stronger emphasis on encouraging the commercialization of SBIR-funded technologies.8 Legislative language explicitly highlighted commercial potential as a criterion for making awards. For Phase I awards, Congress directed program administrators to assess whether projects have “commercial potential,” in addition to scientific and technical merit, when evaluating SBIR applications.

5

See Reid Cramer, “Patterns of Firm Participation in the Small Business Innovation Research Program in Southwestern and Mountain States,” in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, Charles W. Wessner, ed., Washington, DC: National Academy Press, 2000.

6

See National Research Council, The Government Role in Civilian Technology: Building a New Alliance, Washington, DC: National Academy Press, 1992, p. 29.

7

For FY2003, this resulted in a program budget of approximately $1.6 billion across all federal agencies, with the Department of Defense having the largest SBIR program at $834 million, followed by the National Institutes of Health at $525 million. The DoD SBIR program is made up of 10 participating components: Army, Navy, Air Force, Missile Defense Agency (MDA), Defense Advanced Research Projects Agency (DARPA), Chemical Biological Defense (CBD), Special Operations Command (SOCOM), Defense Threat Reduction Agency (DTRA), National Imagery and Mapping Agency (NIMA), and the Office of the Secretary of Defense (OSD). NIH counts 23 separate institutes and agencies making SBIR awards, many with multiple programs.

The 1992 legislation mandated that program administrators consider the existence of second-phase funding commitments from the private sector or other non-SBIR sources when judging Phase II applications. Evidence of third-phase follow-on commitments, along with other indicators of commercial potential, was also to be sought. Moreover, the 1992 reauthorization directed that a small business’ record of commercialization be taken into account when evaluating its Phase II application.9

The Small Business Reauthorization Act of 2000 (P.L. 106-554) extended SBIR until September 30, 2008. It called for this assessment by the National Research Council of the broader impacts of the program, including those on employment, health, national security, and national competitiveness.10

1.4
STRUCTURE OF THE NRC STUDY

This NRC assessment of SBIR has been conducted in two phases. In the first phase, at the request of the agencies, a research methodology was developed by the NRC. This methodology was then reviewed and approved by an independent National Academies panel of experts.11 Information about the program was also gathered through interviews with SBIR program administrators and during two major conferences where SBIR officials were invited to describe program

8

See Robert Archibald and David Finifter, “Evaluation of the Department of Defense Small Business Innovation Research Program and the Fast Track Initiative: A Balanced Approach,” in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit., pp. 211-250.

9

A GAO report had found that agencies had not adopted a uniform method for weighing commercial potential in SBIR applications. See U.S. General Accounting Office, Federal Research: Evaluations of Small Business Innovation Research Can Be Strengthened, GAO/RCED-99-114, Washington, DC: U.S. General Accounting Office, 1999.

10

The current assessment is congruent with the Government Performance and Results Act (GPRA) of 1993, accessed at: <http://govinfo.library.unt.edu/npr/library/misc/s20.html>. As characterized by the GAO, GPRA seeks to shift the focus of government decisionmaking and accountability away from a preoccupation with the activities that are undertaken—such as grants dispensed or inspections made—to a focus on the results of those activities. See <http://www.gao.gov/new.items/gpra/gpra.htm>.

11

National Research Council, An Assessment of the Small Business Innovation Research Program: Project Methodology, Washington, DC: The National Academies Press, 2004. The methodology report is available on the Web. Access at: <http://www7.nationalacademies.org/sbir/SBIR_Methodology_Report.pdf>.


operations, challenges, and accomplishments.12 These conferences highlighted the important differences in the goals, practices, and evaluation methods of each agency’s SBIR program. The conferences also explored the challenges of assessing such a diverse range of program objectives and practices using common metrics.

The second phase of the NRC study implemented the approved research methodology. The Committee deployed multiple survey instruments and its researchers conducted case studies of a wide range of SBIR firms. The Committee then evaluated the results and developed both agency-specific and overall findings and recommendations for improving the effectiveness of the SBIR program. The final report includes complete assessments for each of the five agencies and an overview of the program as a whole.

1.5
SBIR ASSESSMENT CHALLENGES

At its outset, the NRC’s SBIR study identified a series of assessment challenges that had to be addressed. As discussed at the October 2002 conference that launched the study, the administrative flexibility of the SBIR program complicates cross-agency assessment. Although each agency’s SBIR program shares the common three-phase structure, the SBIR concept is interpreted uniquely at each agency. This flexibility is a positive attribute in that it permits each agency to adapt its SBIR program to its particular mission, scale, and working culture. For example, NSF operates its SBIR program differently than DoD because “research” is often coupled with procurement of goods and services at DoD but rarely at NSF. Programmatic diversity means that each agency’s SBIR activities must be understood in terms of their separate missions and operating procedures. This commendable diversity makes an assessment of the program as a whole more challenging.

A second challenge concerns the linear process of commercialization implied by the design of SBIR’s three-phase structure.13 In the linear model, illustrated in Figure 1-1, innovation begins with basic research supplying a steady stream of fresh and new ideas. Among these ideas, those that show technical feasibility become innovations. Such innovations, when further developed by firms, become marketable products driving economic growth.

FIGURE 1-1 The linear model of innovation.

As NSF’s Joseph Bordogna observed at the study’s initial conference, innovation almost never takes place through a protracted linear progression from research to development to market. Research and development drives technological innovation, which, in turn, opens up new frontiers in R&D. True innovation, Bordogna noted, can spur the search for new knowledge and create the context in which the next generation of research identifies new frontiers. This nonlinearity, illustrated in Figure 1-2, makes it difficult to rate the efficiency of the SBIR program. Inputs do not match up with outputs according to a simple function.

12

The opening conference on October 24, 2002, examined the program’s diversity and assessment challenges. For a published report of this conference, see National Research Council, SBIR: Program Diversity and Assessment Challenges, Charles W. Wessner, ed. Washington, DC: The National Academies Press, 2004. The second conference, held on March 28, 2003, was titled, “Identifying Best Practice.” The conference provided a forum for the SBIR Program Managers from each of the five agencies in the study’s purview to describe their administrative innovations and best practices.

13

This view was echoed by Duncan Moore: “Innovation does not follow a linear model. It stops and starts.” National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit.

A third assessment challenge relates to the measurement of outputs and outcomes. Program realities can and often do complicate the task of data gathering. In some cases, for example, SBIR recipients receive a Phase I award from one agency and a Phase II award from another. In other cases, multiple SBIR awards may have been used to help a particular technology become sufficiently mature to reach the market. Also complicating matters is the possibility that for any particular grantee, an SBIR award may be only one among other federal and nonfederal sources of funding. Causality can thus be difficult, if not impossible, to establish. The task of measuring outcomes is made harder because companies that have garnered SBIR awards can also merge, fail, or change their name before a product reaches the market. In addition, principal investigators or other key individuals can change firms, carrying their knowledge of an SBIR project with them. A technology developed using SBIR funds may eventually achieve commercial success at an entirely different company than the one that received the initial SBIR award.

FIGURE 1-2 A feedback model of innovation.

Complications plague even the apparently straightforward task of assessing commercial success. For example, research enabled by a particular SBIR award may take on commercial relevance in new, unanticipated contexts. At the launch conference, Duncan Moore, former Associate Director of Technology at the White House Office of Science and Technology Policy (OSTP), cited the case of SBIR-funded research in gradient index optics that was initially considered a commercial failure when an anticipated market for its application did not emerge. Years later, however, products derived from the research turned out to be a major commercial success.14 Today’s apparent dead end can lead to a major achievement tomorrow. Lacking clairvoyance, analysts cannot anticipate or measure such potential SBIR benefits.

Gauging commercialization is also difficult when the product in question is destined for public procurement. The challenge is to develop a satisfactory measure of how useful an SBIR-funded innovation has been to an agency mission. A related challenge is determining how central (or even useful) SBIR awards have proved in developing a particular technology or product. In some cases, the Phase I award can meet the agency’s need—completing the research with no further action required. In other cases, surrogate measures are often required. For example, one way of measuring commercialization success is to count the products developed using SBIR funds that are procured by an agency such as DoD. In practice, however, large procurements from major suppliers are typically easier to track than products from small suppliers such as SBIR firms. Moreover, successful development of a technology or product does not always translate into successful “uptake” by the procuring agency. Often, the absence of procurement may have little to do with the product’s quality or the potential contribution of SBIR.

Understanding failure is equally challenging. By its very nature, an early-stage program such as SBIR should anticipate a high failure rate. The causes of failure are many. The most straightforward, of course, is technical failure, where the research objectives of the award are not achieved. In some cases, the project can be technically successful but a commercial failure. This can occur when a procuring agency changes its mission objectives and hence its procurement priorities. NASA’s new Mars Mission is one example of a mission shift that may result in the cancellation of programs involving SBIR awards to make room for new agency priorities. Cancelled weapons system programs at the Department of Defense can have similar effects. Technologies procured through SBIR may also fail in the transition to acquisition. Some technology developments by small businesses do not survive the long lead times created by complex testing and certification procedures required by the Department of Defense. Indeed, small firms encounter considerable difficulty in penetrating the “procurement thicket” that characterizes defense acquisition.15 In addition to complex federal acquisition procedures, there are strong disincentives for high-profile projects to adopt untried technologies. Technology transfer in commercial markets can be equally difficult. A failure to transfer to commercial markets can occur even when a technology is technically successful if the market is smaller than anticipated, competing technologies emerge or prove more competitive than expected, or the product is not adequately marketed. Understanding and accepting the varied sources of project failure in the high-risk, high-reward environment of cutting-edge R&D is a challenge for analysts and policy makers alike.

14

Duncan Moore, “Turning Failure into Success,” in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., p. 94.

This raises the question of the standard against which SBIR programs should be evaluated. An assessment of SBIR must take into account the expected distribution of successes and failures in early-stage finance. As a point of comparison, Gail Cassell, Vice President for Scientific Affairs at Eli Lilly, has noted that only 1 in 10 innovative products in the biotechnology industry will turn out to be a commercial success.16 Similarly, venture capital funds often achieve considerable commercial success on only two or three out of twenty or more investments.17

In setting metrics for SBIR projects, therefore, it is important to have a realistic expectation of the success rate for competitive awards to small firms investing in promising but unproven technologies. Similarly, it is important to have some understanding of what can be reasonably expected—that is, what constitutes “success” for an SBIR award, and some understanding of the constraints and opportunities successful SBIR awardees face in bringing new products to market.

15

For a description of the challenges small businesses face in defense procurement, the subject of a June 14, 2005, NRC conference and one element of the congressionally requested assessment of SBIR, see National Research Council, SBIR and the Phase III Challenge of Commercialization, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2007. Relatedly, see remarks by Kenneth Flamm on procurement barriers, including contracting overhead and small firm disadvantages in lobbying in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., pp. 63-67.

16

Gail Cassell, “Setting Realistic Expectations for Success,” in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., p. 86.

17

See John H. Cochrane, “The Risk and Return of Venture Capital,” Journal of Financial Economics, 75(1) 2005:3-52. Drawing on the VentureOne database, Cochrane plots a histogram of net venture capital returns on investments that “shows an extraordinary skewness of returns. Most returns are modest, but there is a long right tail of extraordinary good returns. 15% of the firms that go public or are acquired give a return greater than 1,000%! It is also interesting how many modest returns there are. About 15% of returns are less than 0, and 35% are less than 100%. An IPO or acquisition is not a guarantee of a huge return. In fact, the modal or ‘most probable’ outcome is about a 25% return.” See also Paul A. Gompers and Josh Lerner, “Risk and Reward in Private Equity Investments: The Challenge of Performance Assessment,” Journal of Private Equity, 1 (Winter 1997):5-12. Steven D. Carden and Olive Darragh, “A Halo for Angel Investors,” The McKinsey Quarterly, 1, 2004, also show a similar skew in the distribution of returns for venture capital portfolios.

Suggested Citation:"1 Introduction." National Research Council. 2009. An Assessment of the SBIR Program at the National Institutes of Health. Washington, DC: The National Academies Press. doi: 10.17226/11964.
×

From the management perspective, the rate of success also raises the question of appropriate expectations and desired levels of risk-taking. A portfolio that always succeeds would not be investing in high-risk, high-payoff projects that push the technology envelope. A very high rate of “success” would thus paradoxically suggest an inappropriate use of the program. Understanding the nature of success and the appropriate benchmarks for a program with this focus is therefore important to understanding the SBIR program and the approach of this study.

1.6
STRUCTURE OF THIS REPORT

This report sets out the Committee’s assessment of the SBIR program at the National Institutes of Health. The Committee’s detailed findings and recommendations are presented in the next chapter. The Committee finds that the NIH SBIR program largely meets its legislative objectives and makes recommendations to improve program outcomes. Chapter 3 reviews awards made by NIH. It analyzes data supplied by NIH, reflecting on both the advantages and disadvantages of NIH data gathering methods. Chapter 4 looks at the outcomes of the NIH SBIR program, including commercial sales and employment effects. Chapter 5 examines how the SBIR program at NIH is managed, including an explanation of the NIH award cycle, outreach efforts to attract the best applicants, and initiatives to support the commercialization of SBIR-funded technologies. Appendix A presents program data collected by NIH, DoD, and the NRC. Appendixes B and C provide the template and results of the NRC Firm Survey and surveys of SBIR Phase I and Phase II projects. Appendix D presents illustrative case studies of firms participating in the NIH SBIR program. Finally, Appendix E provides a reference bibliography.
