SBIR—PROGRAM CREATION AND ASSESSMENT
Created in 1982 by the Small Business Innovation Development Act, the Small Business Innovation Research (SBIR) program was designed to stimulate technological innovation among small private-sector businesses while providing the government with cost-effective new technical and scientific solutions to challenging mission problems. SBIR was also designed to help stimulate the U.S. economy by encouraging small businesses to market innovative technologies in the private sector.1
As the SBIR program approached its twentieth year of existence, the U.S.
Congress requested that the National Research Council (NRC) of the National Academies conduct a “comprehensive study of how the SBIR program has stimulated technological innovation and used small businesses to meet federal research and development needs,” and make recommendations on improvements to the program.2 Mandated as a part of SBIR’s renewal in December 2000, the NRC study has assessed the SBIR program as administered at the five federal agencies that together make up 96 percent of SBIR program expenditures. The agencies are, in decreasing order of program size: the Department of Defense (DoD), the National Institutes of Health (NIH), the National Aeronautics and Space Administration (NASA), the Department of Energy (DoE), and the National Science Foundation (NSF). The SBIR program at DoD is the largest of all the SBIR programs. At $943 million in 2005, DoD accounts for over half the program’s funding.
The NRC Committee assessing the SBIR program was not asked to consider whether SBIR should exist—Congress has affirmatively decided this question on three occasions.3 Rather, the Committee was charged with providing an evidence-based assessment of the program’s operations, achievements, and challenges, as well as recommendations to improve the program’s effectiveness.
SBIR PROGRAM STRUCTURE
Eleven federal agencies are currently required to set aside 2.5 percent of their extramural research and development budgets exclusively for SBIR contracts. Each year these agencies identify various R&D topics, representing scientific and technical problems requiring innovative solutions, for pursuit by small businesses under the SBIR program. These topics are bundled together into individual agency “solicitations”—publicly announced requests for SBIR proposals from interested and qualifying small businesses. A small business can identify an appropriate topic it wants to pursue from these solicitations and, in response, propose a project for an SBIR grant, a process now immensely facilitated by the Internet. The required format for submitting a proposal differs from agency to agency. Proposal selection also varies, though competitive peer review by experts in the field is typical. Each agency then selects the proposals that best meet its program selection criteria and awards contracts or grants to the proposing small businesses. Since the SBIR program’s inception at DoD, all SBIR awards have been contracts awarded on a competitive basis.
As conceived in the 1982 Act, SBIR’s grant-making process is structured in three phases at all agencies:
Phase I grants essentially fund feasibility studies in which award winners undertake a limited amount of research aimed at establishing an idea’s scientific and commercial promise. Today, the legislation anticipates Phase I grants as high as $100,000.4
Phase II grants are larger—the legislated amount is $750,000—and fund more extensive R&D to further develop the scientific and commercial promise of research ideas.
Phase III. During this phase, companies do not receive additional funding from the SBIR program. Instead, grant recipients are expected to obtain additional funds from a procurement program (if available) at the agency that made the award, from private investors, or from other sources of capital. The objective of this phase is to move the technology from the prototype stage to the marketplace.
The Phase III Challenge
Obtaining Phase III support is often the most difficult challenge for new firms to overcome. In practice, agencies have developed different approaches to facilitate SBIR grantees’ transition to commercial viability; not least among them are additional SBIR grants.5 The multiple approaches taken to address the Phase III challenge are described in Chapter 5. The Department of Defense has shown considerable initiative in its efforts to enhance commercialization and capture returns for the program. Unlike some major agency participants in the program (e.g., NIH and NSF), DoD seeks to acquire and use many of the technologies and products developed through the SBIR program.
Previous NRC research has shown that firms have different objectives in applying to the program. Some want to demonstrate the potential of promising research but may not seek to commercialize it themselves. Others seek to fulfill agency research requirements more cost-effectively through the SBIR program than through the traditional procurement process. Still others seek a certification of quality (and the investments that can come from such recognition) as they push science-based products towards commercialization.6
The SBIR program approached reauthorization in 1992 amidst continued concerns about the U.S. economy’s capacity to commercialize inventions. Finding that “U.S. technological performance is challenged less in the creation of new technologies than in their commercialization and adoption,” the National Academy of Sciences at the time recommended an increase in SBIR funding as a means to improve the economy’s ability to adopt and commercialize new technologies.7
Following this report, the Small Business Research and Development Enhancement Act of 1992 (P.L. 102-564), which reauthorized the SBIR program until September 30, 2000, doubled the set-aside rate to 2.5 percent.8 This increase in the percentage of R&D funds allocated to the program was accompanied by a stronger emphasis on the commercialization of SBIR-funded technologies.9 Legislative language explicitly highlighted commercial potential as a criterion for awarding SBIR grants. For Phase I awards, Congress directed program administrators to assess whether projects have “commercial potential,” in addition to scientific and technical merit, when evaluating SBIR applications.
The 1992 legislation mandated that program administrators consider the existence of second-phase funding commitments from the private sector or other non-SBIR sources when judging Phase II applications. Evidence of third-phase follow-on commitments, along with other indicators of commercial potential, was also to be sought. Moreover, the 1992 reauthorization directed that a small business’s record of commercialization be taken into account when evaluating its Phase II application.10
The Small Business Reauthorization Act of 2000 (P.L. 106-554) extended SBIR until September 30, 2008. It called for a two-phase assessment by the
National Research Council of the broader impacts of the program.11 The goals of the SBIR program, as set out in the 1982 legislation, are: “(1) to stimulate technological innovation; (2) to use small business to meet federal research and development needs; (3) to foster and encourage participation by minority and disadvantaged persons in technological innovation; and (4) to increase private sector commercialization of innovations derived from federal research and development.”
STRUCTURE OF THE NRC STUDY
This NRC assessment of SBIR has been conducted in several ways. In an exceptional step, at the request of the agencies, a formal research methodology was developed by the NRC. This methodology was then reviewed and approved by an independent National Academies panel of experts.12 As the research began, information about the program was also gathered through interviews with SBIR program administrators and during two major conferences where SBIR officials were invited to describe program operations, challenges, and accomplishments.13 These conferences highlighted the important differences in the goals and practices of the SBIR program at each agency. The conferences also explored the challenges inherent in assessing such a diverse range of program objectives and practices, as well as the limits of using common metrics across agencies with significantly different missions and objectives.
Implementing the approved research methodology, the NRC Committee deployed multiple survey instruments and its researchers conducted a large number of case studies that captured a wide range of SBIR firms. The Committee then evaluated the results and developed both agency-specific and overall findings and recommendations for improving the effectiveness of the SBIR program at each agency. This report includes a complete assessment of the operations and achievements of the SBIR program at DoD and makes recommendations as to how it might be further improved.
The current assessment is congruent with the Government Performance and Results Act (GPRA) of 1993: <http://govinfo.library.unt.edu/npr/library/misc/s20.html>. As characterized by the GAO, GPRA seeks to shift the focus of government decision making and accountability away from a preoccupation with the activities that are undertaken—such as grants dispensed or inspections made—to a focus on the results of those activities. See <http://www.gao.gov/new.items/gpra/gpra.htm>.
The SBIR methodology report is available on the Web. National Research Council, An Assessment of the Small Business Innovation Research Program—Project Methodology, Washington, DC: The National Academies Press, 2004, accessed at <http://books.nap.edu/catalog.php?record_id=11097#toc>.
The opening conference on October 24, 2002, examined the program’s diversity and assessment challenges. For a published report of this conference, see National Research Council, SBIR: Program Diversity and Assessment Challenges, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2004. The second conference, held on March 28, 2003, was titled “Identifying Best Practice.” The conference provided a forum for the SBIR Program Managers from each of the five agencies in the study’s purview to describe their administrative innovations and best practices.
SBIR ASSESSMENT CHALLENGES
Program Diversity and Flexibility
At its outset, the NRC’s SBIR study identified a series of assessment challenges that must be addressed. As the October 2002 conference made clear, the administrative flexibility found in the SBIR program makes cross-agency assessments difficult. Although each agency’s SBIR program shares the common three-phase structure, the SBIR concept is interpreted uniquely at each agency. At DoD, the program is spread across the three services and seven agencies with widely different missions, ranging from missile defense to Navy submarines to Army support for special forces to the special needs of DARPA.
This flexibility is a positive attribute in that it permits each agency to adapt its SBIR program to the agency’s particular mission, scale, and working culture. For example, NSF operates its SBIR program differently than DoD because “research” is often coupled with procurement of goods and services at DoD but normally not at NSF. Programmatic diversity means that each agency’s SBIR activities must be understood in terms of their separate missions and operating procedures. While commendable in itself, this diversity of objectives, procedures, mechanisms, and management makes an assessment of the program as a whole more challenging.
Nonlinearity of Innovation
A second challenge concerns the linear process of commercialization implied by the design of SBIR’s three-phase structure.14 In the linear model, illustrated in Figure 1-1, innovation begins with basic research supplying a steady stream of new ideas. From among these ideas, those that show technical feasibility become innovations. Such innovations, when further developed by firms, can become marketable products driving economic growth.
As NSF’s Joseph Bordogna observed at the launch conference, innovation almost never takes place through a protracted linear progression from research to development to market. Research and development drives technological innovation, which, in turn, opens up new frontiers in R&D. True innovation, Bordogna noted, can spur the search for new knowledge and create the context in which the next generation of research identifies new frontiers. This nonlinearity, illustrated in Figure 1-2, underscores the challenge of assessing the impact of the SBIR
program’s individual awards. Inputs do not match up with outputs according to a simple function.15
Measuring Outputs and Outcomes
A third assessment challenge relates to the measurement of outputs and outcomes. Program realities can and often do complicate the task of data gathering. In some cases, for example, SBIR recipients receive a Phase I award from one agency and a Phase II award from another. In other cases, multiple SBIR awards may have been used to help a particular technology become sufficiently mature to reach the market. Also complicating matters is the possibility that, for any particular grantee, an SBIR award may be only one among several federal and nonfederal sources of funding. Causality can thus be difficult, if not impossible, to establish.
The task of measuring outcomes is also made harder because companies that have garnered SBIR awards can merge, fail, or change their names before a product reaches the market. In addition, principal investigators or other key individuals can change firms, carrying their knowledge of an SBIR project with them. A technology developed using SBIR funds may eventually achieve commercial success at an entirely different company than the one that received the initial SBIR award.
Gauging Commercial Success
Complications plague even the apparently straightforward task of assessing commercial success. For example, research enabled by a particular SBIR award may take on commercial relevance in new, unanticipated contexts. At the launch conference, Duncan Moore, former Associate Director of Technology at the White House Office of Science and Technology Policy (OSTP), cited the case of SBIR-funded research in gradient index optics that was initially considered a commercial failure when an anticipated market for its application did not emerge. Years later, however, products derived from the research turned out to be a major commercial success.16 Today’s apparent dead end can sometimes lead to a major achievement tomorrow, while others are, indeed, dead ends. Yet even technological dead ends have their value, especially if they can be identified at the low cost associated with an SBIR award.
Gauging commercialization is also difficult when the product in question is destined for public procurement. The challenge is to develop a satisfactory measure of how useful an SBIR-funded innovation has been to an agency mission. A related challenge is determining how central (or even useful) SBIR awards have proved to be in developing a particular technology or product. Often, multiple SBIR awards and other funding sources contribute to the development of a product or process for DoD. In some cases, the Phase I award can meet the agency’s need—completing the research with no further action required. In other cases, Phase II awards, supplemental funding, and substantial management and financial resources are required for “success.”
Measurement challenges are substantial. For example, one way of measuring commercialization success is to count product sales. Another is to focus on the products developed using SBIR funds that are procured by DoD. In practice, however, large procurements from major suppliers are typically easier to track than products from small suppliers such as SBIR firms. In some cases, successful Phase II awards are just that—they meet the agency need and no further commercialization takes place. In still other cases, substantial commercialization occurs and then ceases as a promising firm or technology is acquired by a defense supplier.
Moreover, successful development of a technology or product does not always translate into successful “uptake” by the procuring agency. Often, the absence of procurement may have little to do with the product’s quality or the potential contribution of SBIR. Small companies, especially new entrants to the program, entail greater risk for program officers. Perceived uncertainties about reliability, timeliness of supply, and risks of program delays all militate against acquisition of successful technologies from new, unproven firms.
Understanding and Anticipating Failure
Understanding failure is equally challenging. By its very nature, an early-stage program such as SBIR should anticipate a significant failure rate. The causes of failure are many. The most straightforward, of course, is technical failure, where the research objectives of the award are not achieved. In some cases, the project can be technically successful but a commercial failure. This can occur when a procuring agency changes its mission objectives and hence its procurement priorities. NASA’s new Mars Mission is one example of a mission shift that may result in the cancellation of programs involving SBIR awards to make room for new agency priorities. Cancelled weapons system programs at the Department of Defense can have similar effects.
Technologies procured through SBIR may also fail in the transition to acquisition. Some technology developments by small businesses do not survive the long lead times created by complex testing and certification procedures required by the Department of Defense. Indeed, small firms encounter considerable difficulty in surmounting the long lead times, high costs, and complex regulations that characterize defense acquisition. In addition to complex federal acquisition procedures, there are strong disincentives, noted above, for high-profile projects to adopt untried technologies. Technology transfer in commercial markets can be equally difficult. A failure to transfer to commercial markets can occur even when a technology is technically successful if the market is smaller than anticipated, competing technologies emerge or are more competitive than expected, or the product is not adequately marketed. Understanding and accepting the varied sources of project failure in the high-risk, high-reward environment of cutting-edge R&D is a challenge for analysts and policy makers alike.
Evaluating SBIR: “Compared to What?”
This raises the issue concerning the standard by which SBIR programs should be evaluated. An assessment of SBIR must take into account the expected distribution of successes and failures in early-stage finance. As a point of comparison, Gail Cassell, Vice President for Scientific Affairs at Eli Lilly, has noted
that only one in ten innovative products in the biotechnology industry will turn out to be a commercial success.17 Similarly, venture capital funds often achieve considerable commercial success on only two or three out of twenty or more investments.18
In short, commercial success tends to be concentrated. Yet, commercial success is not the only metric of the program. At the Defense Department, SBIR can and does provide a variety of valuable services and products that do not achieve widespread commercial success, even if they do have sales or licensing revenue.
In setting metrics for SBIR projects, therefore, it is important to have a realistic expectation of the success rate for competitive awards to small firms investing in promising but unproven technologies. Similarly, it is important to have some understanding of what can reasonably be expected—that is, what constitutes “success” for an SBIR award—and some understanding of the constraints and opportunities successful SBIR awardees face in bringing new products to market. This is especially relevant in the case of a constrained, regulation-driven market such as the defense procurement market. From the management perspective, the rate of success also raises the question of appropriate expectations and desired levels of risk taking. A portfolio that always succeeds would not be pushing the technology envelope. A very high rate of “success” would thus paradoxically suggest an inappropriate use of the program. Even when technical success is achieved, as noted above, it does not automatically translate into commercial success, for a variety of reasons related to the defense mission and to procurement procedures. Understanding the nature of success and the appropriate benchmarks for a program with this focus is therefore important to understanding the SBIR program and the approach of this study.
SBIR ASSESSMENT RESULTS
Drawing on interviews, multiple survey instruments, and case studies, and overcoming many of the research challenges identified above, the NRC Committee has developed a number of findings and practical recommendations for improving the effectiveness of the SBIR program at the Department of Defense.
The Committee found that the SBIR program at DoD is, in general, meeting the legislative and mission-related objectives of the program. The program is contributing directly to enhanced capabilities for the Department of Defense and the needs of those charged with defending the country.
Further, the Committee found that the DoD program also provides substantial benefits for small business participants in terms of market access, funding, and recognition. The program supports a diverse array of small businesses contributing to the vitality of the defense industrial base while providing greater competition and new options and opportunities for DoD managers. In addition, the Committee noted that the DoD SBIR program is generating significant intellectual capital, contributing to new scientific and technological knowledge, and generating numerous publications and patents.
The Committee’s recommended improvements to the program have been designed to enable the DoD SBIR managers to address the program’s congressional goals more efficiently and effectively. These include further work to improve the Phase III transition by (among other approaches) changing incentives faced by program managers so that they are motivated to make better use of the SBIR program. The Committee also recommends that additional funding should be provided for program management and assessment in order to encourage and support the development of an innovative and results-oriented SBIR program. The Committee’s complete findings and recommendations are listed in Chapter 2.
Chapter 3 provides a comprehensive overview of the distribution of SBIR awards by DoD, providing a basis (as drawn out in Chapter 4) for understanding program outcomes. Chapter 5 describes the Phase III challenge of commercialization at DoD. Chapter 6 describes the diversity of management structures, as well as current practices and recent reforms, found among the different services and agencies that fund SBIR programs at DoD. Together, these chapters provide the most detailed and comprehensive picture to date of the SBIR program at the Department of Defense.