1
Introduction

Small businesses are a major driver of high-technology innovation and economic growth in the United States, generating significant employment, new markets, and high-growth industries.1 In this era of globalization, optimizing the ability of innovative small businesses to develop and commercialize new products is essential for U.S. competitiveness and national security. Developing better incentives to spur innovative ideas, technologies, and products—and ultimately to bring them to market—is thus a central policy challenge.

Created in 1982 through the Small Business Innovation Development Act, the Small Business Innovation Research (SBIR) program is the nation’s largest innovation program. SBIR offers competition-based awards to stimulate technological innovation among small private-sector businesses while providing government agencies new, cost-effective technical and scientific solutions to meet their diverse mission needs. The program’s goals are fourfold: “(1) to stimulate technological innovation; (2) to use small business to meet federal research and development needs; (3) to foster and encourage participation by minority and disadvantaged

1 A growing body of evidence, starting in the late 1970s and accelerating in the 1980s, indicated that small businesses were assuming an increasingly important role in both innovation and job creation. See, for example, J. O. Flender and R. S. Morse, The Role of New Technical Enterprise in the U.S. Economy, Cambridge, MA: MIT Development Foundation, 1975, and David L. Birch, “Who Creates Jobs?” The Public Interest, 65:3-14, 1981. Evidence about the role of small businesses in the U.S. economy gained new credibility with the empirical analysis by Zoltan Acs and David Audretsch of the U.S. Small Business Innovation Data Base, which confirmed the increased importance of small firms in generating technological innovations and their growing contribution to the U.S. economy. See Zoltan Acs and David Audretsch, “Innovation in Large and Small Firms: An Empirical Analysis,” The American Economic Review, 78(4):678-690, Sept. 1988. See also Zoltan Acs and David Audretsch, Innovation and Small Firms, Cambridge, MA: The MIT Press, 1991.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




persons in technological innovation; and (4) to increase private sector commercialization derived from federal research and development.”2

A distinguishing feature of SBIR is that it embraces the multiple goals listed above, while maintaining an administrative flexibility that allows very different federal agencies to use the program to address their unique mission needs. SBIR legislation currently requires federal agencies with extramural R&D budgets in excess of $100 million to set aside 2.5 percent of their extramural R&D funds for SBIR. In 2005, the 11 federal agencies administering the SBIR program disbursed over $1.85 billion in innovation awards. Five agencies administer over 96 percent of the program’s funds. They are the Department of Defense (DoD), the Department of Health and Human Services (particularly the National Institutes of Health [NIH]), the Department of Energy (DoE), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF). (See Figure 1-1.)

As the Small Business Innovation Research (SBIR) program approached its twentieth year of operation, the U.S. Congress asked the National Research Council (NRC) to carry out a “comprehensive study of how the SBIR program has stimulated technological innovation and used small businesses to meet federal research and development needs” and make recommendations on improvements to the program.3 The NRC’s charge is, thus, to assess the operation of the SBIR program and recommend how it can be improved.4

This report provides an overview of the NRC assessment. It is a complement to a set of five separate reports that describe and assess the SBIR programs at the Departments of Defense and Energy, the National Institutes of Health, the National Aeronautics and Space Administration, and the National Science Foundation. The purpose of this introduction is to set out the broader context of the SBIR program.
Section 1.1 provides an overview of the program’s history and legislative reauthorizations. It also contrasts the common structure of the SBIR program with the diverse ways it is administered across the federal government. Section 1.2 describes the important role played by SBIR in the nation’s innovation system, explaining that SBIR has no public or private sector substitute. Section 1.3 then lists the advantages and limitations of the SBIR concept, including benefits and challenges faced by entrepreneurs and agency officials. Section 1.4 summarizes some of the main challenges of the NRC study and opportunities for

2 The Small Business Innovation Development Act (PL 97-219).
3 See U.S. Congress, Public Law 106-554, Appendix I—H.R. 5667, Section 108.
4 At the conference launching the NRC assessment, James Turner, Counsel to the House Science Committee, noted that the study is not expected to question whether the program should exist. “We’re 20 years into the SBIR now,” he said. “It is a proven entity; it’s going to be with us.” He suggested that the appropriate goals for the study would be to look ahead and craft a series of sound suggestions on how to improve the program and to give good advice to Congress on what legislative changes, if any, are necessary. See National Research Council, SBIR: Program Diversity and Assessment Challenges, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2004.
improving SBIR. Finally, Section 1.5 looks at the changing perception of SBIR in the United States and the growing recognition of the SBIR concept around the world as an example of global best practice in innovation policy. The increasing adoption of SBIR-type programs in competitive Asian and European economies underlines the need, here at home, to improve upon and take advantage of this unique American innovation partnership program.

[FIGURE 1-1 Dimensions of the SBIR program in 2005. Total = $1.85 billion: DoD (50.9%), $943 million*; HHS (30.4%), $562 million*; DoE (5.6%), $104 million*; NASA (5.6%), $103 million*; NSF (4.3%), $79 million*; Other (3.3%). NOTE: Figures do not include STTR funds. Asterisks indicate those departments and agencies reviewed by the National Research Council. SOURCE: U.S. Small Business Administration, accessed July 25, 2006.]

1.1 PROGRAM HISTORY AND STRUCTURE

In the 1980s, the country’s slow pace in commercializing new technologies—compared with the global manufacturing and marketing success of Japanese firms in autos, steel, and semiconductors—led to serious concern in the United States about the nation’s ability to compete economically. U.S. industrial competitiveness in the 1980s was frequently cast in terms of American industry’s failure “to translate its research prowess into commercial advantage.”5 The pessimism of some was reinforced by evidence of slowing growth at corporate research laboratories that had been leaders of American innovation in the postwar period and the apparent success of the cooperative model exemplified by some Japanese keiretsu.6

Yet, a growing body of evidence, starting in the late 1970s and accelerating in the 1980s, began to indicate that small businesses were assuming an increasingly important role in both innovation and job creation. David Birch, a pioneer in entrepreneurship and small business research, and others suggested that national policies should promote and build on the competitive strength offered by small businesses.7

Meanwhile, federal commissions from as early as the 1960s had recommended changing the direction of R&D funds toward innovative small businesses.8 These recommendations were unsurprisingly opposed by traditional recipients of government R&D funding.9 Although small businesses were beginning to be recognized by the late 1970s as a potentially fruitful source of innovation, some in government remained wary of funding small firms focused on high-risk technologies with commercial promise.

The concept of early-stage financial support for high-risk technologies with commercial promise was first advanced by Roland Tibbetts at the National Science Foundation. As early as 1976, Mr. Tibbetts advocated that the NSF should increase the share of its funds going to innovative, technology-based small businesses. When NSF adopted this initiative, small firms were enthused and proceeded to lobby other agencies to follow NSF’s lead. When there was no immediate response to these efforts, small businesses took their case to Congress and to higher levels of the Executive branch.10

In response, the White House convened a conference on Small Business

5 David C. Mowery, “America’s Industrial Resurgence (?): An Overview,” in David C. Mowery, ed., U.S. Industry in 2000: Studies in Competitive Performance, Washington, DC: National Academy Press, 1999, p. 1. Mowery examines eleven economic sectors, contrasting the improved performance of many industries in the late 1990s with the apparent decline that was subject to much scrutiny in the 1980s. Among the studies highlighting poor economic performance in the 1980s are Dertouzos et al., Made in America: The MIT Commission on Industrial Productivity, Cambridge, MA: The MIT Press, 1989, and Otto Eckstein, DRI Report on U.S. Manufacturing Industries, New York: McGraw Hill, 1984.
6 Richard Rosenbloom and William Spencer, Engines of Innovation: U.S. Industrial Research at the End of an Era, Boston, MA: Harvard Business Press, 1996.
7 David L. Birch, “Who Creates Jobs?” The Public Interest, op. cit. Birch’s work greatly influenced perceptions of the role of small firms. Over the last 20 years, it has been carefully scrutinized, leading to the discovery of some methodological flaws, namely making dynamic inferences from static comparisons, confusing gross and net job creation, and admitting biases from chosen regression techniques. See S. J. Davis, J. Haltiwanger, and S. Schuh, “Small Business and Job Creation: Dissecting the Myth and Reassessing the Facts,” Working Paper No. 4492, Cambridge, MA: National Bureau of Economic Research, 1993. These methodological fallacies, however, “ha[ve] not had a major influence on the empirically based conclusion that small firms are over-represented in job creation,” according to Per Davidsson. See Per Davidsson, “Methodological Concerns in the Estimation of Job Creation in Different Firm Size Classes,” Working Paper, Jönköping International Business School, 1996.
8 For an overview of the origins and history of the SBIR program, see George Brown and James Turner, “The Federal Role in Small Business Research,” Issues in Science and Technology, Summer 1999, pp. 51-58.
9 See Roland Tibbetts, “The Role of Small Firms in Developing and Commercializing New Scientific Instrumentation: Lessons from the U.S. Small Business Innovation Research Program,” in Equipping Science for the 21st Century, John Irvine, Ben Martin, Dorothy Griffiths, and Roel Gathier, eds., Cheltenham, UK: Edward Elgar Press, 1997. For a summary of some of the critiques of SBIR, see Section 1-3 of this Introduction.
10 Ibid.

in January 1980 that recommended a program for small business innovation research. This recommendation was grounded in:

• Evidence that a declining share of federal R&D was going to small businesses;
• Broader difficulties among innovative small businesses in raising capital in a period of historically high interest rates; and
• Research suggesting that small businesses were fertile sources of job creation.

A widespread political appeal in seeing R&D dollars “spread a little more widely than they were being spread before” complemented these policy rationales. Congress responded, under the Reagan administration, with the passage of the Small Business Innovation Development Act of 1982, which established the SBIR program.11

1.1.1 The SBIR Development Act of 1982

The new SBIR program initially required agencies with R&D budgets in excess of $100 million to set aside 0.2 percent of their funds for SBIR. This amount totaled $45 million in 1983, the program’s first year of operation. Over the next six years, the set-aside grew to 1.25 percent.12

The legislation authorizing SBIR had two broad goals:13

• “to more effectively meet R&D needs brought on by the utilization of small innovative firms (which have been consistently shown to be the most prolific sources of new technologies); and
• “to attract private capital to commercialize the results of federal research.”

1.1.2 The SBIR Reauthorizations of 1992 and 2000

The SBIR program approached reauthorization in 1992 amidst continued worries about the U.S. economy’s capacity to commercialize inventions. Finding that “U.S. technological performance is challenged less in the creation of new technologies than in their commercialization and adoption,” the National Academy of Sciences at the time recommended an increase in SBIR funding

11 Additional information regarding SBIR’s legislative history can be accessed from the Library of Congress.
12 The set-aside is currently 2.5 percent of an agency’s extramural R&D budget.
13 U.S. Congress, Senate, Committee on Small Business (1981), Senate Report 97-194, Small Business Research Act of 1981, September 25, 1981.

as a means to improve the economy’s ability to adopt and commercialize new technologies.14

Following this report, the Small Business Research and Development Enhancement Act (P.L. 102-564), which reauthorized the SBIR program until September 30, 2000, doubled the set-aside rate to 2.5 percent. This increase in the percentage of R&D funds allocated to the program was accompanied by a stronger emphasis on encouraging the commercialization of SBIR-funded technologies.15 Legislative language explicitly highlighted commercial potential as a criterion for awarding SBIR grants.16

The Small Business Reauthorization Act of 2000 (P.L. 106-554) extended SBIR until September 30, 2008. It also called for an assessment by the National Research Council of the broader impacts of the program, including those on employment, health, national security, and national competitiveness.17

1.1.3 Previous Research on SBIR

The current NRC assessment represents a significant opportunity to gain a better understanding of one of the largest of the nation’s early-stage finance programs. Despite its size and 24-year history, the SBIR program has not previously been comprehensively examined. While there have been some previous studies, most notably by the General Accounting Office and the Small Business Administration, these have focused on specific aspects or components of the program.18

14 See National Research Council, The Government Role in Civilian Technology: Building a New Alliance, Washington, DC: National Academy Press, 1992, p. 29.
15 See Robert Archibald and David Finifter, “Evaluation of the Department of Defense Small Business Innovation Research Program and the Fast Track Initiative: A Balanced Approach,” in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, Charles W. Wessner, ed., Washington, DC: National Academy Press, 2000, pp. 211-250.
16 In reauthorizing the program in 1992 (PL 102-564), Congress expanded the purposes to “emphasize the program’s goal of increasing private sector commercialization developed through Federal research and development and to improve the federal government’s dissemination of information concerning the small business innovation, particularly with regard to woman-owned business concerns and by socially and economically disadvantaged small business concerns.”
17 The current assessment is congruent with the Government Performance and Results Act (GPRA) of 1993. As characterized by the GAO, GPRA seeks to shift the focus of government decision making and accountability away from a preoccupation with the activities that are undertaken—such as grants dispensed or inspections made—to a focus on the results of those activities.
18 An important step in the evaluation of SBIR has been to identify existing evaluations of SBIR. These include U.S. General Accounting Office, Federal Research: Small Business Innovation Research Shows Success But Can Be Strengthened, Washington, DC: U.S. General Accounting Office, 1992; and U.S. General Accounting Office, “Evaluation of Small Business Innovation Can Be Strengthened,” Washington, DC: U.S. General Accounting Office, 1999. There is also a 1999 unpublished SBA study on the commercialization of SBIR that surveys Phase II awards from 1983 to 1993 among non-DoD agencies.

There have been few internal assessments of agency programs.19 The academic literature on SBIR is also limited.20

Writing in the 1990s, Joshua Lerner of the Harvard Business School positively assessed the program, finding “that SBIR awardees grew significantly faster than a matched set of firms over a ten-year period.”21 Underscoring the importance of local infrastructure and cluster activity, Lerner’s work also showed that the “positive effects of SBIR awards were confined to firms based in zip codes with substantial venture capital activity.” These findings were consistent with both the corporate finance literature on capital constraints and the growth literature on the importance of localization effects.22

To help fill this assessment gap, and to learn about a large, relatively under-evaluated program, the National Academies’ Committee for Government-Industry Partnerships for the Development of New Technologies was asked by the Department of Defense to convene a symposium to review the SBIR program as a whole, its operation, and current challenges. Under its chairman, Gordon Moore, Chairman Emeritus of Intel, the Committee convened government policymakers, academic researchers, and representatives of small business for the first comprehensive discussion of the SBIR program’s history and rationale, review

19 Agency reports include an unpublished 1997 DoD study on the commercialization of DoD SBIR technologies. NASA has also completed several reports on its SBIR program. Following the authorizing legislation for the NRC study, NIH launched a major review of the achievements of its SBIR program.
20 Early examples of evaluations of the SBIR program include S. Myers, R. L. Stern, and M. L. Rorke, A Study of the Small Business Innovation Research Program, Lake Forest, IL: Mohawk Research Corporation, 1983, and Price Waterhouse, Survey of Small High-tech Businesses Shows Federal SBIR Awards Spurring Job Growth, Commercial Sales, Washington, DC: Small Business High Technology Institute, 1985. A 1998 assessment by Scott Wallsten of a subset of SBIR awardees that were publicly traded (most SBIR awardees are not public) determined that SBIR grants do not contribute additional funding but instead replace firm-financed R&D spending “dollar for dollar.” See S. J. Wallsten, “Rethinking the Small Business Innovation Research Program,” in Investing in Innovation, Lewis M. Branscomb and J. Keller, eds., Cambridge, MA: The MIT Press, 1998. While Wallsten’s paper has the virtue of being one of the early attempts to assess the impact of SBIR, Josh Lerner questions whether employing a regression framework to assess the marginal impact of public funding on private research spending is the most appropriate tool in assessing public efforts to assist small high-technology firms. He points out that “it may well be rational for a firm not to increase its rate of spending, but rather to use the funds to prolong the time before it needs to seek additional capital.” Lerner suggests that “to interpret such a short run reduction in other research spending as a negative signal is very problematic.” See Joshua Lerner, “Public Venture Capital: Rationales and Evaluation,” in The Small Business Innovation Research Program: Challenges and Opportunities, op. cit., p. 125. See also Joshua Lerner, “Angel Financing and Public Policy: An Overview,” Journal of Banking and Finance, 22(6-8):773-784, and Joshua Lerner, “The Government as Venture Capitalist: The Long-run Impact of the SBIR Program,” The Journal of Business, 72(3):285-297, 1999.
21 See Joshua Lerner, “The Government as Venture Capitalist: The Long-run Impact of the SBIR Program,” op. cit.
22 See Michael Porter, “Clusters and Competition: New Agendas for Government and Institutions,” in On Competition, Boston, MA: Harvard Business School Press, 1998.

of existing research, and identification of areas for further research and program improvements.23 The Moore Committee reported that:

• SBIR enjoyed strong support in parts of the federal government, as well as in the country at large.
• At the same time, the size and significance of SBIR underscored the need for more research on how well it is working and how its operations might be optimized.
• There should be additional clarification about the primary emphasis on commercialization within SBIR, and about how commercialization is defined.
• There should also be clarification on how to evaluate SBIR as a single program that is applied by different agencies in different ways.24

Subsequently, the Department of Defense requested the Moore Committee to review the operation of the SBIR program at Defense with a particular focus on the role played by the Fast Track Initiative. This major review involved substantial original field research, with 55 case studies, as well as a large survey of award recipients. The response rate was relatively high, at 72 percent.25 It found that the SBIR program at Defense was contributing to the achievement of mission goals—funding valuable innovative projects—and that a significant portion of these projects would not have been undertaken in the absence of the SBIR funding.26 The Moore Committee’s assessment also found that the Fast Track Program increases the efficiency of the Department of Defense SBIR program by encouraging the commercialization of new technologies and the entry of new firms to the program.27

More broadly, the Moore Committee found that SBIR facilitates the development and utilization of human capital and technological knowledge.28 Case studies have shown that the knowledge and human capital generated by the SBIR program has economic value, and can be applied by other firms.29 And, by acting as a “certifier” of promising new technologies, SBIR awards encourage further private sector investment in an award-winning firm’s technology.

Based on this and other assessments of public-private partnerships, the Moore Committee’s Summary Report on U.S. Government-Industry Partnerships recommended that “regular and rigorous program-based evaluations and feedback is essential for effective partnerships and should be a standard feature,” adding that “greater policy attention and resources to the systematic evaluation of U.S. and foreign partnerships should be encouraged.”30 (See Box 1-1.)

Drawing on these recommendations, the December 2000 legislation mandated the current comprehensive assessment of the nation’s SBIR program. This NRC assessment of SBIR is being conducted in three phases. The first phase developed a research methodology that was reviewed and approved by an independent National Academies panel of experts. Information available about the program was also gathered through interviews with officials at the relevant federal agencies and through two major conferences where these officials were invited to describe program operations, challenges, and accomplishments. These conferences highlighted the important differences in agency goals, practices, and evaluations. They also served to describe the evaluation challenges that arise from the diversity in program objectives and practices.31

BOX 1-1 The Moore Committee Report on Public-Private Partnershipsa

In a program-based analysis led by Gordon Moore, Chairman Emeritus of Intel, the National Academies Committee on Government-Industry Partnerships for the Development of New Technologies found that “public-private partnerships, involving cooperative research and development activities among industry, universities, and government laboratories can play an instrumental role in accelerating the development of new technologies from idea to market.”

Partnerships Contribute to National Missions

“Experience shows that partnerships work—thereby contributing to national missions in health, energy, the environment, and national defense—while also contributing to the nation’s ability to capitalize on its R&D investments. Properly constructed, operated, and evaluated partnerships can provide an effective means for accelerating the progress of technology from the laboratory to the market.”

Partnerships Help Transfer New Ideas to the Market

“Bringing the benefits of new products, new processes, and new knowledge into the market is a key challenge for an innovation system. Partnerships facilitate the transfer of scientific knowledge to real products; they represent one means to improve the output of the U.S. innovation system. Partnerships help by bringing innovations to the point where private actors can introduce them to the market. Accelerated progress in obtaining the benefits of new products, new processes, and new knowledge into the market has positive consequences for economic growth and human welfare.”

Characteristics of Successful Partnerships

“Successful partnerships tend to be characterized by industry initiation and leadership, public commitments that are limited and defined, clear objectives, cost sharing, and learning through sustained evaluations of measurable outcomes, as well as the application of the lessons to program operations.b At the same time, it is important to recognize that although partnerships are a valuable policy instrument, they are not a panacea; their demonstrated utility does not imply that all partnerships will be successful. Indeed, the high-risk—high-payoff nature of innovation research and development assures some disappointment.”

Partnerships Are a Complement to Private Finance

“Partnerships focus on earlier stages of the innovation stream than many venture investments, and often concentrate on technologies that pose greater risks and offer broader returns than the private investor normally finds attractive.c Moreover, the limited scale of most partnerships—compared to private institutional investments—and their sunset provisions tend to ensure early recourse to private funding or national procurement. In terms of project scale and timing in the innovation process, public-private partnerships do not displace private finance. Properly constructed research and development partnerships can actually elicit ‘crowding in’ phenomena, with public investments in R&D providing the needed signals to attract private investment.”d

a National Research Council, Government-Industry Partnerships for the Development of New Technologies: Summary Report, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2003, p. 30.
b Features associated with more successful partnerships are described in the Introduction to this report.
c Some programs also support broadly applicable technologies that, while desirable for society as a whole, are difficult for individual firms to undertake because returns are difficult for individual firms to appropriate. A major example is the Advanced Technology Program.
d David, Hall, and Toole survey the econometric evidence over the past 35 years. They note that the “findings overall are ambivalent and the existing literature as a whole is subject to the criticism that the nature of the ‘experiment(s)’ that the investigators envisage is not adequately specified.” It seems that both crowding out and crowding in can occur. The essential finding is that the evidence is inconclusive and that assumptions about crowding out are unsubstantiated. The outcome appears to depend on the specifics of the circumstance, and these are not adequately captured in available data. See Paul A. David, Bronwyn H. Hall, and Andrew A. Toole, “Is Public R&D a Complement or Substitute for Private R&D? A Review of the Econometric Evidence,” NBER Working Paper 7373, October 1999. Relatedly, Feldman and Kelley cite the “halo effect” created by ATP awards in helping firms signal their potential to private investors. See Maryann Feldman and Maryellen Kelley, “Leveraging Research and Development: The Impact of the Advanced Technology Program,” in National Research Council, The Advanced Technology Program, Charles W. Wessner, ed., Washington, DC: National Academy Press, 2001.

23 See National Research Council, The Small Business Innovation Research Program: Challenges and Opportunities, Charles W. Wessner, ed., Washington, DC: National Academy Press, 1999.
24 Ibid.
25 See National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit., p. 24.
26 Ibid., Chapter III: Recommendations and Findings, p. 32.
27 Ibid., p. 33.
28 Ibid., p. 33.
29 Ibid., p. 33.
30 See National Research Council, Government-Industry Partnerships for the Development of New Technologies: Summary Report, Charles W. Wessner, ed., Washington, DC: National Academy Press, 2002, p. 30.
31 Adapted from National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit.

AN ASSESSMENT OF THE SBIR PROGRAM

The second phase of the study implemented the research methodology. The Committee deployed multiple survey instruments, and its researchers conducted case studies of a wide variety of SBIR firms. The Committee then evaluated the results and developed the findings and recommendations found in this report for improving the effectiveness of the SBIR program. The third phase of the study will provide an update of the survey and related case studies, as well as explore other issues that emerged in the course of this study. It will, in effect, provide a second snapshot of the program and of the agencies' progress and challenges.

1.1.4 The Structure and Diversity of SBIR

Eleven federal agencies are currently required to set aside 2.5 percent of their extramural research and development budgets exclusively for SBIR awards and contracts. Each year these agencies identify various R&D topics, representing scientific and technical problems requiring innovative solutions, for pursuit by small businesses under the SBIR program. These topics are bundled together into individual agency "solicitations"—publicly announced requests for SBIR proposals from interested small businesses. A small business can identify an appropriate topic that it wants to pursue from these solicitations and, in response, propose a project for an SBIR grant. The required format for submitting a proposal is different for each agency. Proposal selection also varies, though competitive peer review of proposals by experts in the field is typical. Each agency then selects through a competitive process the proposals that best meet its selection criteria and awards contracts or grants to the proposing small businesses. In this way, SBIR helps the nation capitalize more fully on its investments in research and development.
1.1.4.1 A Three-Phase Program

As conceived in the 1982 Act, the SBIR grant-making process is structured in three phases:

• Phase I grants essentially fund a feasibility study in which award winners undertake a limited amount of research aimed at establishing an idea's scientific and commercial promise. The 1992 legislation standardized Phase I grants at $100,000. Approximately 15 percent of all small businesses that apply receive a Phase I award.

• Phase II grants are larger—typically about $500,000 to $850,000—and fund more extensive R&D to develop the scientific and technical merit and the feasibility of research ideas. Approximately 40 percent of Phase I award winners go on to this next step.
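The funnel implied by these figures can be sketched numerically. The following is an illustrative back-of-the-envelope model only: the function, the cohort size, and the use of the range midpoint are hypothetical conveniences, while the rates and award sizes are simply those quoted above.

```python
# Illustrative sketch (not official SBIR data): a back-of-the-envelope
# model of the Phase I -> Phase II funnel using the rates and award
# sizes quoted in the text.

PHASE_I_AWARD = 100_000               # standardized by the 1992 legislation
PHASE_II_RANGE = (500_000, 850_000)   # typical Phase II range cited above
P_PHASE_I = 0.15                      # ~15% of applicants win Phase I
P_PHASE_II = 0.40                     # ~40% of Phase I winners advance

def funnel(applicants: int) -> dict:
    """Expected award counts and outlays for a cohort of applicants."""
    phase1 = applicants * P_PHASE_I
    phase2 = phase1 * P_PHASE_II
    mid_phase2 = sum(PHASE_II_RANGE) / 2  # midpoint of the cited range
    return {
        "phase1_awards": phase1,
        "phase2_awards": phase2,
        "phase1_outlay": phase1 * PHASE_I_AWARD,
        "phase2_outlay_midpoint": phase2 * mid_phase2,
    }

for key, value in funnel(1_000).items():
    print(f"{key}: {value:,.0f}")
```

For a hypothetical cohort of 1,000 applicants, the quoted rates imply roughly 150 Phase I awards and 60 Phase II awards.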

in the United States have a strong commercialization culture, and there is great variation in the level of success among those universities that do.68

1.4 ASSESSING SBIR

Regular program and project analysis of SBIR awards is essential to understand the impact of the program. A focus on analysis is also a means to clarify program goals and objectives, and it requires the development of meaningful metrics and measurable definitions of success. More broadly, regular evaluations contribute to a better appreciation of the role of partnerships among government, university, and industry. Assessments also help inform the public and policy makers of the opportunities, benefits, and challenges involved in the SBIR innovation award program.

As we have noted before, despite its large size and 25-year history, SBIR has not fared especially well with regard to evaluation. As a whole, the program has been the object of relatively limited analysis. This assessment of the SBIR program is the first comprehensive assessment ever conducted among the departments and agencies charged with managing the bulk of the program's resources. A major challenge has been the lack of data collection and assessment within the agencies and the limited number and nature of external assessments. Despite the challenges of assessing a diverse and complex program, the NRC assessment has sought to document the program's achievements, clarify common misconceptions about the program, and suggest practical operational improvements to enhance the nation's return on the program.
1.4.1 The Challenges of Assessment

At its outset, the NRC's SBIR study identified a series of assessment challenges that must be addressed.69

1.4.1.1 Recognizing Program Diversity

One major challenge is that the same administrative flexibility that allows each agency to adapt the SBIR program to its particular mission, scale, and working culture makes it difficult, and often inappropriate, to compare programs across agencies. NSF's operation of SBIR differs considerably from that of the Department of Defense, for example, reflecting, in large part, differences in the

68 Donald Siegel, David Waldman, and Albert Link, "Toward a Model of the Effective Transfer of Scientific Knowledge from Academicians to Practitioners: Qualitative Evidence from the Commercialization of University Technologies," Journal of Engineering and Technology Management, 21(1-2): March-June 2004, pp. 115-142.

69 See National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit.

extent to which "research" is coupled with procurement of goods and services. Although SBIR at each agency shares the common three-phase structure, the SBIR concept is interpreted uniquely at each agency, and each agency's program is best understood in its own context.

1.4.1.2 Different Agencies, Divergent Programs

The SBIR programs operated by the five study agencies (DoD, NIH, NSF, NASA, and DoE) are perhaps as divergent in their objectives, mechanisms, operations, and outcomes as they are similar. Commonalities include:

• The three-phase structure, with an exploratory Phase I focused on feasibility; a more extended and better-funded Phase II, usually over two years; and, for some agencies, a Phase III in which SBIR projects have some significant advantages in the procurement process but no dedicated funding;

• Program boundaries largely determined by SBA guidelines with regard to funding levels for each phase and eligibility; and

• Shared objectives and authorization from Congress, including adherence to the fundamental congressional objectives for the program, i.e., compliance with the 2.5 percent allocation for the program.

However important this shared framework, there is also a profusion of differences among the agencies. In fact, the agencies differ on the objectives assigned to the program, program management structure, award size and duration, selection process, degree of adherence to standard SBA guidelines, and evaluation and assessment activities. No agency shares an electronic application process with any other; there are no shared selection processes, though there are "shared" award companies (but not projects). There are shared outreach activities, but no systematic sharing (or adoption) of best practices. The following section summarizes some of the most important differences, drawing directly on a more detailed discussion in Chapter 5, which focuses on program management.
1.4.1.3 Award Size and Duration Most agencies follow the SBA guidelines for award size ($100,000 for Phase I and $750,000 for Phase II) and duration (6 months/2 years) most of the time. However, some agencies have reduced the size of awards (e.g., several DoD components for Phase I, NSF for Phase II), partly in order to create funding either to bridge the gap between Phase I and Phase II, or to create incentives for

companies to find matching funding for an extended Phase II award (e.g., NSF Phase IIB). NIH, however, has in many cases extended both the size and duration of Phase I and Phase II awards. The 2006 GAO study indicated that more than 60 percent of NIH awards from 2002 to 2005 were above the SBA guidelines; discussions with NIH staff indicate that no-cost extensions have become a standard feature of the NIH SBIR program. These operational differences reflect the differences in agency objectives, means (or lack thereof) for follow-on funding, and circumstances (e.g., the time required for clinical trials at NIH).

1.4.1.4 Balancing Multiple Objectives

Congress, as indicated earlier in this chapter, has mandated four core objectives for the agencies but, understandably, has not set priorities among them. Recognizing the importance Congress has attached to commercialization, all of the agencies have made efforts to increase commercialization from their SBIR programs. They also all make considerable efforts to ensure that SBIR projects are in line with agency research agendas. Nonetheless, there are still important differences among them.

The most significant difference is between acquisition agencies and nonacquisition agencies. The former are focused primarily on developing technologies for the agency's own use. Thus at DoD, a primary objective is developing technologies that will eventually be integrated into weapons systems purchased or developed by DoD.

In contrast, the nonacquisition agencies do not, in the main, purchase outputs from their own SBIR programs. These agencies—NIH, NSF, and parts of DoE—are focused on developing technologies for use in the private sector. This core distinction largely colors the way the programs are managed.
For example, acquisition programs operate almost exclusively through contracts: award winners contract to perform certain research and to deliver certain specified outputs. Nonacquisition programs operate primarily through grants, which are usually less tightly defined, have less closely specified deliverables, and are viewed quite differently by the agencies—more like the basic research conducted by university faculty in other agency-funded programs. Thus, contract-type research focuses on developing technology to solve specific agency problems and/or provide products; grant-type research funds activities by researchers.

1.4.1.5 Topics, Solicitations, and Deadlines

Technical topics are used to define acceptable areas of research at all agencies except NIH, where they are viewed as guidelines, not boundaries. There are three kinds of topic-usage structures among the agencies:

• Procurement-oriented approaches, where topics are developed carefully to meet the specific needs of procurement agencies;

• Management-oriented approaches, where topics are used at least partly to limit the number of applications;

• Investigator-oriented approaches, where topics are used to publicize areas of interest to the agency but are not used as a boundary condition.

Acquisition agencies (NASA and DoD) use the procurement-oriented approach; NIH uses the investigator-oriented approach; and NSF and DoE use the management-oriented approach. Agencies publish these topics of interest in their solicitations—the formal notice that awards will be allocated. Solicitations can be published annually or more often, depending on the agency, and agencies can also offer multiple annual deadlines for applications (as do NIH and some DoD components) or just one (DoE and, in effect, NSF).

1.4.1.6 Award Selection

Agencies differ in how they select awards. Peer review is widely used, sometimes by reviewers entirely from outside the agency (e.g., NIH), sometimes entirely by internal staff (e.g., DoD), and sometimes by a mix of internal and external reviewers (e.g., NSF). Some agencies use quantitative scoring (e.g., NIH); others do not (e.g., NASA). Some agencies have multiple levels of review, each with specific and significant powers to affect selection; others do not. Companies unhappy with selection outcomes also have different options. At NIH, they can resubmit their application with modifications at a subsequent deadline. At most other agencies, resubmission is not feasible (where topics are tightly defined, the same topic may not come up again for several years, or at all).
Most agencies do not appear to have widely used appeal processes, although this is not well documented. The agencies also differ in how they handle the specific issue of commercial review—assessing the extent to which the company is likely to be successful in developing a commercial product, in line with the congressional mandate to support such activities.

DoD, for example, has developed a quantitative scorecard to use in assessing the track record of companies that are applying for new SBIR awards. Other agencies do not have such formal mechanisms, and some, such as NIH, do not

provide any mechanism for bringing previous commercialization records formally into the selection process.

This brief review of agency differences underscores the need to evaluate the different agency programs separately. At the same time, opportunities to apply best practices concerning selected aspects of the program do exist.

1.4.1.7 Assessing SBIR: Compared to What?

The high-risk nature of investing in early-stage technology means that the SBIR program must be held to an appropriate standard when it is evaluated. An assessment of SBIR should be based on an understanding of the realities of the distribution of successes and failures in early-stage finance. As a point of comparison, Gail Cassell, Vice President for Scientific Affairs at Eli Lilly, has noted that only one in ten innovative products in the biotechnology industry will turn out to be a commercial success. Similarly, venture capital firms generally anticipate that only two or three out of twenty or more investments will produce significant returns.70 In setting metrics for SBIR projects, it is therefore important to have realistic expectations of success rates for new firms, for firms with unproven but promising technologies, and for firms (e.g., at DoD and NASA) that are subject to the uncertainties of the procurement process. Systems and missions can be cancelled, or promising technologies may not be taken up, owing to perceptions of risk and readiness that understandably condition the acquisition process. It may even be a troubling sign if an SBIR program has too high a success rate, because that might suggest that program managers are not investing in a sufficiently ambitious portfolio of projects.

70 While venture capitalists are a referent group, they are not directly comparable insofar as the bulk of venture capital investments occur in the later stages of firm development.
SBIR awards often occur earlier in the technology development cycle than where venture funds normally invest. Nonetheless, returns on venture funding tend to show the same high skew that characterizes commercial returns on SBIR awards. See John H. Cochrane, "The Risk and Return of Venture Capital," Journal of Financial Economics, 75(1):3-52, 2005. Drawing on the VentureOne database, Cochrane plots a histogram of net venture capital returns on investments that "shows an extraordinary skewness of returns. Most returns are modest, but there is a long right tail of extraordinary good returns. 15 percent of the firms that go public or are acquired give a return greater than 1,000 percent! It is also interesting how many modest returns there are. About 15 percent of returns are less than 0, and 35 percent are less than 100 percent. An IPO or acquisition is not a guarantee of a huge return. In fact, the modal or 'most probable' outcome is about a 25 percent return." See also Paul A. Gompers and Josh Lerner, "Risk and Reward in Private Equity Investments: The Challenge of Performance Assessment," Journal of Private Equity, 1(Winter 1997):5-12. Steven D. Carden and Olive Darragh, "A Halo for Angel Investors," The McKinsey Quarterly, 1, 2004, also show a similar skew in the distribution of returns for venture capital portfolios.
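The skew described above can be illustrated with a toy simulation. The lognormal parameters below are invented for illustration (they are not fitted to Cochrane's VentureOne data); the point is only that in a heavy-tailed distribution the mean return far exceeds the typical (median) outcome, which is why a handful of big winners dominates portfolio results.

```python
# Illustrative only: a toy simulation of skewed venture-style returns.
# The lognormal parameters are invented for illustration and are not
# fitted to Cochrane's VentureOne data.
import random
import statistics

random.seed(0)  # deterministic for reproducibility

def simulate_returns(n):
    """Draw net returns (multiples of invested capital) from a
    heavy-tailed lognormal: most outcomes modest, a few very large."""
    return [random.lognormvariate(0.7, 1.5) - 1.0 for _ in range(n)]

returns = simulate_returns(10_000)
mean_r = statistics.fmean(returns)
median_r = statistics.median(returns)
big_winners = sum(r > 10.0 for r in returns) / len(returns)  # >1,000%
losers = sum(r < 0.0 for r in returns) / len(returns)

# The mean is pulled far above the median by the long right tail,
# which is why portfolio results hinge on a few big winners.
print(f"mean {mean_r:+.2f}x, median {median_r:+.2f}x, "
      f">1,000%: {big_winners:.1%}, <0: {losers:.1%}")
```

An evaluator who judged such a portfolio by its typical project would badly underestimate it; judged by its mean, a few outliers carry the program, which is the standard the text argues SBIR should be held to.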

1.4.2 Addressing SBIR Challenges

Realizing the potential of the SBIR program depends on how well the program is managed, adapted to the various agency missions, and evaluated for impact and improvement. As a part of this evaluation, the National Research Council Committee assessing SBIR has highlighted several structural and operational challenges faced by the program.71 These include:

1.4.2.1 Improving the Phase III Transition

SBIR is designed with two funded phases (I and II) and a third, nonfunded, Phase III. The transition from Phase II to the nonfunded Phase III is often uncertain. Projects successfully completing Phase II are expected to attract Phase III funding from non-SBIR sources within federal agencies, or to deploy products directly into the marketplace. There are, however, widely recognized challenges in transitioning to Phase III. As alluded to above, these challenges include appropriate timing for inclusion in procurement activities, sufficient demonstration of project potential and firm capability, and sometimes the inability to communicate a project's potential to acquisition officials. Many of these obstacles can impede take-up in the agencies that can procure SBIR-funded technologies.

Some agencies have sought, with the approval of SBA, to experiment with SBIR funding beyond Phase II in order to improve the commercialization potential of SBIR-funded technologies. NSF, notably, has pioneered use of the Phase IIB grant, which allows a firm to obtain a supplemental follow-on grant ranging from $50,000 to $500,000, provided that the applicant secures two dollars of third-party funding for every one dollar of NSF funding.72 This private-sector funding validates the technology's potential for the Phase IIB award, while providing a positive incentive for potential private investors.
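The two-for-one match described above reduces to a simple calculation. The sketch below is illustrative only: the function name and range check are hypothetical conveniences, not NSF's actual application logic; the dollar bounds and ratio are those stated in the text.

```python
# Illustrative sketch of the 2:1 matching requirement for NSF's
# Phase IIB supplement as described in the text. The function name
# and validation are hypothetical, not NSF's actual rules.

PHASE_IIB_MIN = 50_000
PHASE_IIB_MAX = 500_000
MATCH_RATIO = 2  # third-party dollars required per NSF dollar

def required_third_party_match(nsf_grant):
    """Third-party funding a firm must show for a given Phase IIB grant."""
    if not PHASE_IIB_MIN <= nsf_grant <= PHASE_IIB_MAX:
        raise ValueError(
            f"Phase IIB grants range from ${PHASE_IIB_MIN:,} "
            f"to ${PHASE_IIB_MAX:,}"
        )
    return MATCH_RATIO * nsf_grant

# A firm seeking the maximum $500,000 supplement would need to show
# $1,000,000 in third-party commitments.
print(required_third_party_match(500_000))  # -> 1000000
```

The design choice the ratio embodies is the validation point made in the text: the private match is a market signal that the technology is worth backing, not merely a cost-sharing device.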
Ongoing program experimentation to improve program outcomes, as seen at NSF, is a sign of proactive management and is essential to the future health of the SBIR program. However, such experimentation has to be coupled with more regular evaluation to assess whether these experiments are yielding positive results that should be reinforced, or whether the program needs to be fine-tuned or substantially revised.

1.4.2.2 Increased Risks for Agency Procurement Officials

As with private markets, Phase III transitions can be difficult to achieve in the case of agency procurement. This can occur when agency acquisition officials hesitate to employ the results of SBIR awards because of the added risks, both technical and personal, associated with contracting through small firms.

71 For a full list of the Committee's proposals, see Chapter 2: Findings and Recommendations.

72 See National Research Council, An Assessment of the SBIR Program at the National Science Foundation, op. cit.

BOX 1-8
DoD Risks Associated with SBIR Procurement from Small, Untested Firms

Technical Risks. This includes the possibility that the technology would not, in the end, prove to be sufficiently robust for use in weapons systems and space missions.

Company Risks. SBIR companies are by definition smaller and have fewer resources to draw on than prime contractors have. In addition, many SBIR companies have only a very limited track record, which limits program manager confidence that they will be able to deliver their product on time and within budget.

Funding Limitations. The $750,000 maximum for Phase II might not be enough to fund a prototype sufficiently ready for acquisition, necessitating other funds and more time.

Testing Challenges. SBIR companies are often unfamiliar with the very high level of testing and engineering specifications (mil specs) necessary to meet DoD acquisition requirements.

Scale and Scope Issues. Small companies may not have the experience and resources necessary to scale production effectively to amounts needed by DoD. In addition, the procurement practices of an agency like DoD may not be adapted to include small firms in its large awards.a

Timing Risks. DoD planning, programming, and budgets work in a two-year cycle, and it is difficult for Program Executive Officers to determine whether a small firm will be able to create a product to meet program needs in a timely manner, even if the initial research has proven successful.

aThis is a correction of the text in the prepublication version released on July 27, 2007.
As a result of these disincentives, program managers in charge of acquisitions have traditionally not seen SBIR as part of their mainstream activities, often preferring to take the cautious route of procuring technology through the established prime contractors.

Risk aversion is by no means peculiar to the Department of Defense. Program officers at NASA usually have only one opportunity to get their projects right, given limited opportunities for in-flight adjustments. Recognizing this constraint, some NASA program managers are uncertain that SBIR can deliver reliable technology on time and at a manageable level of risk.

It is important to recognize those risks associated with the program and to develop measures to reduce negative incentives that cause procurement officers

to avoid contracting with SBIR companies. Enhanced use of the program can help introduce innovative and often low-cost solutions to the mission needs of the Department of Defense and NASA.

1.4.2.3 Growing Alignment in Incentives

As we noted earlier, there are advantages for entrepreneurs as well as agency officials in the SBIR concept. It is interesting that some services (e.g., the Navy) and many prime contractors are finding the SBIR program to be directly relevant to their interests and objectives. With the right positive incentives and management attention, the performance and contributions of the SBIR program can be improved. The experience of the Navy's SBIR program demonstrates that the SBIR program works well when each of these participants recognizes program benefits and is willing to take part in facilitating the program's operations.

BOX 1-9
Attributes of the Navy's Submarine SBIR Program

The Submarine Program Executive Office is widely considered to be one of the more successful Phase III programs at DoD. The program takes a number of steps to use and support SBIR as an integral part of the technology development process.

Acquisition Involvement. SBIR opportunities are advertised through a program of "active advocacy." Program managers compete to write topics to solve their problems.

Topic Vetting. Program Executive Officers keep track of all topics. Program managers compete in a rigorous process of topic selection. SBIR contracts are considered a reward, not a burden.

Treating SBIR as a Program, including follow-up and monitoring of small businesses to keep them alive until a customer appears. Program managers are encouraged to demonstrate commitment to a technology by paying half the cost of a Phase II option.

Providing Acquisition Coverage, which links all SBIR awards to the agency's acquisition program.
Awarding Phase III Contracts within the $75 million ceiling, which avoids triggering complex Pentagon acquisition rules.

Brokering Connections between SBIR and the prime contractors.

Recycling unused Phase I awards, a rich source of problem solutions.

Most notably, the Program Executive Office (PEO) for the Navy's submarine program has shown how SBIR can be successfully leveraged to advance mission needs. Its experience demonstrates that senior operational support and additional funding for program management provide the legitimacy and the means needed for the program to work more effectively. In addition to funding program operations, this additional support allows for the outreach and networking initiatives (such as the Navy Forum), as well as other management innovations, that contribute to enhanced matchmaking and commercialization and to higher insertion rates of SBIR technologies into the Navy's ships, submarines, and aircraft.

1.4.2.4 Creating a Culture of Evaluation

Continuous improvement comes through institutionalized learning. Regular internal assessments that are transparent and based on objective criteria are necessary for the agencies to continue to experiment with and adapt the SBIR program to meet their changing mission needs. In addition, external assessments are important to improve the public's understanding of the nature of high-risk, high-payoff investments and the increasingly important role that SBIR plays in the nation's innovation system. The failure of individual projects does not indicate program failure, yet a failure to evaluate expenditures against outcomes is a program failure.

Prior to the start of the Academies' assessment, only very limited assessment and evaluation had been done at any of the SBIR agencies. Insufficient data collection, analytic capability, and reporting requirements, together with the decentralized character of the program, mean that there is very little data to use in evaluating the connection between outcomes and program management and practices.

One encouraging development in the course of the NRC study is that agencies are showing increasing interest in regular assessment and evaluation.
1.4.2.5 Monitoring and Assessment

The agencies differ in their degree of interest in monitoring and assessment. DoD has led the way in external, arm's-length assessment. All agencies have now completed at least one effort to assess outcomes from their programs. At NIH, a survey of Phase II recipients has been followed by a second survey tracking the same recipients, which has generated important results. At DoD, the Company Commercialization Report is being used by some components to track outcomes; efforts are also underway to track Phase III contracts directly through the DD350 reporting mechanism. At NASA, NSF, and DoE, less formal surveys have been used for similar purposes. The extent and rigor of these efforts vary widely, partly because the available resources for such activities vary widely as well.

1.5 CHANGING PERCEPTIONS ABOUT SBIR

As the SBIR program matures and more is known about its accomplishments and potential, it is increasingly viewed by the agencies as part of their wider portfolio of R&D investments. This is a welcome change. SBIR was for many years viewed as a stepchild by the research community at each agency. Created by Congress in 1982, SBIR imposed a specific mandate on agencies funding significant amounts of extramural R&D. The perception that SBIR was a tax on "real" research meant that a number of agencies made limited efforts to integrate SBIR into agency R&D strategy. Often, agencies tried simply to piggyback SBIR activities on existing approaches. Selection procedures, for example, were often copied from elsewhere in the agency rather than being designed to meet the specific needs of the small business community. Furthermore, in the early years agencies generally made no effort to determine outcomes from the program, or to design and implement management reforms that would support improved outcomes.

Beginning in the late 1990s, this limited view of SBIR began to change. Initiatives taken by senior management at the Department of Defense and by SBIR program managers at the operational level, which demonstrated that SBIR projects can make a real difference to agency missions, began to alter wider agency perspectives about the program's potential. As a result, more acquisition and technical officials have become interested in cooperating with the program. As noted above, in some parts of some agencies, SBIR is now viewed as a valuable vehicle within the wider portfolio of agency R&D investments.

Indeed, the agencies are increasingly coming to see that SBIR offers some unique benefits in terms of accessing pools of technology and capabilities otherwise not easily integrated into research programs dominated by prime contractors and agency research labs.
At the NRC’s conference on the challenge of the SBIR Phase III transition, for example, senior representatives of the Department of De- fense affirmed the program’s role in developing innovative solutions for mission needs. Further underscoring the program’s relevance, several prime contractors’ representatives at the conference stated that they have focused management at- tention, shifted resources, and assigned responsibilities within their own manage- ment structures to capitalize on the creativity of SBIR firms. 1.5.1 SBIR Around the World Increasingly, governments around the world view the development and trans- formation of their innovation systems as an important way to promote the com- petitiveness of national industries and services. They have adopted a variety of policies and programs to make their innovation systems more robust, normally developing programs grounded in their own national needs and experiences. Nevertheless, governments around the world are increasingly adopting SBIR- type programs to encourage the creation and growth of innovative firms in their

economies. Sweden and Russia have adopted SBIR-type programs. The United Kingdom's SBRI program is similar in concept. The Netherlands has a pilot SBIR program underway and is looking to expand its scope. In Asia, Japan, Korea, and Taiwan have also adopted the SBIR concept, with varying degrees of success, as a part of their national innovation strategies. This level of emulation across national innovation systems is striking and speaks to the common challenges addressed by SBIR awards and contracts.

1.6 CONCLUSION

This report provides a summary of the study that Congress requested when reauthorizing the SBIR program in 2000. Drawing on the results of newly commissioned surveys, case studies, data and document analyses, and interviews with program staff and agency officials, the NRC assessment of SBIR has examined how the program is meeting its legislative objectives. As with any analysis, this assessment has its limitations and methodological challenges. These are described more fully below. Nonetheless, this cross-agency report, along with the individual reports on the SBIR programs at the Department of Defense, the National Institutes of Health, the Department of Energy, the National Aeronautics and Space Administration, and the National Science Foundation, provides the most comprehensive assessment to date of the SBIR program. In addition to identifying the challenges facing SBIR today, the NRC Committee responsible for this study has also recommended operational improvements to the program. By strengthening the SBIR program, the Committee believes that the capacity of the United States to develop innovative solutions to government needs and promising products for the commercial market will be enhanced.