Small Business Innovation at the U.S. Department of Energy: Framework for Evaluating the DOE SBIR/STTR Programs
DOE was formed in 1977, in the midst of an energy crisis marked by wildly fluctuating oil prices, civilian gasoline rationing, and industrial energy shortages. Dedicated to a mission to “ensure America’s security and prosperity by addressing its energy, environmental, and nuclear challenges through transformative science and technology solutions” (DOE, n.d.a), the department was created through the amalgamation of energy-related programs scattered throughout the federal government, including 17 national laboratories and defense responsibilities, dating from the Manhattan Project, for the design, construction, and testing of nuclear weapons. Over the past 40 years, DOE has funded a number of research initiatives that support energy alternatives.
The Small Business Innovation Research (SBIR) program was created in 1982, a few years after DOE was formed. At its core, SBIR adds a critical component to satisfying DOE’s mission to innovate and develop new technologies. The energy sector is dominated by large incumbent firms, yet young, small firms are more likely to experiment with breakthrough technology. Called “America’s seed fund,” the SBIR program has funded research on innovation projects important to DOE’s mission to secure the nation’s energy future. In 1992, the Small Business Technology Transfer (STTR) program was created to provide additional funding opportunities to small businesses that collaborate with nonprofit research institutions.
Pinpointing the appropriate objectives to evaluate the DOE SBIR/STTR programs is not a simple task. Over time, the policy and academic discourse around the programs has evolved. At the inception of the SBIR program there were concerns about American competitiveness and the need to incentivize the commercial application of American science. Over the 35 years of the program’s existence, common understandings of what SBIR should be doing have shifted in
subtle ways, often reflecting policy imperatives that were salient at certain times or interpreted through academic studies that capitalized on newly available data. These competing objectives often do not align, creating confusion about the program and its impact on the American economy.
The purpose of this chapter is to provide a framework for understanding the multifaceted nature of the DOE SBIR/STTR programs and the different types of public benefits they provide based on both the programs’ stated legislative objectives and relevant economic theory. This chapter begins by considering the broad mandate for the DOE SBIR/STTR programs. It then provides a review of the literature examining the programs’ direct and indirect impacts, followed by a consideration of the challenges in evaluating the programs. For instance, it is challenging, if not impossible, to define a perfect counterfactual to prove unequivocally that the numerous benefits of SBIR/STTR, and DOE in general, would not have occurred without the agency’s investment. This chapter concludes by describing the committee’s approach to dealing with the evaluation challenges, in light of the programs’ stated objectives, economic rationale, and the best available data for evaluation.
A key component of the SBIR/STTR programs is the realization of innovation by small firms. Innovation—the creation of new products, more efficient processes, and better organized firms and networks of firms—is critical for economic growth and the realization of economic development and international competitiveness (Feldman et al., 2016). Small firms have an essential role to play in the creation of innovation by conducting research and pursuing ideas that have transformative potential. Essential to the programs’ rationale is the idea that government investment is required for early-stage idea development because market failures lead to underinvestment by private firms. Yet commercializing technological innovation requires a supporting system of private firms, both suppliers and customers; follow-on investors; and developed product markets (Nelson, 1993). The SBIR/STTR programs are an important component of the U.S. system of innovation, focusing on high-risk research and seeding small firms that set the system in motion.
Small businesses accounted for 66 percent of new job creation from 2000 to 2017.1 Young start-ups, in particular, have emerged as the primary drivers of this trend (Decker, Haltiwanger, Jarmin, and Miranda, 2014, 2016; Haltiwanger, Jarmin, and Miranda, 2013). One of the major transitions in the literature since the most recent National Academies assessment of SBIR/STTR is this shift of scholarly attention toward younger firms rather than just small firms. While the barriers and transaction costs facing small businesses are well understood as justifications for government intervention, it has become clear that younger small businesses are
1 Source: 2018 Business Employment Dynamics from the U.S. Bureau of Labor Statistics.
the dominant drivers of traditional metrics of economic growth (Haltiwanger, Jarmin, and Miranda, 2013). Firm age, therefore, is an important moderating variable in assessments of any program that aims to support small firms. Any review of the literature on SBIR/STTR would be remiss not to identify this caveat before interpreting the results of existing evaluations, particularly those that do not account for firm age.
This systems-based view of the role of government in financing research and development is critical for understanding the complete impact of and rationale for SBIR/STTR. Routine conceptions of government performance in research and development funding often focus exclusively on volume of output per unit of investment. The committee hopes that the results and discussion in the forthcoming chapters make clear that such a narrow conception ignores the indispensable role of SBIR/STTR within a much larger ecosystem of energy innovation. The value of SBIR/STTR is much deeper and broader than its ability to correct the market failure associated with the undersupply of innovation. The committee evaluated SBIR/STTR along four key intended applications of the larger SBIR program. Having laid out the foundations and mechanics of SBIR/STTR and reviewed the legislative mandates for the programs in Chapter 1, the committee selected outcomes and reviewed relevant literature along four key dimensions, or potential sources of value, for SBIR/STTR:
- Stimulating technological innovation
- Helping agencies meet federal research and development needs
- Serving as an engine for the creation of human capital through both firm growth and a broader pool of entrepreneurs
- Promoting commercialization of products and technologies
Stimulating Technological Innovation

First, the SBIR/STTR programs should stimulate innovation. For DOE, the SBIR/STTR programs stimulate technological innovation and facilitate the commercialization of pathbreaking technologies in myriad ways: generating patents, producing collaborative partnerships that result in technology transfer, broadening the geographic scope of DOE’s research activities, and regularly identifying and supporting technological and commercial breakthroughs. And yet, none of these impacts is amenable to a simple input-output analysis. The web of interconnected partnerships resulting from an award or series of awards can span numerous entities, including the firm, the agency, and other collaborators, contractors, and university partners. The value of SBIR/STTR to each of these entities is practically impossible to measure. Similarly, a single firm might use an award to generate a product whose societal value justifies a decade’s worth of expenditure on the programs, but this impact will not show up on the margin when calculating an average total effect across all participants.
The committee cautions against a narrow focus on a sole set of outcomes in evaluating SBIR/STTR. Instead, this report emphasizes two distinct, though both important, forms of evidence for program impacts: direct impacts on grantees and the more dispersed impacts that accrue across energy innovation ecosystems. The former typifies the bulk of extant literature on SBIR/STTR and related programs as indicated by the committee’s literature survey below. The latter type of impact has received less scholarly attention, especially for SBIR/STTR specifically. These impacts require a more holistic approach to program evaluation that includes not just direct effects on firm outputs, such as patents, publications, sales, products, and jobs, but also on benefits that accrue to grantee partners, universities, other firms, and the agency itself (Furman, Porter, and Stern, 2002).
For instance, early scholars of innovation policy tended to assess SBIR/STTR through the lens of appropriability, or innovators’ inability to capitalize on the full value of their efforts due to inevitable knowledge spillovers. The supply of innovations, therefore, is sub-optimal without government intervention. While any framework for evaluating SBIR/STTR that ignores this key outcome is incomplete, the program may be viewed more fully for its contribution to the American system of energy innovation. It is reasonable then to ask what level of innovation SBIR/STTR supports for a given level of public funding. Chapter 5 is devoted to the evaluation of SBIR/STTR’s innovation outputs and includes measures of innovation outcomes. The results of Chapter 5 highlight that direct innovation, measured through awardee innovative output, accounts for only one aspect of the public goods rationale for SBIR/STTR. Direct financial incentives deal with the classic appropriability problem by allowing awardees to recoup some of the social gains of their innovations. But SBIR/STTR also provides additional public benefits from spillover effects that result from those innovations, which the committee estimates in Chapter 5. These spillovers generate further innovation in related technology and are one indication of the programs’ role in a broader system of innovative activity that is much more complex and wide-reaching than the direct interaction between DOE and awardees.
Help Agencies Meet Research and Development Goals
Second, the SBIR/STTR programs should facilitate agency objectives by supporting basic research and procurement using a network of small firms and their university partners. For over 30 years DOE has used SBIR (and STTR for over 20 years) to feed the energy ecosystem by directing resources strategically toward the agency’s technological mission areas. These mission areas are, of course, rooted in the rationale for the agency itself, which has historically aimed to foster technological growth trajectories that are inherently uncertain, and which unfold over long time horizons in ways that are often fundamental to the subsequent construction of private market solutions.
The SBIR/STTR programs represent a key lever for DOE to direct resources strategically toward socially desirable mission areas. In addition to promoting indirect innovative benefits through technology spillovers, setting the nation’s technological agendas, and integrating resources into broader innovation ecosystems, DOE uses SBIR/STTR to meet critical mission objectives. While difficult to measure with available data, DOE’s offices can leverage their funding opportunity announcements to attract specialists whose use of the funding will contribute to ongoing agency efforts in niche technological areas. The program structure itself also fosters such collaborations by encouraging partnerships with DOE national labs while creating potential pathways to procurement. This lens suggests the need for careful consideration of multiple award recipients (MARs), a source of controversy in SBIR/STTR assessment, which we address later in this chapter and in Chapter 4.
Creation of Human Capital
Third, the programs should serve as an engine for the creation of human capital through both firm growth and a broader, more diverse, pool of entrepreneurs who might contribute to energy innovation ecosystems. A rationale for SBIR/STTR lies in its capacity to attract the best ideas from a larger and more diverse population of entrepreneurs, many of whom inequitably face barriers to market entry in the absence of government involvement.
Commercialization of New Technologies
Fourth, the SBIR/STTR programs should promote commercialization of new technologies. A legislative mandate for SBIR is to examine its ability to alleviate capital market imperfections by acting as a source of seed funding for small start-ups. The re-branding of the program as “America’s seed fund” reflects the core program objective of helping competitive, but capital-constrained, small businesses weather the “valley of death.” One way of understanding the programs’ effectiveness, therefore, is to assess whether participating firms are achieving outcomes such as follow-on funding, commercialization of products, and development of management teams that reflect present or forthcoming commercial success (see Chapter 5).
Collaboration among agency offices, awardees, national labs, universities, and other commercial and academic partners represents an additional overarching task for evaluating the programs. These partnerships are a direct requirement of the STTR program and influence each of the four dimensions mentioned above. Collaboration outcomes are considered in Chapter 4.
The committee recommends the use of these lenses in concert when considering the evidence for SBIR/STTR’s effectiveness. This approach encourages sensitivity to the tradeoffs that can occur for any program seeking to promote positive firm outcomes along these four dimensions and better reflects the realities of how government funds science. Some of these tradeoffs, along with
several additional assessment challenges, are highlighted below. With the theoretical underpinnings of the rationale for SBIR/STTR in place, we move to a discussion of the empirical literature.
The variety of theoretical lenses for assessing SBIR/STTR suggests multiple outcome measures. Some of the outcomes are better represented in the current literature than others. While not a dominant outcome in the literature, employment outcomes associated with the SBIR/STTR programs have been examined in a few studies (Lerner, 2000; Siegel and Wessner, 2012; Wallsten, 2000; Howell, 2019; Howell and Brown, 2020). Other studies have explored SBIR/STTR’s impact on sales (Siegel and Wessner, 2012), follow-on private financing (Howell, 2017; Lanahan and Armanios, 2018; Toole and Czarnitzki, 2010; Wallsten, 2000), patents (Howell, 2017; Siegel and Wessner, 2012), and copyrights and trademarks (Siegel and Wessner, 2012). Scholars have also used the SBIR/STTR programs to study narrow single outcomes, sometimes conflating their own topic of study with the objectives of the programs.
Table 2-1 highlights six exemplary studies that assess the impact of SBIR directly. On the whole, there is evidence for numerous benefits of the program. Other studies present mixed results suggesting limitations of the research and opportunities for improvement. Lerner’s (2000) study found evidence that SBIR awards boost sales and employment, but that these effects were concentrated in geographic regions with high levels of venture capital activity. Lerner’s study also found that the benefits of SBIR were strongest in high-technology industries. A subsequent study found similar positive effects of the program on commercialization and sales (Audretsch, Link, and Scott, 2002). Other studies provide evidence that SBIR/STTR leads to increased follow-on financing from the private sector (Howell, 2017; Toole and Czarnitzki, 2007) and the creation of intellectual property (Howell, 2017; Siegel and Wessner, 2012).
Some studies indicate possible limitations of SBIR/STTR and areas for further evaluation and improvement. Wallsten (2000) found evidence that SBIR/STTR crowds out private financing, and also highlighted the difficulty in determining the direction of causality in evaluations of R&D funding programs. This is perhaps the chief empirical challenge facing program evaluators. Is it that SBIR/STTR awards are helping firms succeed where they otherwise might not? Or are the awards going to firms that would have achieved positive outcomes anyway? It is almost certain that both circumstances are at play, but causal evidence of program impacts requires that we understand which firms are capturing program benefits additional to what they would have seen otherwise.
This study builds upon prior studies of SBIR/STTR carried out by the National Academies (NRC, 2008; NASEM, 2016). Those studies took a survey-based approach. Since the 2016 report, researchers have begun to undertake more rigorous evaluations that draw on a variety of data sources to provide causal evidence,
TABLE 2-1 Studies Assessing the Impact of SBIR/STTR Awards
| Study | Program and Data | Method | Key Findings |
|---|---|---|---|
| Lerner, 2000 | SBIR and GAO | Matched sample | Awards have a positive effect on sales and employment; the effect is moderated in regions with private resources; multiple awards do not increase performance |
| Howell, 2017 | SBIR/STTR, DOE | RDD (applicants) | An award doubles the likelihood of private financing; the effect is stronger for financially constrained firms (multiple measures examined) |
| Audretsch, Link, and Scott, 2002 | SBIR, DOD | Survey and case study | Awards have positive effects on commercialization and sales |
| Siegel and Wessner, 2012 | SBIR, DOD | Regression | University ties yield higher levels of performance (multiple measures) |
| Toole and Czarnitzki, 2007 | SBIR, DHHS | Regression | University academics add value to SBIR firms when seeking private financing |
| Wallsten, 2000 | SBIR and GAO | Matched sample | Awards inhibit funding from the private sector; larger eligible firms secure more SBIR grants, but the grants do not impact employment |

NOTE: DHHS = Department of Health and Human Services; DOD = Department of Defense; GAO = Government Accountability Office; RDD = regression discontinuity design.
summarized here. Chapter 3 is a qualitative assessment of organizational practices while Chapter 4 provides a descriptive analysis of the landscape of SBIR/STTR awardees. Chapter 5 relies on econometric evidence using multiple data sources.
The SBIR/STTR programs have multiple objectives that suggest different outcomes for evaluation. Not all applicants seek private financing (Howell, 2017), for example, and many are focused on innovation goals specific to agency needs or have private goals that require long time horizons for commercialization (Lanahan and Armanios, 2018). The multiplicity of program goals complicates the interpretation of the results of single studies. For instance, it is tempting to interpret Lerner’s (2000) finding that multiple awards do not increase performance as evidence that multiple winners may be problematic. In some cases, this may be true, but one of the chief findings of this committee, for
example, is that repeat winners often serve critical research and procurement needs for DOE, as discussed in Chapter 4.
Another important limitation of much of the current SBIR/STTR evaluation literature is its heavy reliance on interviews (Link and Scott, 2000), surveys (Siegel and Wessner, 2012), case studies (Audretsch, Link, and Scott, 2002), and non-experimental research designs that struggle to determine the true causal effect of the programs. The complete lack of experimental research on SBIR/STTR in the nearly 40 years since the inception of the SBIR program suggests a strong need for federal agencies to consider conducting or funding such studies and providing researchers access to data on the performance of applicants. Seminal studies that capitalize on rich administrative data (Wallsten, 2000) are due for an update. Finally, regression studies often exploit rich detail on firm characteristics (Siegel and Wessner, 2012; Toole and Czarnitzki, 2007), but it is extremely difficult to measure and adjust for all relevant features of firms that may explain differing performance between awardees and non-awardees.
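The covariate-matching logic underlying many of these non-experimental designs can be sketched in a few lines. The covariates (firm age and log employment), sample sizes, and simulated award effect below are purely illustrative assumptions, not estimates from any SBIR/STTR data:

```python
import numpy as np

def nearest_neighbor_att(x_treated, y_treated, x_control, y_control):
    """Average treatment effect on the treated via 1-nearest-neighbor
    covariate matching: each awardee is paired with the non-awardee
    whose observed characteristics are most similar."""
    effects = []
    for xt, yt in zip(x_treated, y_treated):
        j = np.argmin(np.linalg.norm(x_control - xt, axis=1))
        effects.append(yt - y_control[j])
    return float(np.mean(effects))

rng = np.random.default_rng(1)
# Hypothetical covariates: firm age and log employment (standardized).
x_control = rng.normal(0, 1, (2000, 2))
y_control = x_control.sum(axis=1) + rng.normal(0, 0.1, 2000)      # no award
x_treated = rng.normal(0, 1, (200, 2))
y_treated = x_treated.sum(axis=1) + 1.0 + rng.normal(0, 0.1, 200)  # award adds 1.0

att = nearest_neighbor_att(x_treated, y_treated, x_control, y_control)
print(round(att, 2))  # near the simulated award effect of 1.0
```

The sketch also illustrates the limitation noted above: matching on two observed covariates cannot rule out bias from the many unobserved firm characteristics that may differ between awardees and non-awardees.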
The Howell (2017) study is the only SBIR/STTR study to make use of applicant data for all firms for two DOE program offices (Energy Efficiency and Renewable Energy, and Fossil Energy), including those denied awards. This study represents a notable advantage over others given its ability to compare winners and losers in order to estimate additional impacts of an SBIR/STTR award. Howell’s is perhaps the most relevant benchmark and empirical standard for this committee’s work for three reasons: 1) it focuses specifically on DOE, 2) her access to applicant data enabled a strong quasi-experiment that allowed her to compare winners and losers just above and below a merit-based cutoff, and 3) her results are recent enough to warrant extension to current evaluation. Using a regression discontinuity design, Howell found real value for Phase I but not as much for Phase II in terms of private follow-on financing and patents. While providing perhaps the strongest causal evidence available on DOE SBIR/STTR outcomes, the nature of the regression discontinuity design means the exclusion of potential at-risk firms who never applied or who performed very well or very poorly on the reviewer merit scale. To date, no studies have tried to characterize this “at-risk” sample of firms that resemble applicants and program targets, but for some reason, such as lack of information, do not apply for an award. The outcomes chapter of this report, Chapter 5, addresses this challenge by matching winners to non-winners across a nationally representative sample of U.S. firms.
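A regression discontinuity comparison of the kind Howell uses can be sketched as follows. The merit scale, award cutoff, bandwidth, and simulated outcomes are illustrative assumptions rather than DOE data:

```python
import numpy as np

def sharp_rdd_estimate(score, outcome, cutoff, bandwidth):
    """Local linear sharp RDD: fit a separate line to applicants just
    below and just above the merit cutoff, then take the jump in the
    fitted outcome at the cutoff as the award effect."""
    left = (score >= cutoff - bandwidth) & (score < cutoff)
    right = (score >= cutoff) & (score <= cutoff + bandwidth)
    fit_left = np.polyfit(score[left], outcome[left], 1)
    fit_right = np.polyfit(score[right], outcome[right], 1)
    return np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)

# Simulated applicant pool: merit scores in [0, 100], award at score >= 70.
rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 5000)
award = (score >= 70).astype(float)
# Follow-on financing rises smoothly with merit, plus a jump from the award.
financing = 0.02 * score + 1.5 * award + rng.normal(0, 0.5, 5000)

effect = sharp_rdd_estimate(score, financing, cutoff=70, bandwidth=15)
print(round(effect, 2))  # close to the simulated jump of 1.5
```

Because the estimate relies only on applicants near the cutoff, it says little about firms far above or below it, and nothing about the “at-risk” firms that never apply, which is exactly the external-validity gap described above.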
An additional body of work has developed around moderating factors of SBIR effectiveness. Joshi, Inouye, and Robinson (2018) examined workforce diversity within granting agencies and found a strong link between representation of women and underrepresented minority groups at those agencies and the successful conversion of Phase I to Phase II grants for those same groups. Agencies with lower representation saw lower conversion rates for women and minorities. DOE is the third lowest of all mission agencies in terms of representation of these groups (Joshi, Inouye, and Robinson, 2018). Other work has shown that the performance of the program varies depending on geography, with awardees doing better in areas with greater access to resources.
The conflicting evidence on SBIR/STTR’s performance partially reflects the fact that there are multiple pathways along which firms can evolve through the programs and multiple institutional forces at play in shaping program outcomes. For instance, some firms receive a Phase I award and move quickly to other private funding sources. Others proceed to Phase II, perhaps generating new SBIR/STTR Phase I applications and ultimately procurement relationships. Conventional empirical approaches tend to miss the qualitative differences in these experiences. Further, myriad institutions besides the federal government play a role in determining SBIR/STTR effectiveness. Seventeen U.S. states have non-competitive state matching programs for SBIR/STTR winners, and 45 states offer some form of Phase 0 support (Lanahan and Feldman, 2015).
Lanahan and Feldman’s (2017) study of non-competitive state matching programs encouraged evaluation of SBIR/STTR within a broader mix of policies and institutions rather than viewing the program in a silo of direct inputs and outputs. The results revealed that the matching state programs increase the likelihood of Phase I applications and the successful conversion of Phase I to Phase II awards. Additional public funds increased the quality of SBIR/STTR Phase II applications. The results indicate that additional SBIR/STTR funding leads to greater application intensity, which raises overall quality of funded proposals.
Additional influences on SBIR/STTR effectiveness include regional institutions and collaborative patterns among awardees, universities, and national laboratories. The value of these collaborations for innovative activity more generally is clear. A substantial literature supports the finding that technology transfer programs at universities of high research quality induce commercialization (Lockett and Wright, 2005; Siegel, Waldman, and Link, 2003).
Haeussler and Colyvas (2011) surveyed more than 2,000 life scientists in the United Kingdom and Germany and identified numerous drivers of commercial activity among academic researchers and universities. They found that publications and views of the importance of patents were positive predictors of commercial activity and consulting, and that these effects are most pronounced in fields such as engineering and clinical medicine, in which commercial applications are more viable.
Lastly, studies have found that procurement and commercialization linkages represent another major source of value for SBIR/STTR. A previous SBIR assessment by the National Academies (NRC, 2008) described SBIR as a path to procurement, especially for the Department of Defense. While no studies specifically analyze the causal effects of SBIR/STTR on procurement, there is substantial evidence that government procurement in general has spurred innovation and commercialization.
With the above framework for the rationale for SBIR/STTR in place, this section identifies empirical challenges in evaluating the program. Underlying this broad assessment challenge are the questions: What types of outcomes should evaluators observe? How should they weigh each of those outcomes in determining the overall effectiveness of the program? Are the most important outcomes even observable? Which ones are not? What types of evidence matter most (i.e., quantitative versus qualitative, “extreme” successes versus average effects versus notable failures), and across what time scales and domains? Answers to these questions will likely depend on the specific area of SBIR/STTR under consideration and the associated assessment lens. Chapter 1 of this report identified the central challenges of assessment for programs like SBIR/STTR. This section examines those challenges in greater depth, highlighting attempts to deal with those challenges in the current SBIR/STTR literature.
Published studies of SBIR/STTR’s impacts have examined a wide array of outcomes, including firm-level sales (Lerner, 2000; Audretsch, Link, and Scott, 2002), employment (Lerner, 2000; Wallsten, 2000), follow-on financing (Toole and Czarnitzki, 2007; Howell, 2017), patenting behavior (Toole and Czarnitzki, 2007; Siegel and Wessner, 2012), and certification (Keller and Block, 2013; Lanahan and Armanios, 2018). These studies represent some of the best attempts to generate quantitative estimates of program effectiveness. Each outcome assumes a distinct objective for the programs. For instance, estimates of the programs’ effect on patenting behavior suggest an innovation-based rationale, while studies of the programs’ effect on employment outcomes reflect an expectation that the programs create jobs directly in awarded firms. The list above also privileges quantitative outcomes that permit direct attribution of program effects at the firm level. For example, firms may leverage infrastructure at government labs and universities, which affects the quality of the research and speed to market. These unobserved and more qualitative inputs reflect the difficulty in measuring and attributing more complex outcomes, such as overall technological output in key DOE mission areas. The next section discusses some more specific assessment challenges in greater detail, highlighting the committee’s approach to dealing with those challenges and appealing to relevant literature.
Challenges of Measurement and Attribution
Assessment of SBIR/STTR is challenging for several reasons. For one, it is difficult to observe and measure certain outcomes of program success. Realizing the full potential of DOE SBIR/STTR investment requires long time horizons, cumulative activity, and large infrastructure investment. In addition,
measurement of the production of direct innovation is difficult. Ideally, the committee would like data on new product innovations introduced to the market or new energy-efficient production processes, as these would enable an evaluation of the commercialization pathway associated with DOE-sponsored science. Yet the ideal data do not exist. Tracing commercialization in the energy sector is further complicated by its focus on business-to-business markets rather than the more publicized launches of new consumer products. DOE SBIR/STTR projects often produce innovations, such as efficiency gains for the electricity grid, that provide a public good but are difficult to price.
Similarly, DOE creates valuable knowledge spillovers as SBIR/STTR funding is targeted to specific technological domains through funding opportunity announcements; however, the resulting benefits are difficult to measure because they do not accrue to the direct beneficiary of the award but instead benefit other companies that use the technical knowledge produced. This report addresses this difficulty by narrowing the unit of analysis from the firm to the funding opportunity announcement. This permits an analysis based on the specific knowledge created during the research associated with the announcement. Analysis of specific technological areas based on patent abstract text permits a determination of the broader ecosystem effects of SBIR/STTR funding. These effects are considered in Chapter 5.
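The text-based linkage idea can be sketched with bag-of-words cosine similarity between patent abstracts. The abstracts below are invented for illustration, and a production analysis would use richer representations (stemming, TF-IDF weighting, or embeddings):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two abstracts:
    1.0 for identical word distributions, 0.0 for no shared words."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical abstracts: an awardee patent and two later patents.
awardee = "solid oxide fuel cell electrode coating for grid scale storage"
related = "electrode coating method for solid oxide fuel cell stacks"
unrelated = "ergonomic handle design for garden tools"

# A later patent in the same technological area scores much higher.
print(cosine_similarity(awardee, related) > cosine_similarity(awardee, unrelated))  # True
```

High-similarity patents filed by non-awardees after an award, under this kind of measure, would be one candidate proxy for the ecosystem spillovers that are otherwise invisible in firm-level outcome data.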
Another key example of this type of assessment challenge lies in measuring economic outcomes. Program evaluations, by their very nature, focus on marginal gains from an intervention for the marginal firms, with program impacts reported in terms of average effects. As illustrated in Chapter 5, many of SBIR/STTR’s economic impacts derive in large part from a small subset of firms who achieve outstanding results. This study reports economic findings at the far-right end of the distribution of firm performance, in addition to marginal effects.
To be sure, there persists an inevitable degree of uncertainty in direct attribution of program effects across all relevant aspects of the systems of innovation in which SBIR/STTR is embedded. It is not realistic to expect that all program impacts be precisely measurable. As described in Mazzucato (2015), there is “strong uncertainty underlying technological innovation” and “feedback effects that exist between innovation, growth, and market structure.” These make a complete assessment of SBIR/STTR on all relevant dimensions impossible.
Conflict Among Program Objectives
A second challenge surfaces in the apparent conflict between certain program goals. For example, the programs are expected by many to create jobs, but it is not reasonable to expect many young firms to do so in the short term given SBIR/STTR’s explicit incentives for firms to spend award money on partnerships with university and national lab experts who are already employed. This conflict may be even more pronounced when SBIR/STTR awards are used as a procurement tool given the increased incentive for the agency and awardee to
encourage partnership with a national lab rather than to create new external positions.
While some firms may not demonstrate the same growth in economic outcomes as others, they may be doing more to advance the needs and mission of DOE. This is particularly true for firms using SBIR/STTR to fund niche technological objectives that are central to DOE’s scientific needs. These firms may also have more basic research objectives that are relevant to agency missions, or their proposed innovations may be “high risk/high reward” (Lanahan and Armanios, 2018). In either case, commercialization and firm growth may be difficult and occur along very long time horizons even as substantial benefits accrue from basic research and potential agency procurement. This provides some justification for continued support of MARs in cases in which growth in commercialization and conventional economic outcomes proceeds at a relatively slow pace.
Perspectives on Multiple Award Recipients
Multiple award recipients have been a source of controversy in evaluating the SBIR program since at least the 1992 reauthorization (Public Law 102-564) of the program (GAO, 1992). Lerner’s (2000) use of the derogatory label “mills” reflected concern that repeat winners may obstruct core SBIR objectives, namely compensating for externalities due to information spillovers and certifying firms to outside capital investors. Since then, a debate has emerged among scholars and administrators as to the role MARs play in meeting SBIR/STTR objectives.
One of the primary concerns about MARs is that they may depend on SBIR/STTR for a disproportionate share of their revenue and, in fact, may not seek other forms of revenue outside the program (Howell, 2015). Recent work on the Department of Defense has shown that 78 percent of MARs receive over half of their federal contracting revenue through SBIR (Tingle, 2016). Other studies of MAR commercialization present conflicting evidence. A National Academies study (NRC, 2008) found that, at least among Department of Defense SBIR firms, MARs have higher average commercialization records than non-MARs, while Link and Scott (2009) found the opposite. These evaluations relied, respectively, on survey data (NRC, 2008) and on crude measures of economic outcomes such as binary indicators of commercialization (Link and Scott, 2009). Howell (2015) corroborated the negative findings with more rigorous data and methodology: using administrative data on a subset of SBIR/STTR awards and applications within DOE (those for the Office of Energy Efficiency and Renewable Energy and the Office of Fossil Energy), she found that first-time winners were much more likely to commercialize and to receive venture capital than firms that had received multiple awards.
Missing from these analyses is an assessment of MAR performance on important non-commercial outcomes such as procurement and basic research. As discussed above, the SBIR program has multiple and conflicting objectives, as defined in the legislation and in the Small Business Administration’s stated goals. While the literature has viewed SBIR/STTR as small business finance programs aimed at new firms with high growth potential, it is clear from the authorizing legislation and the programs’ stated goals that they were intended to have a broader impact.
The committee emphasizes that there is substantial variation among MARs; firms that win multiple awards may differ from one another in several important ways. First, a frequent winner that is struggling to commercialize because of the non-incremental nature of its technology is quite different from one that acquires frequent grants as part of its business model. Second, firms may establish long SBIR/STTR track records as part of a symbiotic relationship with the funding agency, especially in cases in which SBIR/STTR winners are uniquely equipped to meet specific procurement needs. These firms develop deep relationships with their funders over years of SBIR/STTR activity within a single agency. This vertical accumulation of awards within a single agency may lead firms to help expand agency capacities well beyond what a typical SBIR/STTR awardee can accomplish. On the other hand, more horizontally oriented firms may search for awards across multiple agencies to match their own specific technologies or to take advantage of an established familiarity with the application process. The committee distinguishes between these horizontal and vertical MARs in this report and presents descriptive statistics for both groups. Chapter 3 provides evidence for these conclusions, drawn from interviews with DOE SBIR/STTR award managers.
The current SBIR/STTR literature reflects a dominant priority of determining additionality in estimating program outcomes. In other words, do evaluations reflect a plausible comparison between what happened under SBIR/STTR and what would have happened had the program not been implemented? The literature contains numerous attempts to evaluate SBIR/STTR by identifying this counterfactual condition and using it as a benchmark for measuring program success. Typically, this involves one of three primary approaches to inferring the causal impacts of the program. First, randomized experiments represent the ideal platform for causal inference because they virtually guarantee that any observed impact is “additional” in the sense that it would not have occurred without the program. However, because SBIR/STTR awards are made on the basis of application quality, the process that separates recipients from non-recipients is inherently non-random. Second, quasi-experimental approaches can often approximate random assignment. Howell (2017), for example, exploits a merit-based assignment threshold, comparing program outcomes for applicants just above the cutoff with those just below it. This approach is celebrated in the economics literature for its ability to uncover causal effects. However, such methods are amenable to only a narrow set of outcomes and often allow inference to only a small subset of the population of awardees and potential awardees. In addition, these types of studies are possible only if an agency provides access to administrative data.
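The threshold-based logic of such quasi-experimental designs can be sketched numerically. The snippet below is a minimal illustration only, not drawn from Howell (2017) or from any agency data: it simulates hypothetical application scores, imposes an invented award cutoff with a known true effect, and recovers that effect by fitting local linear regressions on either side of the cutoff.

```python
import numpy as np

def rdd_estimate(score, outcome, cutoff, bandwidth):
    """Sharp regression-discontinuity estimate of a treatment effect.

    Fits a local linear model on each side of the cutoff within the
    bandwidth and returns the estimated jump in the outcome at the cutoff.
    """
    x = score - cutoff
    keep = np.abs(x) <= bandwidth
    x, y = x[keep], outcome[keep]
    d = (x >= 0).astype(float)  # award indicator: above the cutoff
    # OLS: y = b0 + b1*d + b2*x + b3*(d*x); b1 is the discontinuity
    X = np.column_stack([np.ones_like(x), d, x, d * x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Simulated illustration (all values invented): a true award effect
# of 2.0 on the outcome at a merit cutoff of 70.
rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 5000)
outcome = 0.05 * score + 2.0 * (score >= 70) + rng.normal(0, 1, 5000)
tau_hat = rdd_estimate(score, outcome, cutoff=70, bandwidth=15)
```

The estimate `tau_hat` recovers the simulated jump because, near the cutoff, award receipt is as good as random; this is also why the design supports inference only for applicants close to the threshold, one of the limitations noted above.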
A third category of evaluations proceeds along non-experimental lines. Such studies may attempt to infer additionality by creating matched “twins” for winners from a population of non-winners based on a number of observable firm characteristics (e.g., Lerner, 2000). Other studies control statistically for those characteristics to rule out competing explanations for what appear to be effects of the programs (e.g., Siegel and Wessner, 2012). A final set of approaches acknowledges that certain important outcomes are not measurable within a quantitative causal inference framework and uses qualitative data to uncover mechanisms and process details that are critical to understanding program outputs.
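The matched “twins” approach mentioned above can likewise be sketched. The toy example below is purely illustrative (the covariates, firms, and outcomes are invented): for each award winner it selects the nearest non-winner on standardized observable characteristics, and that twin’s outcome then serves as the counterfactual benchmark.

```python
import numpy as np

def matched_control_outcomes(X_treat, X_ctrl, y_ctrl):
    """For each treated firm, find the nearest non-winner 'twin' by
    Euclidean distance on standardized covariates; return twin outcomes."""
    mu, sd = X_ctrl.mean(axis=0), X_ctrl.std(axis=0)
    Zt = (X_treat - mu) / sd
    Zc = (X_ctrl - mu) / sd
    # Pairwise distances between treated rows and control rows
    dist = np.linalg.norm(Zt[:, None, :] - Zc[None, :, :], axis=2)
    twins = dist.argmin(axis=1)
    return y_ctrl[twins]

# Invented toy data: covariates are [firm age, employees];
# the outcome is (say) log revenue for non-winning firms.
X_treat = np.array([[3.0, 10.0], [7.0, 40.0]])
X_ctrl = np.array([[2.0, 12.0], [8.0, 35.0], [20.0, 200.0]])
y_ctrl = np.array([1.1, 2.0, 4.5])
y_twins = matched_control_outcomes(X_treat, X_ctrl, y_ctrl)
```

The estimated program effect would then be the difference between winners’ outcomes and `y_twins`. The approach’s limitation, as the text notes, is that matching only on observable characteristics cannot rule out unobserved differences between winners and their twins.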
This last approach is least well-represented in the SBIR/STTR literature, perhaps because it stacks up poorly against the others in terms of its ability to meet strict empirical requirements of additionality. However, the committee believes that qualitative and process-oriented studies of SBIR/STTR are indispensable in addressing some of the assessment challenges identified above. For one, such studies can better account for multiple and conflicting outcomes at once. Further, they are likely the best means of elucidating causal mechanisms that explain the details of program impacts and clarifying opportunities for improvements to process.
This committee’s assessment of SBIR/STTR employs the multi-part rationale for the programs discussed above. It examines multiple outcomes, accounting for different program objectives. In selecting those outcomes, the committee prioritized its formal Statement of Task while also considering the gaps in prior literature such as the need for consideration of qualitative data on process, extreme outliers, and outcomes such as procurement and basic research which may compete with more straightforward economic outputs such as job growth and product commercialization. As such, our assessment employs both qualitative primary data and quantitative administrative data at the firm, funding opportunity announcement, and program office levels.
The committee also placed great emphasis on empirical approaches that explicitly address additionality concerns. Chapter 5 presents results of statistical models designed to approximate the counterfactual of what would have happened to awardees in the absence of SBIR/STTR funding. However, these approaches are not, on their own, sufficient for a complete evaluation of SBIR/STTR along the dimensions requested. The committee therefore used qualitative interviews to assess outcomes and processes that are not amenable to quantitative and experimental research designs. This approach also addresses the need for good descriptive data, especially at the far right and left ends of the distribution of performance outcomes.
Finally, throughout its analysis, the report emphasizes that individual estimates of program performance should not be interpreted in isolation. Rather, these results are part of SBIR/STTR’s performance within a complex innovation system. Strong performance in certain areas may mean corresponding deficits in others, and individual firm performance will not capture the full value of return to SBIR/STTR investment given spillovers and unobservable influences within these systems. In-depth interviews and detailed description of patterns and findings at multiple levels of SBIR/STTR process are essential for explaining these dynamics.