7. Methodology Development: Primary Research

The wide scope of the current study and gaps in the existing data will necessitate a considerable amount of primary research. The approach adopted is to select the methodological elements best suited to complement and supplement existing information. The study objectives will be realized using the most efficient combination of methods.59 These include analyzing existing studies and databases, interviewing program officials, surveying various program and technical managers and project participants, carrying out case studies, using control groups and counterfactual approaches to isolate the effects of the SBIR program, and applying other methods such as econometric, sociometric, and bibliometric analysis. These tools will be used on an as-needed, limited basis to address the questions for which they are best suited.60

A dictionary of variable names, with definitions common across all of the instruments, will be developed. This dictionary will form part of the training materials used by interviewers, survey managers, and those populating variables with administrative data.

Surveys

Surveys are an important methodological element of the study. Program staff will be interviewed, with these interviews focusing (at least initially) on process issues (mechanisms, selection procedures, etc.) and on the contribution of the program to the agency. This will include understanding the motivations and objectives of the program managers. What are their goals and incentives? How is their performance within the agency SBIR program judged? Development of a core questionnaire and a basic reporting template may be appropriate, even though interviews with more senior program managers are likely to be free-ranging, with many open-ended questions, and more agency-specific than those with participants. A core template with five derivative templates (one for each of the five agencies identified in the legislation) seems a promising approach.
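The cross-instrument variable dictionary described above is, in effect, a shared lookup table that every instrument and data-entry process validates against. A minimal sketch in Python of how such a dictionary might be organized and used (the variable names, fields, and instrument labels shown are hypothetical, not taken from the study's actual instruments):

```python
# Sketch of a cross-instrument variable dictionary (all entries hypothetical).
# Each variable carries a definition, a type, and the instruments that use it,
# so interviewers, survey managers, and data-entry staff share one vocabulary.
VARIABLE_DICTIONARY = {
    "firm_size": {
        "definition": "Number of employees at the time of the award",
        "type": "int",
        "instruments": ["phase1_survey", "phase2_survey", "case_study"],
    },
    "sbir_sales": {
        "definition": "Cumulative sales attributable to the SBIR project, USD",
        "type": "float",
        "instruments": ["phase2_survey"],
    },
}

def validate_record(record: dict, instrument: str) -> list:
    """Return the names of variables in `record` that are not defined
    for `instrument` in the shared dictionary."""
    return [
        name for name in record
        if name not in VARIABLE_DICTIONARY
        or instrument not in VARIABLE_DICTIONARY[name]["instruments"]
    ]

# A Phase I record containing a Phase II-only variable is flagged.
print(validate_record({"firm_size": 12, "sbir_sales": 5e6}, "phase1_survey"))
```

A check of this kind could run whenever administrative data are loaded, catching variables used outside their defined instruments before analysis begins.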
Higher-level research officials, such as deputy institute directors, may be interviewed about the SBIR program in comparison with other research support provided by the agency.

SBIR award recipients will also be surveyed. The key issue here will be to identify the correct respondent: one who both knows the answers and is willing to fill out the instrument. The survey will begin by contacting those already in the database of firm information, which covers all applicants for SBIR Phase I or Phase II grants.61 The database includes the name of the SBIR Point of Contact (POC) for each firm (along with phone, address, and email). In fact, the database covers many firms that have also received awards from NSF, NASA, and DoE. Most NIH awardees do not submit proposals to these agencies and, therefore, are not covered. Surveys will be field-tested to ensure that they are effective and encourage compliance. The first step will be to develop a short survey to cover those firms lacking a point of contact. This survey will ask for information about the POC and solicit responses to a very small set of firm-related questions, facilitating development of a comprehensive database of POCs. Subsequent recipient surveys will be directed to these POCs, although it is likely that certain information will require responses at the corporate level of the firm and at the level of the principal investigator (PI). The following questionnaires and surveys will likely be administered:

• Survey of program managers, focusing on major strategic questions and overall program issues and concerns;
• Survey of technical managers, focusing on operations and issues of program implementation;
• Survey of SBIR Phase II participants, focusing both on outcomes from SBIR grants (especially commercial outcomes) and on program management issues from the recipient perspective. This survey is likely to have both a general and an agency-specific component.
It is also likely to have a section focused on company impacts (as opposed to project impacts);
• Survey of SBIR Phase I participants, focusing on initial selection and support issues;
• Additional limited surveys, focusing on particular aspects of the program (possibly at specific agencies), can be initiated, with limiting parameters to be specified.

59 For a review of methodologies for evaluating technology programs, see D. Campbell, Research Design for Program Evaluation, Beverly Hills: Sage, 1984. See also L. Georghiou and D. Roessner, "Evaluating Technology Programs," Research Policy, 29, 2000.
60 See the additional discussion of the counterfactual issue in Section 7 of this chapter, pp. 32-33.
61 Available from BRTRC, the consulting/survey firm with which the NRC worked in the 1999 Fast Track study.
Each of the survey instruments will have a stated purpose, and each will be "mappable" to the objectives of the study to which it relates.62 All surveys will be pre-tested. These surveys are discussed in more detail below.

Program manager survey

The program manager survey will focus on strategic management issues and on managers' views of the program. It will be designed to capture senior agency views on the operations of the SBIR program, focusing on concerns such as funding amounts and flexibility, outreach, topic development, top-level agency support for SBIR, and evaluation strategies. The survey may be administered through face-to-face interviews with senior managers, by telephone, by mail, via electronic questionnaire, or through some combination of these approaches. All senior program managers at the agency level and all program managers at the sub-unit level (e.g., NIH institutes, DoD agencies) are to be covered. Altogether, there are approximately 45 program managers at this level in the five study agencies.

Technical manager survey

While program managers should have a strategic view of the SBIR program at their agency, the program is to a considerable extent operated by other managers. The responsibilities of these technical managers (TMs) focus on the development of appropriate topics, the appointment of selection panels, process management (e.g., ensuring that reviews are received on time and that the selection and management process meets approved timelines), and contacts with the grant recipients themselves. The Committee plans to conduct informal interviews with selected TMs. In addition, a survey instrument is currently being designed that will be sent to each TM in each agency. This instrument will address technical management issues and will focus on the relationship between SBIR projects and the non-SBIR components of each agency's research and development program.
TMs, for example, may play a pivotal role in the subsequent take-up of SBIR-funded research within DoD, and the survey is aimed at enhancing assessment of that possibility. The survey will therefore be delivered to all TMs in the five agencies. Approximately 200-300 potential survey recipients are anticipated.

SBIR Phase I recipient survey

In order to identify characteristics of firms and projects that received SBIR Phase I awards only, the Committee anticipates implementing a survey of SBIR Phase I recipients. The objective of this survey is to enhance understanding of project outcomes and to identify possible weaknesses in the SBIR Phase I-Phase II transition that may have excluded worthy projects from SBIR Phase II funding. (It should be understood that the Committee has no preconceptions on this issue, only that this is an important transition point and winnowing mechanism in SBIR, and should therefore be reviewed.) As more than 40,000 SBIR Phase I grants have been made, it is not feasible to cover all SBIR Phase I winners. Therefore, the Committee will develop an initial set of selection criteria, aimed at ensuring that outcomes are assessed for a range of potential independent variables. These will include:

• Size of firm
• Geographic location
• Women and minority ownership
• Agency
• Multiple vs. single award winners
• Industry sector

SBIR Phase II recipient surveys

The SBIR Phase II recipient survey will be a central component of the research methodology. It will address commercial outcomes, process issues, and post-SBIR concerns about subsequent support for successful companies. The surveys must provide data that will allow the Committee to address the various questions defined in Sections 3 and 4. Specifically, survey methodologies will need to differentiate between:

• Funded and unfunded applications

62 "Mappability" means that questions on the survey instrument must map, individually or by groups, to the objectives of the study.
A survey is a methodological tool for collecting information to meet a study's objective.
• Women-led/minority-led businesses
• Different geographical regions, or perhaps clusters of zip codes
• SBIR Phase I vs. Phase II awards
• Firms by size: single-person companies vs. micro corporations vs. relatively large established companies (100+ employees?)63
• Firms by total revenues and by revenues attributable to the SBIR-related commercialization
• Firms by employment effects
• Recipients of single vs. multiple awards
• Other criteria, including the procedural efficiency of converting from Phase I to Phase II

The Committee is also interested in finding relevant points of comparison between research quality and research value. However, such comparisons are complicated because SBIR and non-SBIR funding is differentiated not only by the size of the firm but also by the kind of research, the funding rationale, and the time horizon. For example, NSF views SBIR as a tool for funding research that leads to commercialization, while the remaining 97.5 percent of NSF funding is for non-commercial research. Here, a comparison would be inappropriate. In addition, non-SBIR grants operate under different timeframes and are usually at a different phase of the R&D cycle, requiring different resource commitments. To address this point, the Committee will consider whether the Phase II survey should be expanded to identify awards that have received some form of quality recognition from an outside agency. If the only competitors for such recognition are other SBIR projects (as is the case with the Tibbetts Award), this may identify the best SBIR projects but say little about comparisons with non-SBIR projects. All of these data will be collected on an agency-by-agency basis, to ensure sufficient data for the statistical analysis of each agency. The result will be a survey matrix, with the x-axis showing potential explanatory variables, such as multiple- vs. single-award winners, and the y-axis showing the individual agencies.64 Each cell of the matrix is important to the extent that the specified data help to address study objectives. Detailed articulation between objectives and survey instruments will be an early-stage task for SBIR Phase II. See Annex F for a prototype of this matrix.

Background

Award numbers. Although data inconsistencies mean that the number of SBIR Phase II awards from 1992-2000 is not known exactly, it is estimated at about 10,800. Based on the three published reports, about 7 percent of these SBIR Phase II awards are from the smaller agencies. Thus, it is estimated that about 10,000 awards have been made by the five study agencies. There are no good data concerning the distribution by firm (some firms have received more than 100 awards; many others, just one).

Existing Commercialization Data

DoD has data by project for 10,372 SBIR Phase II projects (including projects from 1983). Since 1999, firms that have submitted SBIR or STTR proposals to DoD have had to enter firm information and information on sales and investments for all of the SBIR Phase II awards they have received, regardless of awarding agency. The DoD commercialization database contains information on approximately 75 percent of DoD Phase II awards from 1992 to 2000, 67 percent of NASA and DoE awards, 54 percent of NSF awards, and 16 percent of NIH/HHS awards. DoE has provided commercialization data by product, which cannot be directly associated with projects, as this may lead to double counting of awards to firms. NASA does have data by project, although these do not appear to correspond directly to the DoD data.

63 Responses to questions about size are often faulty. Some proposal writers enter the size of their division of the company rather than the whole company. Some pull a number out of the air based on the last estimate they heard.
A company may apparently vary substantially in size across several proposals awarded in the same year (even proposals submitted within days of each other). However, by grouping the sizes into broad bands, most of this type of variation can be avoided. One should keep in mind that a company may be very small for its early awards, grow while continuing to submit, eventually become ineligible (over 500 employees), and then shrink and start submitting again. What is relevant is the size at the time of the award.
64 The matrix is provided in Annex G.
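The agency-by-variable survey matrix described above amounts to a cross-tabulation of responses, with one cell per pairing of an explanatory-variable value and an agency. A small sketch of that tabulation, using invented response records (the field names are illustrative, not the study's actual variables):

```python
from collections import Counter

# Hypothetical survey responses: each record carries the awarding agency
# and one explanatory variable (here, multiple- vs. single-award winner).
responses = [
    {"agency": "DoD", "multi_award": True},
    {"agency": "DoD", "multi_award": False},
    {"agency": "NIH", "multi_award": True},
]

# Each matrix cell counts the responses for one (variable value, agency) pair;
# sparsely populated cells signal where additional sampling may be needed.
matrix = Counter((r["multi_award"], r["agency"]) for r in responses)
print(matrix[(True, "DoD")])
```

The same tabulation extends to any of the explanatory variables listed earlier simply by changing the key used to build the counter.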
Sampling Approaches and Issues

The question of sampling is of central importance here, and a more extended discussion of the issues raised can be found in Annex G. The Committee proposes to use an array of sampling techniques to ensure that enough projects are surveyed to address a wide range of both outcomes and potential explanatory variables, and also to address the problem of skew noted earlier.

• Random sample. After integrating the 10,000 awards into a single database, a random sample of approximately 20 percent will be drawn for each year (e.g., 20 percent of the 1992 awards). Generating the total sample one year at a time will make it easier to track changes in the program over time; otherwise, the larger number of awards made in recent years could dominate the sample.
• Random sample by agency. Surveyed awards will then be grouped by agency; additional respondents will be randomly selected as required to ensure that at least 20 percent of each agency's awards are included in the sample.
• Top performers. In addition to the random sample, the problem of skew will be addressed by ensuring that all projects meeting a specific commercialization threshold are surveyed, most likely $5 million in sales or $5 million in additional investment (derived from the commercialization database). Estimates from current DoD commercialization data indicate that the "top performer" part of the survey would cover approximately 385 projects.
• Firm surveys. One hundred percent of the projects that went to firms with only one or two awards will be polled; these are estimated at approximately 30 percent of the 10,000 SBIR Phase II awards, based on data from 1983 to 1993. These are the hardest firms to find: address information is highly perishable, so response rates are much lower.
• Coding. The project database will track which survey corresponds with each response.
For example, it is possible for a randomly sampled project from a firm that had only two awards also to be a top performer. In that case, the response could be coded as part of the random sample for the program, the random sample for the awarding agency, the set of top performers, and the sample of single or double winners. In addition, the database will code each response for the array of potential explanatory or demographic variables listed earlier.
• Total number of surveys. With the random sample set at 20 percent, the approach described above will generate approximately 5,500 project surveys and approximately 3,000 firm surveys (assuming that each firm receiving at least one project survey also receives a firm survey). Although this approach samples more than 50 percent of the awards, multiple-award winners would be asked to respond to surveys covering about 20 percent of their projects.

Projected response rates. The response rate is expected to be highly variable. It will depend partly on the quality of the address information, which is itself a function of the effort expended on address collection and verification before the surveys are administered, and partly on the extent of follow-up with non-respondents. The latter is especially important: one agency manager noted that his survey had a final response rate of 70-80 percent, but that the initial rate, before follow-up phone calls, was approximately 15 percent. As noted in Siegel, Waldman, and Youngdahl (1997), response rates to technology surveys are notoriously low, averaging somewhere in the teens. Thus, a 20 percent response rate for a technology survey can be considered high, especially if it involves sampling small firms, and there is potential attrition in the sample through exits or mergers and acquisitions. The NRC surveys are expected to exceed this benchmark for two reasons.

Experience: The NRC has assembled expertise with an excellent track record of effective sampling of firms.
Previous survey work for the Department of Defense SBIR Fast Track study yielded a response rate of 68 percent.

Stewardship: Substantial time and effort will be devoted to following up the survey with phone calls to non-respondents and to those who provide incomplete information.

While the NRC study expects a significant response rate, based on the same techniques that have proved successful in the past, it is inherently difficult to predict the precise size of the actual result.
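The layered sampling scheme described in this section (a 20 percent random sample per year, agency-level top-ups, a census of top performers, and a census of single- and double-award firms) could be implemented along the following lines. The coding of each response into multiple sample frames follows the text; the helper itself, and all field names and thresholds, are an illustrative sketch, not the study's actual procedure:

```python
import random

def build_sample(awards, rate=0.20, top_threshold=5_000_000, seed=0):
    """Layered sample: (1) a random `rate` of awards per year, (2) a top-up
    per agency to at least `rate` of its awards, (3) all projects at or above
    the commercialization threshold, (4) all awards of firms with <= 2 awards.
    Returns a dict mapping award id -> set of sample-frame codes, so that one
    response can be coded into several frames at once."""
    rng = random.Random(seed)
    frames = {}

    def tag(a, code):
        frames.setdefault(a["id"], set()).add(code)

    # (1) Yearly random sample, drawn year by year so recent award growth
    # does not dominate the total sample.
    by_year = {}
    for a in awards:
        by_year.setdefault(a["year"], []).append(a)
    for group in by_year.values():
        for a in rng.sample(group, max(1, round(rate * len(group)))):
            tag(a, "random")

    # (2) Agency top-up to the same rate.
    by_agency = {}
    for a in awards:
        by_agency.setdefault(a["agency"], []).append(a)
    for group in by_agency.values():
        need = max(1, round(rate * len(group)))
        already = [a for a in group if "random" in frames.get(a["id"], set())]
        pool = [a for a in group if a not in already]
        for a in rng.sample(pool, max(0, need - len(already))):
            tag(a, "agency_random")

    # (3) Census of top performers by commercialization outcome.
    for a in awards:
        if a["sales"] >= top_threshold:
            tag(a, "top_performer")

    # (4) Census of firms with only one or two awards.
    firm_counts = {}
    for a in awards:
        firm_counts[a["firm"]] = firm_counts.get(a["firm"], 0) + 1
    for a in awards:
        if firm_counts[a["firm"]] <= 2:
            tag(a, "small_firm_census")

    return frames
```

Because an award can carry several frame codes, a single response can later be analyzed as part of the program-wide random sample, its agency's sample, the top-performer census, or the small-firm census, exactly as the coding discussion above requires.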
Draft SBIR Phase II Survey Roadmap

[Figure: the draft survey roadmap diagram, listing roughly 30 question topics: current commercialization status and, if ended, why; sales to date, year and distribution of sales, federal systems, and trade names; expected sales and comparison with non-SBIR work; commercialization activities, marketing partnerships, and plans; award impacts on decisions, projected scope, projected delays, employees, and intellectual property; number of prior and related SBIRs and previous funding; additional funds, matching funds, their sources, and time to obtain them; and the Phase I-Phase II funding gap, project history, length of gap, other assistance, and other SBIRs.]
Starting date and coverage

Surveys administered in 2004 will cover SBIR awards through 2000. 1992 is a realistic starting date for coverage, allowing inclusion of the same projects as DoD for 1991 and 1992, and the same as SBA for 1991, 1992, and 1993. This would add to the longitudinal capacities of the study. Projects awarded before 1992 suffer from potentially irredeemable data loss: firms and PIs are no longer in place, and the data collected at the time were very limited.

Delivery modalities

Possible delivery modalities for the surveys include:

• Online
• By phone
• By mail
• In person (interviews or focus groups)

Clearly, there are many advantages to online surveys (such as cost, speed, and possibly response rates), and such surveys can now be created at minimal cost using third-party services. Response rates become clear fairly quickly and can rapidly indicate the follow-up needed with non-respondents. Clarification of inconsistent responses is also easier with online collection. Finally, online surveys allow dynamic branching of question sets, with some respondents answering selected sub-sets of questions but not others, depending on prior responses. There are also some potential advantages to traditional paper surveys. Paper surveys may be easier to circulate, allowing those responsible at a firm to answer the relevant parts of the questionnaire. Firms with multiple SBIR grants also often seek to exercise some quality control over their responses; after assigning surveys to different people, answers may be centrally reviewed for consistency. It may be appropriate to consider a phased approach to the survey work, with more expensive approaches (e.g., phone solicitation) supplementing email, specifically aiming to ensure appropriate coverage of the various groups outlined above.

Case study method

Case studies will be another central component of the study.
Second- and third-level benefits in particular will be addressed primarily through focused case studies, as will information about the procurement needs of Federal agencies.69 Research objectives addressed primarily through case studies may include:

• generating detailed data not accessible through surveys;
• pursuing lines of inquiry suggested by surveys;
• identifying anecdotes that illuminate more general findings.

Common threads in the case studies are expected to reveal some of the general characteristics of the program and may help the Committee to understand some of the data resulting from the surveys and agency databases. A common template or set of templates will be developed for the consistent collection of information; however, interviewers will be accorded sufficient freedom to develop the cases in the way that best suits each case and to collect additional data relevant to their current lines of inquiry and to agency-specific concerns. The templates will be mappable to the objectives of the study. Each case study template will be pre-tested. Case study questions can usefully focus on the firm in addition to the project. This allows a different perspective, addressing questions such as: Why did the firm participate? What type of firm was it? What were its business strategy and plans? When in the SBIR cycle did it seek strategic alliances, partnerships, or investment to commercialize, and why? How long did it generally take to produce sales from SBIR? What difficulties did it experience in commercializing SBIR-funded work? What impact did SBIR have on company formation and development? Additional questions will focus on the nature of the competitive landscape: Who are the customers and suppliers? How has the marketplace changed, and what value does the innovated product bring to the market?

69 See R. Yin, Case Study Research: Design and Methods, Thousand Oaks, CA: Sage, 1995.
Case selection criteria: who participates?

Case studies will be directed to company officers and individual research scientists, to appropriate individuals within the funding agency, and possibly to individuals in other agencies. The full range of selection criteria (e.g., agency, size of firm, multiple awards) will be relevant. It is not likely that a sufficient number of case studies can be conducted to generate statistically valid results for all relevant issues: not all "cells" in the research matrix will be fully populated. However, it may be possible to undertake a sufficient number of cases to generate statistically valid results for a limited set of questions. The interview data mentioned above can be used to supplement the case studies, or a small subset of case study questions could be put to prior interviewees.

Process characteristics: ensuring comparability across case study teams

It will be important to ensure that the case studies are at least minimally comparable, both in the information collected and in the reports generated. By developing an integrated case-study guide and data collection templates, the Committee can synthesize the information needed for the final report.70 Specific tasks to facilitate the case-study component of the study include the following:

• The Committee will develop a common case-study guide for use in the case study process. The guide will outline the case-study approach to be followed and provide a loosely structured framework for conducting and reporting the cases. It will provide a set of core questions to be used in all the case studies and will provide formatting and stylistic guidance for writing up the cases.
• The Committee will develop a data collection template with a core data section that applies to all the case studies, and a specific section for each set of case studies aimed at addressing a separate issue. (See Annex F.) The template will be exact with respect to the metrics to be collected.
The template will map to an Excel spreadsheet that will be used to facilitate working with the case-study data across cases.
• As the case studies relevant to the same agency will be conducted by multiple field researchers reporting to the Committee, attention will be given throughout the process to calibrating these individuals' interviewing styles and to taking into account any remaining differences before drawing conclusions from the case studies.

Use of counterfactual and control group studies

Determining "additionality" entails finding out whether a program made a difference that accounts for all or part of an observed change.71 As a "best practice" principle, additionality means that it is not sufficient to observe that an SBIR award was made and that the awardee later commercialized a new product. Rather, a goal of the study will be to determine whether the commercialization, its timing, or some other associated attribute of importance was likely caused by the SBIR award. Evaluation is directed at ruling out alternative, competing explanations of an observed change.72 Additionality tests are usually applied by contrasting the changes that occurred in a "program group" with what, hypothetically, its members would have done without the program, or, better, with what a comparable group that did not participate in the program actually did. In selecting comparison groups, it is important to ensure that they do not differ from the program group in important ways other than participation. Additionality tests can be strengthened by using statistical tools and econometric techniques to help rule out other causes. What program participants would have done differently without the program is usually ascertained through interviews or surveys, using what are called "counterfactual questions." Counterfactual questions, for example, have been used in a variety of ATP surveys.73 They have also been used in ATP case studies to help estimate project impacts.74 Use of a control group entails the comparison of a program group with a comparable group that did not participate in the program. Although identifying appropriate control groups will be challenging and can be controversial, the approach is worth considering. Good examples of the use of control groups in evaluation are available from ATP studies, where they have been used in conjunction with surveys and supporting econometric analysis.75

Use of other evaluation methods

Special studies may be required that use methods other than surveys and case studies, such as bibliometric or sociometric analysis. Such needs will be determined as the study progresses.

70 For an illustration of a large set of case studies written by different researchers and using a common data template to ensure consistent collection of data across projects for combination and analysis, see Advanced Technology Program, Performance of 50 Completed ATP Projects, Status Report Number 2, NIST SP 950-2, Gaithersburg, MD: National Institute of Standards and Technology, 2001.
71 As noted, the SBIR program has not been extensively researched, particularly in light of the program's size and 20-year history. Early evaluations of the SBIR program include Myers, Stern, and Rorke, 1983; Price Waterhouse, 1985; and the U.S. General Accounting Office, 1987, 1989, and 1992. One early assessment by Scott Wallsten of the subset of SBIR awardees that were publicly traded determined that SBIR grants do not contribute additional funding but instead replace firm-financed R&D spending "dollar for dollar." See S. J. Wallsten, "Rethinking the Small Business Innovation Research Program," in Branscomb and Keller, eds., Investing in Innovation, Cambridge, MA: MIT Press, 1998. While Wallsten's paper has the virtue of being one of the first attempts to assess the impact of SBIR, Josh Lerner questions whether employing a regression framework to assess the marginal impact of public funding on private research spending is the most appropriate tool for assessing public efforts to assist small high-technology firms. He points out that "it may well be rational for a firm not to increase its rate of spending, but rather to use the funds to prolong the time before it needs to seek additional capital." Lerner suggests that "to interpret such a short run reduction in other research spending as a negative signal is very problematic." See Lerner, "Public Venture Capital: Rationales and Evaluation," in The Small Business Innovation Research Program: Challenges and Opportunities, op. cit., p. 125. See also Lerner, "Angel financing and public policy: An overview," Journal of Banking and Finance, 22(6-8), pp. 773-784; and Lerner, "The government as venture capitalist: The long-run impact of the SBIR program," Journal of Business, 72(3), July, pp. 285-297. More broadly, recent research has shown evidence of additionality. For example, Saul Lach showed that government R&D subsidies in Israel induced additionality in R&D activity for small firms. See Saul Lach, "Do R&D subsidies stimulate or displace private R&D? Evidence from Israel," Journal of Industrial Economics, December 2002, pp. 369-390. Similarly, a study by Feldman and Kelley on the ATP program found that recipients of awards attracted additional funding, thus meeting the test of additionality, a phenomenon they describe as a "halo effect." See Maryann P. Feldman and Maryellen R. Kelley, "Leveraging Research and Development: Assessing the Impact of the U.S. Advanced Technology Program," Small Business Economics, 20(2), 2003. More generally, in a major review of the econometric evidence, David, Hall, and Toole found the evidence for "crowding out" of private capital to be at best problematic. See Paul David, Bronwyn Hall, and Andrew Toole, "Is public R&D a complement or substitute for private R&D? A review of the econometric evidence," Research Policy, 29(4-5), pp. 497-530, 2000. The broader point is that these analyses
underscore the challenge of assessing the impact of public support for private R&D and the need to address these challenges in a comprehensive fashion.
72 For a further discussion, see R. Ruegg and I. Feller, A Toolkit for Evaluating Public R&D Investments: Models, Methods, and Findings from ATP's First Decade, NIST GCR 02-842, Gaithersburg, MD: National Institute of Standards and Technology, May 2003.
73 See, for example, J. Powell and K. Lellock, Development, Commercialization, and Diffusion of Enabling Technologies: Progress Report, NISTIR 6491, Gaithersburg, MD: National Institute of Standards and Technology, April 2000.
74 See A. N. Link, Advanced Technology Program: Early Stage Impacts of the Printed Wiring Board Research Joint Venture, Assessed at Project End, NIST GCR 97-722, Gaithersburg, MD: National Institute of Standards and Technology, 1997; and Sheila A. Martin, Daniel L. Winfield, Anne E. Kenyon, John R. Farris, Mohan V. Baal, and Tayler H. Bingham, A Framework for Estimating the National Economic Benefits of ATP Funding of Medical Technologies, NIST GCR 97-737, Gaithersburg, MD: National Institute of Standards and Technology, 1998.
75 See, for example, Maryann Feldman and Maryellen Kelley, Winning an Award from the Advanced Technology Program: Pursuing R&D Strategies in the Public Interest and Benefiting from a Halo Effect, NISTIR 6577, Gaithersburg, MD: National Institute of Standards and Technology, 2001.
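The control-group logic discussed above can be illustrated in miniature. This is a deliberately naive sketch with invented outcome figures: it compares mean outcomes of a hypothetical program group and comparison group, which hints at, but by itself cannot establish, a program effect. A real additionality test would add matching, counterfactual survey evidence, and econometric controls to rule out selection effects (e.g., stronger firms winning awards):

```python
from statistics import mean

# Hypothetical outcome data, e.g., post-award commercialization revenue ($K),
# for program participants and a comparison group of non-participants.
participants = [120, 300, 0, 450, 80]
comparison   = [100, 150, 0, 200, 90]

# Naive difference in means between the program group and the comparison
# group; a positive value is consistent with, but does not prove, additionality.
effect = mean(participants) - mean(comparison)
print(round(effect, 1))  # 82.0
```

The entire difficulty described in the text lies in ensuring that the comparison group differs from the program group only in participation, so that a difference like this one can be attributed to the program rather than to selection.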