Annex I: Tasks to Further Develop and Implement the Methodology

Summarized below are a series of tasks to be undertaken in assessing the SBIR program.

Task 1: Collect and interpret information on the mission of each agency's SBIR program.

Initial research suggests that the missions of the five agencies differ along several dimensions and, further, that divisions or sub-groups within each agency have unique missions associated with the SBIR program. This first task of mission definition will be essential for designing survey questions and case studies and for interpreting evaluation results. In conjunction with this task, the Committee will utilize:

• printed information about the SBIR program, including program descriptions as well as intramural (e.g., agency) and extramural (e.g., academic, consulting, public agency) studies of the program and databases related to the program;
• face-to-face meetings with agency administrators and managers of the SBIR program to collect and clarify institutional information, with an eye to understanding the subtleties of the agencies' SBIR program missions and their modes of operation; and
• discussions and presentations from public symposia convened by the Committee.

Draft descriptions of the SBIR program will be prepared for each of the five agencies, using a common template. Based on each draft report, a summary matrix will be developed presenting similarities and differences in the SBIR program among the agencies.

Task 2: Collect and interpret information fundamental to a review[1] of (1) the value to the federal research agencies of SBIR-funded projects; and (2) the quality of research being conducted by small business.

Information associated with "value to" the Federal agency from SBIR projects will be drawn primarily from internal agency sources. Individuals directly associated with SBIR are in the best position to assess the relative worth of the program.
This will include managers at the agency and, in the case of DOD and NIH, at the sub-agency level. Non-SBIR officials with senior positions in the agency may also be interviewed for their view of the "importance" of SBIR to the agency. This information could be collected through face-to-face meetings and/or through survey instruments directed to appropriate individuals within each agency. If instruments are used, they will rely on a working definition of "value" developed with input from appropriate individuals in each agency. Information on value, collected through face-to-face meetings, an internal survey, or both, will be summarized in a draft description of the SBIR program. (See Task 1.)[2] "Value" may also be determined indirectly through other indicators; for example, the allocation of non-SBIR resources to support SBIR management functions may be a key indicator.

Information associated with the "quality of research" being conducted could be collected externally from non-agency sources using several methods. For the case of basic research, Arnold and Balázs (1998) argue that quality "should be assessed in terms of its potential usefulness to others."[3] SBIR funds early-stage development rather than basic research, but the standard of usefulness may still be applicable. Bibliometrics is a commonly used proxy to gauge the potential usefulness of research to others, and will be considered in this study. This method measures the number of peer-reviewed articles and citations to research articles. The relevant data can be gathered from external sources, such as the ISI Web of Science and similar resource bases.[4] Data on the number of patents directly associated with the research, applied for and awarded, and the citations to such patents are a complementary indicator. External awards for the significance of the research, such as the IR100 Awards, constitute another important measure of research quality.
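As a minimal sketch of the count-based bibliometric indicators discussed above, the following tallies articles, citations, and patent counts across a set of funded projects. All names and record data here are hypothetical illustrations, not figures from the study.

```python
# Illustrative sketch: aggregating simple bibliometric quality proxies
# (articles, citations, patents) for a set of SBIR-funded projects.
# All record data below are hypothetical.
from dataclasses import dataclass

@dataclass
class ResearchRecord:
    project_id: str
    peer_reviewed_articles: int
    article_citations: int
    patents_awarded: int
    patent_citations: int

def bibliometric_summary(records):
    """Tally the count-based indicators of potential usefulness to others."""
    articles = sum(r.peer_reviewed_articles for r in records)
    citations = sum(r.article_citations for r in records)
    return {
        "articles": articles,
        "citations": citations,
        "citations_per_article": citations / articles if articles else 0.0,
        "patents": sum(r.patents_awarded for r in records),
        "patent_citations": sum(r.patent_citations for r in records),
    }

records = [
    ResearchRecord("A-001", 3, 40, 1, 5),
    ResearchRecord("A-002", 1, 2, 0, 0),
]
print(bibliometric_summary(records))
```

In practice the record fields would be populated from external sources such as the ISI Web of Science rather than entered by hand.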
Sources internal to each organization conducting the research are also important to identify the scale and quality of research. Information on research activity can most effectively be collected through surveys sent to the funded organizations/agencies, and probably to program managers within those organizations.[5] Another measure of quality specific to SBIR research is the utility of outputs to the funding agency and/or to the market. Information about commercialization will also come from the surveys and case studies.[6][7]

[1] The Committee interprets "review" to mean a summary of facts. A review should not contain conclusions or recommendations.
[2] Based on the study's goal as expressed in Task 6, one implicit element of value relates to the ability of the agency to meet certain procurement needs through the SBIR program.
[3] See Arnold, E., and Balázs, K., "Methods in the Evaluation of Publicly Funded Basic Research," OECD Report, March 1998. Implicit in the Arnold and Balázs argument is a linear view of the innovation process, although each segment in the linear progression need not be within the same organization.
[4] Overviews of citation and bibliometric analyses are in Melkers, J., "Bibliometrics as a Tool for Analysis of R&D Inputs," in Evaluating R&D Impacts: Methods and Practices (edited by B. Bozeman and J. Melkers), Boston: Kluwer Academic Publishers, 1993. See also Narin, F., and Hamilton, K.S., "Bibliometric Performance Measures," Scientometrics, Summer 1996.

Task 3: Collect information and evaluate, using traditional metrics, the economic benefits of the SBIR program.

Griliches (1958) and Mansfield (1977) pioneered the application of fundamental economic insight to the measurement of private and social rates of return to innovative investments.[8] Streams of investment costs generate innovations and associated streams of economic benefits over time. Once identified and measured, these streams of costs and benefits are used to calculate such performance metrics as social rates of return and benefit-to-cost ratios. Thus, the evaluation question that can be answered from this traditional approach is: Given the investment costs and the social benefits, what is the social rate of return from the innovation?

The economic benefits achieved by the SBIR program can be evaluated using several methods, including survey and case study methods. Information collected in Task 1 and Task 2 will underpin the details of the approach. The evaluation literature and the evaluation experience of the Committee and of the expert consultants reporting to the Committee[9] suggest that the first-level net benefits will be quantified based on both retrospective and prospective survey data. The information collected in Task 1 and Task 2 should identify relevant first-level output measures such as sales, employment growth, new products and processes, leveraged R&D investments (including additional R&D investment dollars as well as the establishment of new research partnerships[10]), and enhanced access to capital markets.[11] The surveys will also include questions that address management issues.
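The traditional metrics named above can be sketched numerically. The following is an illustrative computation, in the spirit of the Griliches and Mansfield approach, of a benefit-to-cost ratio and an internal (social) rate of return from streams of costs and benefits; all cash flows and the 7 percent discount rate are hypothetical assumptions, not data from the study.

```python
# Illustrative sketch (hypothetical numbers): performance metrics from
# a stream of investment costs and a stream of social benefits.

def npv(rate, cash_flows):
    """Net present value of cash flows indexed by year (first entry = year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def internal_rate_of_return(cash_flows, lo=-0.99, hi=10.0):
    """Bisection search for the discount rate at which NPV crosses zero."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: two years of investment, then five years of benefits.
costs = [100, 50, 0, 0, 0, 0, 0]
benefits = [0, 0, 60, 80, 80, 60, 40]
net = [b - c for b, c in zip(benefits, costs)]

discount = 0.07  # assumed social discount rate
bc_ratio = npv(discount, benefits) / npv(discount, costs)
irr = internal_rate_of_return(net)
print(f"benefit-to-cost ratio: {bc_ratio:.2f}")
print(f"social rate of return: {irr:.1%}")
```

The evaluation question posed above maps directly onto `irr`: given the cost and benefit streams, the computed rate is the social rate of return from the innovation, while `bc_ratio` summarizes the same streams at a fixed discount rate.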
Second-level beneficiaries from the SBIR program include the agency that funded the project under evaluation. Third-level beneficiaries are the public- and private-sector consumers of the commercialized innovation developed by the award recipient. Both the evaluation literature and the evaluation experience of the Committee and others suggest that second- and third-level benefit data, quantitative and qualitative, can be collected through focused case studies.

Task 3 relates to the second objective of this study. As noted above, part of the Congressional charge to the NRC is to compare the findings from Task 3 to evaluations of similar Federal research and development expenditures. Several Committee members and contract researchers have experience in evaluating Federal research and development programs. At the completion of Task 3, this expertise and experience will be applied to the task of assessing and evaluating the SBIR research results.

[5] For an example of an analysis of NASA SBIR program managers' qualitative information, see Archibald, R.B., and Finifter, D.H., "Evaluating the NASA Small Business Innovation Research Program: Preliminary Evidence of a Trade-off Between Commercialization and Basic Research," Research Policy, April 2003.
[6] Information about commercialization will also be collected from funded company officers and individual research scientists in a later task.
[7] The Committee interprets "evaluation" to be a broader analysis than would be undertaken in an "impact assessment." An impact assessment focuses on the impact (e.g., measured in terms of rates of return or benefit-to-cost comparisons) of the funded research on the agency's stakeholders (e.g., small businesses). An evaluation includes an impact assessment as well as an examination of the portfolio of research vis-à-vis the objectives of the funding agency and an examination of how well the agency's funding programs are being managed. See Link, A.N., Economic Impact Assessment: Guidelines for Conducting and Interpreting Assessment Studies, Planning Report 96-1, National Institute of Standards and Technology, May 1996, for the application of these terms within the National Institute of Standards and Technology (NIST). See Georghiou, L., Dale, A., and Cameron, H., Special Issue of Research Evaluation on National Systems for Evaluation of R&D in the European Union, April, for an application of these terms within the European Union. As such, preliminary discussions with agencies suggest that a review of commercialization after the award would be useful to them for management purposes. The team anticipates viewing commercialization as an output of research; it would thus logically become a part of the evaluation effort in this task.
[8] See Griliches, Z., "Research Costs and Social Returns: Hybrid Corn and Related Innovations," Journal of Political Economy, 1958. See also Mansfield, E., Rapoport, J., Romeo, A., Wagner, S., and Beardsley, G., "Social and Private Rates of Return from Industrial Innovations," Quarterly Journal of Economics, 1977.
[9] Some of the team members were involved in the evaluation of the Department of Defense's Fast Track program. See National Research Council, SBIR: An Assessment of the Department of Defense Fast Track Initiative, 2000, op. cit.
[10] See Hagedoorn, J., Link, A.N., and Vonortas, N.S., "Research Partnerships," Research Policy, April 2000, for a review of the theoretical and empirical literature related to research partnerships and R&D efficiency.
[11] The Advanced Technology Program (ATP) within the National Institute of Standards and Technology (NIST) has a long and successful history of collecting such output measures through surveys to proxy first-level social benefits. See Ruegg, R.T., "The Advanced Technology Program, Its Evaluation Plan, and Progress in Implementation," Journal of Technology Transfer, November 1997. See also Ruegg, R.T., and Feller, I., "A Toolkit for Evaluating Public R&D Investments: Models, Methods, and Findings from ATP's First Decade," NIST GCR 02-842, National Institute of Standards and Technology, May 2003. Finally, see the research papers contained in National Research Council, The Advanced Technology Program: Assessing Outcomes, C. Wessner (ed.), Washington, D.C.: National Academy Press, 2001.
Task 4: Collect and interpret information relevant to an evaluation of the non-economic benefits of the SBIR program.

The Committee will explore how best to gauge the potential non-economic benefits of the SBIR program. Non-economic benefits include the impact of SBIR on small business growth and development, knowledge effects, environmental benefits, and public safety. These factors are related to cluster phenomena; links among SBIR firms, universities, government laboratories, and large firms; and the availability of highly qualified workers.

Task 5: Collect and interpret information on Federal research and development funds to small businesses (between fiscal year 1983 and fiscal year 2000).

Trend analysis is the appropriate methodology if "federal research and development funds to small businesses" is interpreted as support to small businesses through the SBIR program only. In such an analysis, two other factors must be controlled for: political factors associated with the supply of such funds and demand factors associated with changes in the extent of technological competition. A more comparative framework will be necessary if a broader definition, which includes other non-SBIR agency funding for small business R&D, is adopted.

Task 6: Collect and interpret information on the extent to which SBIR Phase II awards fulfill the procurement needs of Federal agencies.

Here, the Committee seeks to develop, particularly through case studies, knowledge about how and why Federal agencies procure technology and how they use such technology. Implicit in Task 6 is the charge to understand the frontier associated with the effective use of SBIR Phase II technology, to understand how close Federal agencies are to that frontier, and to determine what factors are associated with such positioning.
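The starting point of the Task 5 trend analysis, before political and demand-side controls are added, is a simple fit of annual funding against time. The sketch below fits an ordinary least-squares trend line; the funding figures are hypothetical placeholders, not actual SBIR award totals, and a fuller model would add the control variables described above as additional regressors.

```python
# Illustrative sketch (hypothetical figures): a linear trend fit to annual
# SBIR funding, the trend-only baseline for the Task 5 analysis.

def linear_trend(years, values):
    """Ordinary least-squares slope and intercept for value ~ year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical SBIR awards to small businesses, $ millions, FY1983-FY1988.
years = [1983, 1984, 1985, 1986, 1987, 1988]
funding = [45, 109, 199, 298, 351, 389]
slope, intercept = linear_trend(years, funding)
print(f"average annual growth: ${slope:.1f}M per year")
```

Under a broader definition of "funds to small businesses," the same fit would be run per funding channel and the trends compared, which is the comparative framework Task 5 anticipates.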