Panel III: Research Perspectives on the ATP

Introduction

Richard Nelson
Columbia University

Dr. Nelson introduced the panel with a number of observations that he hoped would frame the discussion on research perspectives on the Advanced Technology Program (ATP). Dr. Nelson said that he found Dr. Hill's earlier remarks on the history of the ATP to be fascinating and consistent with his understanding of the ATP's origins. One thing that the ATP's history demonstrates is that the program did not enjoy widespread support when its authorizing legislation was passed. The question of the program's objectives and instruments was left "quite vague and loose." Dr. Nelson said that those who were charged with implementing the program had "to make a silk purse out of a sow's ear." The ATP has been blessed by having a number of intelligent and dedicated people working to create a first-rate program. This has been challenging, given the broad mandate of the ATP and the constraints imposed upon it since its inception. From the beginning, the ATP has had to struggle with two broad questions:

- What has the program tried to achieve? As Dr. Hill's remarks demonstrated, many different actors have had different perspectives on what the program was trying to do.
- Given vaguely defined objectives, what procedures should be implemented to award grants?
Both questions should be kept in mind when thinking about how best to assess the ATP. As we look at how the ATP addresses these and other issues, it is important to view the ATP's actions as "compared to what?" We may want to view the ATP as one of a number of programs that might loosely comprise a national technology policy. Then we may ask whether the ATP is an important part of a technology policy or whether there are other objectives in the technology arena that the ATP may not address effectively. For example, Bill Spencer and others have written about the decline of large corporate industrial research laboratories, particularly in electronics. Is the ATP a vehicle to address that problem? If so, is the ATP better than an alternative approach?

Dr. Nelson also noted that several speakers had mentioned that the ATP has changed over the years. He hoped to hear more in this panel's discussion about how the ATP has changed. The ATP is a program with boundary conditions set on what it can do, but a lot of room to maneuver within those boundary conditions. Comparing the ATP to SEMATECH, Dr. Nelson recalled that the SEMATECH consortium began its life with one set of objectives and design and changed dramatically over the years; eventually, it found a niche different from the initial intent. With respect to the ATP, Dr. Nelson said that it should be evaluated with an aim toward refining and fine-tuning the program so that it can adapt appropriately to changing circumstances. Commenting on Dr. Powell's remarks, he noted that she said that the ATP has, over time, placed a greater emphasis on creating spillovers through ATP grants as opposed to fostering commercialization. Dr. Nelson said that this seemed to be a plausible shift. However, Mr. Newall's statements in the prior session caused Dr. Nelson to pause, because Mr. Newall's comments suggested that ATP grants were oriented to company-specific benefits.
This indicates a "tension and schizophrenia" in the program that has been an ongoing struggle for the ATP. Dr. Nelson hoped that today's panel could address this last issue, among many others.

Assessment of the ATP

Rosalie Ruegg
National Institute of Standards and Technology

An Early Start at Program Assessment

As the National Institute of Standards and Technology's (NIST's) Director Ray Kammer mentioned, the ATP initiated evaluation from the beginning of the program and well before the passage of the Government Performance and Results Act (GPRA). With NIST's long history of measurement, it has been a good home in which to develop performance metrics for the ATP. Not surprisingly, the physical scientists in the NIST laboratories sometimes look a bit askance at social scientists as they see the many assumptions that must be made in estimating effects within complex economic systems. The projections that go hand in hand with estimating time-dependent effects, and the rounding to millions of dollars, seem strange to those who measure physical phenomena to the fifteenth decimal place and beyond. However, both share a passionate interest in measurement, and NIST is an excellent place for measurement of all kinds.

A System of Continuous Improvement

An evaluation effort for the program was put into place for two reasons: first, as a management tool, to meet program goals and to improve program effectiveness; and, second, to meet the many external requests for ATP program results, requests that seemed to arrive fifteen minutes after the program was started. Still, this early start in assessment was a help to us later in meeting the GPRA requirements, as well as in preparing an initial report to Congress on progress.[1]

Evaluation is most potent when it is integrated into program management. To maximize effectiveness, we believe that program management must have four elements:

- Design
- Implementation
- Assessment
- Learning and feedback

It is important to note the sequence that begins with program design and is followed by implementation, assessment, lessons learned, and feedback. However, this is not simply a linear sequence but a cycle, in which assessment, lessons learned, and feedback are reflected in appropriate modifications to program design, implementation, and so on. Our goal is to have program evaluation fully integrated into the program's dynamic structure and contributing to continuous program improvement. This goal is being realized. Not surprisingly, this achievement has not happened overnight. First, we had to develop the evaluation program and put it into practice. Then we had to track projects, compile data, perform analyses, and begin deriving lessons from the early results. Now we are gaining insights into which aspects of the program are working and which are not, and this information is providing a basis for program modifications to improve effectiveness. As an early example, we found that some of the joint ventures that were announced as award recipients never actually got off the ground, principally because the members were unable to reach an agreement among themselves on the terms of their collaboration. We learned that joint venture formation was more difficult than we had expected at the outset of the program.

[1] National Institute of Standards and Technology, The Advanced Technology Program: A Progress Report on the Impacts of an Industry-Government Technology Partnership, Washington, DC: U.S. Government Printing Office, 1996.
OCR for page 73
Termination and Alliance Networks

As of the beginning of March 1999, roughly 6 percent of the 431 projects that had been announced were stopped prior to completion, and 21 percent of those "terminated" projects were joint ventures. One action that the ATP took to reduce the problems encountered by joint venture applicants was to establish a Web-based "alliance network" to provide a "best-practices" tutorial for companies thinking of applying to the ATP as a joint venture. This is a bulletin board that companies can use to help locate possible partners, and it includes a discussion forum for exchanging ideas about problems and solutions.

As another example, we are in the process of examining the success of award recipients in commercializing their technologies as a function of the way that projects are structured. This should provide useful insights into project selection decisions.

Measure Against Mission

There are some basic principles to follow in setting up an evaluation program. One basic principle is to measure against the mission. We examined our statute for the essential mission and goals against which to measure the program's success. Congress directed the ATP to assist U.S. businesses "in creating and applying the generic technology and research results necessary to—commercialize ... rapidly...." The statute requires that the ATP not fund projects that would be conducted in the same time period without the ATP. We use the terms "cycle-time reduction" and "acceleration" to capture the ATP's impact on the timing of research and development and the subsequent commercialization of technologies developed in the funded projects. We have found that the ATP addresses two types of delay: the difficulty in starting a project and the pace with which the project is performed. The statute specifically calls for the refinement of manufacturing processes. The ATP funds projects across a wide range of technology areas, including substantial attention to both process technologies and technologies that underpin new and improved goods and services. The statute also emphasizes collaborative activities. It highlights the role of small businesses. It states that the ATP is to focus on improving the competitive position of U.S. businesses. And the statute indicates throughout that the technologies funded by the ATP are to have the "potential for eventual substantial widespread commercial application." In paraphrased form, the statute calls for:

- Creation and application of high-risk, generic technology
- Acceleration of R&D and commercialization
- Refinement of manufacturing processes
- Collaborative activities
- Improved competitiveness of U.S. businesses
- Widespread applications and broad-based benefits
A Logical Framework for Evaluation

Another basic principle in setting up an evaluation program is to link in a systematic way the program's activities to its mission; the outputs to the activities; and the shorter- and longer-run outcomes to the outputs. In the parlance of program evaluation, this is sometimes called developing an "evaluation logic model," which links the ATP's mission to activities, outputs, intermediate outcomes, and final outcomes. Examples of ATP activities include holding competitions in which businesses submit technology development proposals and making awards to applicants. Examples of ATP outputs are increased R&D spending and technical goals accomplished. Examples of ATP intermediate outcomes are knowledge dissemination through patents and papers, company growth, licensing agreements, and early sales of new products. Examples of final ATP outcomes are productivity improvements, gains in international market share, employment gains, increases in gross domestic product, and improvements in living standards and quality of life for the taxpayer. These examples illustrate the linkages between mission, activities, outputs, intermediate outcomes, and long-term outcomes.

Increasing Spillover Benefits over Time

Time is an obvious issue in measuring impact. It is not a simple matter of "R&D dollars in and economic impact immediately out." Research and development both take time; commercialization of goods and services based on the technology platforms developed in ATP projects takes more time; and widespread technology dissemination can take a very long time. It is essential to understand the long-term nature of ATP investments. Figure 1 illustrates conceptually the time entailed for ATP projects to be performed and to have impact.
Time in years is measured along the horizontal axis, starting with the announcement of an ATP competition for proposals, progressing to the announcement of awards, then indicating completion of projects in two to five years (on the average between three and four years), followed by the post-project period. Economic impact is measured conceptually on the vertical axis. The lower curve illustrates slowly rising benefits to awardees. The upper curve illustrates increasing total economic benefits to the nation over time. The difference between the two curves indicates spillover benefits that extend beyond the ATP award recipients, as others benefit from the new technologies. Spillovers may include market spillovers, knowledge spillovers, and network spillovers. The kinds of effects that may be expected for a successful project in each of the time periods are listed in the shaded columns. Of course, the exact timing depends a great deal on the technology and the industrial sector in which it is applied.
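The relationship the two curves describe, spillover benefits as the gap between total national benefits and benefits to awardees, can be sketched numerically. The curve shapes and dollar figures below are invented for illustration; only the relationship spillover(t) = total(t) - awardee(t) comes from the text, with benefits beginning after a roughly four-year project period:

```python
# Illustrative sketch of the two benefit curves described in the text.
# All numbers are hypothetical; they are not ATP data.

def awardee_benefits(year):
    """Lower curve: slowly rising benefits to award recipients ($M),
    beginning after an assumed four-year project period."""
    return 0.0 if year < 4 else 2.0 * (year - 4)

def total_benefits(year):
    """Upper curve: total national benefits ($M), which grow faster
    as others adopt the new technologies."""
    return 0.0 if year < 4 else 5.0 * (year - 4)

def spillover_benefits(year):
    """The gap between the curves: benefits beyond the recipients
    (market, knowledge, and network spillovers)."""
    return total_benefits(year) - awardee_benefits(year)

for year in (4, 6, 10):
    print(year, awardee_benefits(year), spillover_benefits(year))
```

In this sketch the spillover share widens with time, matching the description of widespread dissemination taking longest to unfold.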
Figure 1: Conceptual Timeline for ATP's Expected Impacts.

There is certainly no "one-size-fits-all" illustration. This conceptual piece is intended only to give an idea of how time figures into the unfolding of events surrounding ATP projects. From an evaluation perspective, the timeline means that, in order to meet pressing requirements for evaluation, we have had to make use of "indicators" of progress, as well as projections in estimating long-term impacts. With the passage of more time, retrospective estimates of project benefits based on a long view back will become more feasible.

Two Paths to Long-Run Economic Benefits

The ATP's success is tracked along two principal paths, one that can be described as a direct marketplace path, and the other as a more indirect path of knowledge and institutional effects. Of course, both paths are conditional on the technical success of the projects funded, which means that technical progress against the project goals is also important.

Looking at the direct marketplace path of success, we ask whether the technology developed in an ATP project is being commercialized in the post-project period in one or more applications by the ATP award recipients or their direct collaborators. (Note that in this context "commercialization" means the use in production of process technology as well as the sale of goods and services.) We also look at how users of the resulting products and services are affected. This is the ATP's principal path for accelerating commercialization of the technology as called for by its mission.

Looking at the indirect path, which actually may be an array of indirect paths, we investigate whether the knowledge and institutional effects created by the project may be influential outside the boundaries of the ATP-funded project, eventually translating into measurable marketplace effects. These indirect effects may be as important as, if not eventually more important than, the direct effects; but they typically occur at a slower pace than the direct effects, tend to be serendipitous rather than intended, and provide less opportunity for deliberate acceleration of national economic benefits. The direct and indirect effects can be characterized as follows:

- Productivity gains
- New business opportunities
- Employment benefits
- Higher standard of living
- Health, safety, and quality of life gains

Both paths may lead to substantial spillover effects. Market spillovers in the form of consumer surplus benefits tend to be dominant on the direct path, and knowledge spillovers and network spillovers tend to be dominant on the indirect path. However, the interplay of the various spillover effects is complex.
For example, reverse engineering of products and processes produced on the direct path can generate knowledge spillovers, and competitive effects from knowledge spillovers may further increase market spillovers. The ATP's evaluation program places importance on measuring the spillover effects: benefits (and costs) not captured by (or incurred by) the innovator/investor. Spillover measurement is quite challenging for evaluators, and the ATP seeks through its evaluation program to advance the state of the art in spillover measurement.

The ATP Aims to Select Projects with High Spillover Potential

The ATP not only focuses on spillover effects in its evaluation program, it also aims to select technologies that are particularly rich in spillover potential. These include pathbreaking technologies (e.g., Tissue Engineering's marriage of textile weaving techniques with biological materials); infrastructural technologies (e.g., printed-wiring-board technologies); and multiuse technologies (e.g., ABC's dimensional control technologies and Extrude Hone's flow-control technology). With more evaluation experience, we hope to improve further our ability to select through peer review those projects with higher-than-average spillover potential.

Better Tools for Assessing Technology Impacts

We have found that evaluating a complex program such as the ATP requires all of the evaluation tools in the tool kit, and then some, to address the many questions raised by ATP management, industry, Congress, and others. Figure 2 summarizes the main approaches that we are taking.

Figure 2: Multiple Approaches to Evaluation.
Our first emphasis was on being able to answer all of the who, what, where, and when questions about the projects we had funded. Of course, we also needed immediately to provide for real-time monitoring of the research, through project teams and company technical reports. To analyze and report on progress toward meeting longer-run goals, we have found particularly helpful the periodic surveys that have been conducted and our internal "Business Reporting System," which has collected data for all projects from practically all participating organizations since 1992. The just-released report on completed projects is Volume I in an ongoing series that will report accomplishments several years after project completion for each and every project.[2] We see a substantial demand for this type of report. Being developed in conjunction with the "status reports," as we have dubbed them, is another database that we are using in an experimental way to test the relationships between project characteristics and early post-project accomplishments.

In-depth case studies have proven invaluable in understanding complex projects and in quantitatively documenting impacts, but these studies are too resource intensive and time-consuming to conduct for every project. We have completed about ten thus far and have several additional in-depth case studies currently under way. In a few cases it has been possible to bridge from microeconomic estimates to macroeconomic projections of national impact. The in-depth case studies typically estimate returns to the direct award recipients (private returns), returns to the nation (social returns, which include private returns and spillover effects), and returns specifically on the ATP's investment (what we call "public returns"). These studies have looked more extensively at the "additionality" question, comparing the "with ATP" outcome with a counterfactual "without ATP" outcome to estimate the difference.
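The return measures and the additionality comparison can be made concrete with a toy calculation. The dollar figures below are invented; only the definitions follow the text: social returns are private returns plus spillovers, and additionality is the difference between the with-ATP outcome and the counterfactual without-ATP outcome.

```python
# Toy additionality calculation for a single hypothetical project.
# All dollar figures ($M) are invented for illustration.

with_atp = {
    "private_benefits": 30.0,    # returns to the award recipients
    "spillover_benefits": 70.0,  # benefits to others
}
without_atp = {                  # counterfactual: no ATP award
    "private_benefits": 10.0,
    "spillover_benefits": 15.0,
}
atp_cost = 5.0                   # hypothetical ATP investment

def social_benefits(outcome):
    """Social returns include private returns plus spillover effects."""
    return outcome["private_benefits"] + outcome["spillover_benefits"]

# Additionality: the share of social benefits attributable to the ATP.
attributable = social_benefits(with_atp) - social_benefits(without_atp)

# Net benefit attributable to the ATP's own investment.
net_public_benefit = attributable - atp_cost

print(attributable, net_public_benefit)  # prints: 75.0 70.0
```

The same subtraction generalizes to a portfolio: summing attributable benefits across projects, successes and failures alike, gives the portfolio-level test of whether the ATP made a difference.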
Along the way, we saw both the need for and the opportunity to improve the tools: the models, methodologies, and databases for assessing the program. We have commissioned others to develop, test, and apply new evaluation models and methodologies, and to compile databases. As I mentioned previously, the Economic Assessment Office within the ATP also has compiled databases to support evaluation research. We expect these tools to be of general benefit to researchers working in the field of technology assessment. In addition, we have needed to address what I will call "special-issue questions," such as:

- How are small businesses faring in the ATP?
- What has been the role of universities, and what is the effect of their participation in ATP projects?
- What are the similarities and differences of ATP-funded joint ventures versus those funded outside the program?
- How does receiving an ATP award affect the ability of companies to attract other sources of funding?
- How can effective control-group studies be formulated?

[2] Long, op. cit.
We also have identified and followed a number of counterpart programs abroad. Most of the world's industrialized nations have ATP-like programs. We identify program features of particular interest and compare them with the ATP. We also are interested in the evaluation efforts of these counterpart programs. Last summer, we compared notes at an international conference on the economic evaluation of technological change, sponsored by the ATP in cooperation with the National Bureau of Economic Research (NBER).

In today's symposium, there is insufficient time to address all of the ATP-sponsored assessment studies and results. Today you will see several examples of ongoing work, such as the Zucker-Darby research in the next presentation, which is of particular interest because of its development of control groups. In later panels, you will see several examples of completed research, such as the Lerner-Gompers research and the Vonortas work, which also used a control group in comparing ATP-funded joint ventures to other joint ventures. Because of the limited time, I will merely note that completed ATP evaluation studies are listed in a Bibliography of Studies by Staff and Contractors of the Economic Assessment Office, which is available at our Web site.[3] Much has been done toward evaluating the ATP; much remains to be done.

[3] See the ATP bibliography at http://www.atp.nist.gov/eao/folio.htm

Substantial Involvement of Outside Experts

In formulating our evaluation program and identifying the important questions to address, we have had the advantage of advice from leading experts in the field. In the up-front evaluation that takes place in real time during the selection of projects for funding, extensive use is made of outside reviewers. Although we usually do not think of this as part of evaluation, peer review is, in fact, the most common method used to assess research projects. We also have had valuable input from the NBER. Professor Zvi Griliches, well known and highly regarded in the field of evaluating technological change, has co-chaired a series of evaluation planning workshops for the ATP, both at NIST and at NBER, with the first taking place in 1994. These workshops have been well attended by other experts in the field. Professor Edwin Mansfield, another leading figure in the field of evaluation, worked with us prior to his death. Professor Adam Jaffe has played a major role in instructing the ATP on the issue of spillovers, as well as in reviewing evaluation studies in his role as coordinator of the NBER research on the ATP. Important contributions to the ATP's evaluation have been made by many other academics, private consultants, and nonprofit organizations.

In addition, we are developing closer interfaces with other funding sources, especially the venture capital community. The ATP has very different criteria for selecting projects than venture capitalists have, and the two fit together in the
R&D funding landscape in a complementary, rather than competitive, manner. Venture capitalists and "investment angels" are often important sources of funding, particularly for ATP awardees that need to raise substantial amounts of outside capital to complete research and commercialize results. Many of the ATP's small-company award recipients are eager to present their newly acquired business opportunities to private sector funders toward the end of their ATP projects. With the objective of strengthening the future odds of success of the technologies it has funded, the ATP occasionally hosts events that bring together ATP awardees with potential investors and partners in commercialization. The John F. Kennedy School of Government at Harvard University is leading an effort for us to investigate and compare the decision processes of the ATP, of large, medium, and small businesses, and of venture capitalists, toward the goal of ensuring that the ATP avoids displacing private capital.

Three Tests for the ATP's Success

Ultimately, there are three tests for the ATP's success:

1. Although some projects will fail and some will deliver less than anticipated, overall, the portfolio of projects must yield large net social benefits, that is, large benefits to the nation in excess of all costs. It is critical to take a portfolio approach in evaluating the success of the ATP. The program must be allowed to have some individual projects fail if it is to undertake the high-risk projects that, by their very nature, will sometimes fail. The ATP should be judged on its overall effect, not on the success or failure of individual projects.

2. The ATP must make a difference. That is, it is not enough that the portfolio of projects yield large net benefits; a sizable share of these net benefits must be attributable to the ATP.
3. Net benefits to the nation must be much greater than the sum of private returns to the awardee innovators; that is, there must be large spillover benefits to others.

A subset of ATP projects is now complete, and the first 38 have been subjected to analysis in the post-project period. According to the researcher, the projected benefits of just several of these early projects are sufficient to more than compensate for all of the costs of the ATP so far, and the part of the projected benefits attributable directly to the ATP is also sufficient to meet this test.[4] Not surprisingly, some of this first group of projects are performing better than others. As you will see from Volume I of the status report, we are documenting what is working and what is not.[5] And, as I mentioned at the beginning of my remarks, we are drawing on lessons gained along the way to improve the program.

Lastly, let me just say how much we welcome this effort by the Academy to draw on the ATP's evaluation program as it undertakes a broader assessment of government-industry partnerships.

[4] Long, op. cit.
[5] Ibid.

Performance Measures as Indicators of ATP Effects on Long-term Business Success

Lynn Zucker
Michael Darby
University of California at Los Angeles

Dr. Darby said that the research he would talk about today might well qualify as generic precompetitive enabling research, in that an objective of the Zucker-Darby research is to build a database that would enable many researchers to analyze the ATP. The research is collaborative because they are working with members of NBER as well as with Maryellen Kelley and Andrew Wang at the ATP.

Research Questions

In their study, Zucker and Darby have begun to gather archival data about ATP firms and comparable firms not in the ATP so that something can be said about how and whether the ATP makes a difference. They are concentrating on program awardees before, during, and after their participation in the ATP. Three basic comparisons are being sought:

- Within the ATP, what are the differences between joint ventures and single applicants?
- What are the differences between ATP-funded firms and non-ATP-funded firms?
- What are the differences between ATP joint ventures and joint ventures that do not receive ATP funds?

The research will explore firms within broad science areas and broad technology areas. The time horizon for the data set is ATP awardees from 1990 to 1998. At present, the data set covers principal ATP awardees only; it does not include subcontractor ATP firms, although it is hoped that they will be included in the future. Overall, the data set at present includes 628 unique organizations (i.e., firms, universities, and national laboratories).
A key outcome measure for the organizations to be studied will be the number of patents filed. For small businesses in the data set, Zucker and Darby will use as an outcome measure whether the business made an initial public offering. Whether small firms were able to attract venture capital is another outcome measure. This would help to test the "halo effect" hypothesis with respect to the ATP, that is, whether receiving an ATP award increases the probability of obtaining venture capital.

Preliminary Results

Dr. Zucker then presented the preliminary results of their analysis, emphasizing that the results were preliminary and subject to change as the research progresses. In this preliminary phase, Zucker and Darby focused on two years of data, 1991 and 1992, comprising 110 organizations (principals only, not subcontractors). The analysis looks at the total number of patents issued in the two years before an ATP award and the total number of patents issued in the 1,095 days after the start of the ATP award. Dr. Zucker noted that changing the timeframe does not change the results. In describing the data set that she and Dr. Darby are assembling, Dr. Zucker said that, not surprisingly, the large businesses tend to have filed a number of patents in the evaluation period, whereas small companies did not. For small businesses, there may be a lag problem; a number of such businesses were founded shortly before receiving ATP awards, and it is unlikely that they would generate patents within the first few years. For medium-sized businesses, more have patents than do not, whereas about half of the nonprofits have patents. During the period studied, universities tended to have patents, which is not a surprise in light of the incentives provided by the Bayh-Dole Act.
In looking at pre- and post-ATP patent activity, although large companies account for a large amount of patent activity, they do not account for an extraordinarily large share of the difference across the two time periods. In a sample of companies and organizations that excludes large businesses, there is a small increase in patenting activity among small businesses in the post-award period and a larger increase among medium-sized businesses. For nonprofits, there is a small decrease, although the sample is small, and universities show a substantial increase in patenting activity post-ATP. The Zucker-Darby analysis also examines pre- and post-ATP patenting activity by project type. In the data set, there are far more joint ventures than single-company ATP projects. For joint ventures, the post-ATP period shows an increase in patenting activity, but for single-company ATP awardees, there is no increase. In summary, Dr. Zucker reiterated to the audience that the results presented are preliminary and subject to revision as the project unfolds.
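The pre/post comparison described above can be sketched as a simple tabulation. The counts below are invented placeholders, not the study's data; only the structure follows the description: patents in a fixed window before the award versus a fixed window (1,095 days) after its start, grouped by organization type.

```python
# Sketch of a pre/post patent tabulation by organization type.
# Counts are invented placeholders, not Zucker-Darby data.
from collections import defaultdict

# (organization type, patents in 2 years pre-award,
#  patents in 1,095 days post-award-start)
orgs = [
    ("large", 40, 46),
    ("medium", 6, 11),
    ("small", 0, 1),
    ("nonprofit", 3, 2),
    ("university", 5, 9),
]

pre = defaultdict(int)
post = defaultdict(int)
for org_type, n_pre, n_post in orgs:
    pre[org_type] += n_pre
    post[org_type] += n_post

for org_type in pre:
    change = post[org_type] - pre[org_type]
    print(f"{org_type}: {pre[org_type]} -> {post[org_type]} ({change:+d})")
```

With real data, each tuple would be one awardee organization, and the grouped differences would reproduce the kind of pre/post comparison Dr. Zucker summarized.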
Discussant

J. C. Spender
New York Institute of Technology

Dr. Spender remarked that some may look at academic research on the ATP and ask whether it is politically viable; others may dismiss the research as not altogether meaningful. For Dr. Spender, the debate over the ATP and academic research on it involves what he calls "interpretive flexibility," in which a variety of different actors have a variety of different views on the program. He recalled the comments of Dr. Nelson and Dr. Spencer characterizing the ATP as an experiment in technology policy. Although true, Dr. Spender said, it is equally true that the ATP is an experiment in evaluation research. Evaluation is increasingly important in all public programs as people demand more accountability for expenditures; it will not suffice to say that there should be no public expenditures in areas such as technology development. The question is how to spend public funds properly.

A Multidisciplinary Evaluation Approach

The evaluation of technology programs is an area of profound technical complexity because of the multidisciplinary nature of the task. Because of this complexity, the conduct of evaluation and its interpretation should not be left solely to economists. Sociologists, anthropologists, political theorists, historians of technology, and others should be drawn into the assessment of technology programs. It is a rich area of academic inquiry, and its importance grows as the need to understand technology programs and their consequences grows.

Dr. Spender said that the ATP invites three classes of discussion: outcomes, which Dr. Ruegg addressed; efficiency of operations, which Dr. Powell addressed; and origins of the program, which Dr. Hill addressed. With respect to origins, Dr. Spender said that additional research into the program's history and its political context would be useful.
Most people in the audience were acutely aware of the varying degrees of political hostility that the ATP has faced. This political context places operational constraints on the program, which in turn can feed back into the political environment and affect the program's long-term viability. Another avenue of historical research is comparing the ATP to similar programs overseas. Addressing program outcomes, Dr. Spender said that there may be a "low-hanging fruit" phenomenon in thinking about this issue. In the evaluation arena, researchers have a number of tools readily available to evaluate the ATP, but by its very nature, the ATP challenges researchers to develop new evaluation tools.
Adding to the challenge is that researchers know that developing new evaluation tools will have profound political consequences. Dr. Spender encouraged researchers to reach for "higher-hanging fruit" in searching for better ways to understand, for example, the theory of spillovers and how private goods, such as industry R&D, are transformed into public goods. He also said that a better understanding of "knowledge flows" is necessary, that is, of how knowledge is generated and moves through organizations; this may enhance the understanding of spillovers. Dr. Spender said that "multidimensionality" is an important concept for ATP evaluators to keep in mind: purely unidimensional analyses, whether anthropological, economic, or political, miss the challenge of properly evaluating the ATP. It is better to develop tools that embrace this complexity and therefore capture the full range of a project's outcomes.

Capturing the ATP's Complexity

Drawing on his own evaluation experience with the Auto Body Consortium and the Printed-Wiring-Board project, Dr. Spender said that capturing a project's complexity is key. Treating the projects as "self-organizing systems" helped him gain a greater understanding of them. In his view, the two ATP projects he explored were "nested sets of semi-self-organizing systems": the project may be regarded as a self-organizing system, or the firm may be. Alternatively, the proper perspective may be the ATP within an industry sector, with the sector regarded as the self-organizing system. Taking the metaphor further, an ATP grant can alter the initial conditions of an industry's trajectory, and even a small change in initial conditions may have large consequences as a sector self-organizes. In concluding, Dr. Spender encouraged the development of better evaluation tools for the ATP and again emphasized the need for a multidisciplinary approach to evaluation.

Questions from the Audience

Following up on Dr. Zucker's presentation, Dr. Flamm asked why patents would be the appropriate measure of ATP outcomes. He noted that one could argue that increasing patent activity was not an ATP objective; rather, the objective was to encourage private investment, with public assistance, in projects that would have large spillovers for the economy at large. Dr. Zucker responded that the study was not focused solely on patents but on a variety of possible impacts of the ATP; patent data were one useful way to quantify impacts. She also said that patents generate more spillovers than trade secrets, for example. Citations to patents could be used as a measure of
spillovers, she said, while acknowledging the imperfection of patents as a proxy for spillovers. Dr. Flamm cautioned that one might find the same number of patents pre- and post-ATP in a company or companies and draw the inference that the ATP had had no effect. It is possible, however, that the ATP encouraged companies to generate patents with greater spillovers than before, meaning that the program made a difference. Dr. Zucker responded that she and Dr. Darby were developing a way to assess the quality of patents among ATP recipients.

Alan Lauder of DuPont asked Dr. Ruegg about the 12 project terminations in the ATP, noting that we can often learn more from failures than from successes. What, Mr. Lauder asked, has been learned from these failures? Dr. Ruegg said that the 12 projects were terminated within the time horizon of Dr. Long's study and that 12 more have been terminated since, amounting to 6 percent of all ATP projects. About 20 percent of the terminated projects were joint ventures that never got off the ground. Another 20 percent were small companies that went bankrupt. One-third of the terminations resulted from a change in company management or strategy; something internal changed that caused the company to stop pursuing the project. A final class of projects was terminated for technical reasons: the R&D team concluded that the technical challenges were too great to be overcome in the context of the project.
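Dr. Ruegg's figures can be checked with simple arithmetic. The sketch below is a back-of-envelope reconstruction, not part of the original remarks; it assumes the "6 percent" refers to the 24 total terminations (12 in Dr. Long's study plus 12 since) and rounds each stated share to a whole number of projects.

```python
# Back-of-envelope check of the termination figures quoted above.
# Assumption (not stated explicitly in the remarks): 12 + 12 = 24
# terminated projects represent about 6 percent of all ATP projects.
terminated = 12 + 12                       # terminations then + since
total_projects = terminated / 0.06         # implied total project count

# Approximate breakdown of the 24 terminations, per Dr. Ruegg:
joint_ventures = round(0.20 * terminated)  # JVs that never got off the ground
bankruptcies   = round(0.20 * terminated)  # small companies that went bankrupt
strategy_shift = round(terminated / 3)     # management or strategy changes
technical      = terminated - (joint_ventures + bankruptcies + strategy_shift)

print(round(total_projects))                                   # ~400 projects
print(joint_ventures, bankruptcies, strategy_shift, technical)  # 5 5 8 6
```

The categories sum back to 24, so the quoted shares are internally consistent, with the residual (about a quarter of terminations) attributable to technical reasons.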