The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
3. Breaking Anti-Commons Constraints on Global Scientific Research: Some New Moves in "Legal Jujitsu"

Paul A. David,4 Stanford University & All Souls College, Oxford & United Nations University-MERIT, Maastricht

I expect that most of those who are attending this symposium will have heard something about "the anti-commons" and that they understand that it is not a good thing. Perhaps they have also run across the article in which Michael Heller and Rebecca Eisenberg5 argued that the monopoly rights granted to inventors under the patent system of the United States and many other countries—ostensibly for the encouragement of inventive activity, or at least public disclosure of its results—actually might have the perverse effect of inhibiting both invention and innovation. Surely there will be others present who recall the message of Garrett Hardin's 1968 article "The Tragedy of the Commons,"6 in which it was the opposite of the "anti-commons" that figured as a decidedly bad thing. At least, that is the view one is left to draw from Hardin's account of historical experience with common-use arrangements, such as the common grazing rights on lands held by agrarian communes in medieval Europe that led inexorably to the destruction of valuable and exhaustible resources by "over-grazing." So, we are confronted with two disconcerting if not necessarily contradictory views about the effects of private property rights in valuable economic resources: the absence of the right to exclude others from trespass, as in the case of "the tragically over-grazed" village commons, leads to a bad outcome; but the same may be said about the presence of the patent-holder's right to exclude others from the use of the patented invention. Is either of these propositions valid as a general rule? How can both be true?
This brief presentation is premised on my conviction that it is important not only for economists and lawyers, but also for people with the diverse range of expertise that is represented in this audience, to recognize and understand the "anti-commons effect" as a general economic phenomenon, in the same way that we grasp the logic of the more familiar argument that valuable resources are best held as private property, because their owners would have strong incentives not to exploit them wastefully. Although land and other tangible physical resources hardly are the same as data and information, and patent rights differ from copyrights and database rights, the economics of both the commons and its dual, the anti-commons, will be seen to be directly germane to the subjects presently under discussion. Like the microbial commons, many of the large scientific and technical databases that have been constructed either for the public domain or made available on an open access basis to qualified users constitute "research resource commons" or "semi-commons."7

4 Presentation slides available at http://sites.nationalacademies.org/xpedio/idcplg?IdcService=GET_FILE&dDocName=PGA_053729&RevisionSelectionMethod=Latest.
5 M. A. Heller and R. S. Eisenberg, "Can patents deter innovation? The anti-commons in biomedical research," Science 280 (1 May 1998): 698-701.
6 G. Hardin, "The tragedy of the commons," Science 162 (1968): 1243-1248.
7 The term "semi-commons" has been employed by several previous presenters in referring to contractually constructed common-use arrangements that occupy a position intermediate between the public domain and the private domain. As will be seen in the following pages, I prefer the description of such arrangements as "club commons."

The Problem of the Commons in Theory and History

It will be useful to begin with "the commons," in order to put aside confusions that appear frequently in the economic and legal literature due to widely shared misconceptions about the "tragedy" of common-use exploitation of land, fisheries, and other natural resources. That desertification is a tragic consequence of "over-grazing" in many parts of the world is not in doubt. Prolonged unrestricted livestock grazing in arid climates, by sheep in the Patagonian region of Argentina and by goat-herds in northern Chile, has been a major contributor to contemporary desertification, just as unrestricted grazing by cattle in the rangelands of southern Texas was a factor in the region's "dust bowl" during the 1930s. Likewise, it is thought that after the 3rd century C.E. the multiplying Bedouin sheep-flocks in the Negev combined with the decline of agriculture and the abandonment of the associated irrigation dams and channels to produce a reversion to the desert conditions that are found in the southern region of modern Israel. But the exhaustion of village lands in medieval Europe due to over-grazing of their common fields, the illustration that Garrett Hardin provided as a parable supporting his argument that efficient natural resource use requires private property rights and market pricing of resource use, is just a fantasy. Its repetition, unfortunately, has served only to obscure an important lesson that the actual historical experience holds in regard to the management of common-use resources. Europe in the Middle Ages never knew public domain "commons" of the sort Hardin imagined. Where once-cultivated lands were allowed to tumble down to grass and village settlements eventually were "lost," this was symptomatic of the retreat of agriculture from the marginal, semi-arid regions into which population had expanded in the 13th and early 14th centuries—before the mortality crisis of the Black Death.
In reality, the feudalized regions of western Europe knew no territories that formally were in the public domain; "nulle terre sans seigneur" (no land without a lord) was the canonical expression of the situation under which control and exploitation rights over physical property—arable, pasture, woodlands, and sub-surface mineral deposits—had come to be held by one or another king and their respective vassals. Thus, the areas of agrarian settlement where "common-rights" were established were not the wilderness or "waste," but lay within the jurisdiction and governance of particular communities that regulated, inter alia, the number of animals that each of the holders of a tenure in the village could put to graze upon the common stubble-field following the harvest, or on the common meadow-lands close to the village. What is important to emphasize is that seasonally designated common-fields and their meadow-lands were not open for all to use, not even for the villein tenants to exploit at will by bringing in additional hands from other villages, say to glean the fallen grains of wheat following the harvest and before cattle were turned into the fields to graze upon the stubble.8 The managed commons of the village communes in medieval (and early modern) Europe therefore exemplify the "club goods" form of resource commons—intermediate between the public domain and the regime of private property.

The commons in tangible exhaustible resources is not a defunct institution, for collective ownership of exhaustible resources did not, and does not, translate automatically into a chaotic struggle for possession among neighbors, nor does it result in the egalitarian distribution of use-rights. Even in western Europe today, such arrangements based upon de jure common use rights (res communes) dating from the Middle Ages have survived in the Swiss Alps and northern Italy—e.g., the Magnifica Comunità di Fiemme, in the valley of the Avisio (Trento)—where they still govern the use of tens of thousands of hectares of alpine forests, pasture, and meadow land.9 I have undertaken this historical digression as a means of putting aside a number of the overly simplistic and misleading preconceptions that have developed around the popular story of the tragedy of the commons.10 Inasmuch as the "microbial commons" involve the curation and sharing of tangible research resources, the point that has been emphasized regarding the importance of user-based governance of natural resource commons is immediately germane. But, because we are here concerned also with digital commons for sharing scientific information and data—some of it representing "metadata" that directly complements the organic material of the microbial commons—a different point should be emphasized: unlike land, the productive value of data is not diminished by "over-use" per se. Data and information are more akin to fire than to coal: one gains light from them without their being consumed in the process. This does not, however, warrant the conclusion that there is no need to restrict access to a scientific data and information commons because it will remain "un-depleted" by repeated, intensive utilization. While the latter is true, governance arrangements cannot be discarded if the quality of the data and the reliability of the information are to be maintained.

8 The presentation slides illustrate the detailed nature of the limitations placed upon the exercise of these "common rights" in the case of Salford Manor, in Oxfordshire, at the end of the 16th century.
Data can be degraded by being mixed with other data that are inaccurate, and screening of contributed materials to minimize that form of "contamination," standard formats for data, and accompanying minimum metadata requirements need to be enforced in order to ensure the widest extent of usability of the commons' resources. The imposition of management procedures with restrictions on contributed resources, rather than limitations on access to prevent "congestion" or "overuse," is the primary economic rationale for the "club goods" form in the case of scientific resource commons. For in this case, and particularly that of digital research resources, the value of the commons actually improves with more intensive exploitation by members of an extensive community of expert users. The removal of recording and copying errors, and the annotation of data-files and reports that link the contents to the corpus of published research findings and to related datasets with which they may be federated for further analysis, are semi-automatic consequences of the symbiotic relationship between a research community and the resource commons that it builds and exploits. Thus, we are here in a world very different from the "tragic commons" conjured up by Garrett Hardin and repeatedly invoked by advocates of private property rights as the necessary condition for efficient resource use.

The threat posed by the anti-commons, however, is quite another matter. In an earlier presentation to this symposium, Paul Gilna11 noted that biological communities and microbial biology communities are entering Leroy Hood's second phase of scientific breakthroughs, where, having worked on the problem of how to generate and capture new data, they now have to think about what to do with the data. What are the modes of analysis we need to handle data at this enormous scale and volume, and to do it with links connecting a distributed community? In this context, one also must take into account the fact that a portion of the interested researchers at present, and probably for some time into the future, will not have the training to write their own analytical algorithms or even to use the open-source algorithms that are already available to work with the data. There are consequently advantages of scale in use, for very large groups are more likely to draw in supplementary resources and to mobilize the necessary expertise of the few members who can contribute to the development of analysis tools. Minna Allarakhia12 pointed out in her presentation that some open-access communities are beginning to provide complementary access not only to data and to archived publications, but also to analysis tools, search tools, and other types of research tools. The premise was that, without such analytical tools, users will not be able to benefit sufficiently from the enormous data investments that are being made. Here is the point: if you go down the road towards user-friendly analysis tools, you will enter the part of the software world where commercial software vendors have been operating. This is off-the-shelf software.

9 See also P. A. David, "Mitigating 'anti-commons' harms to research in science and technology," UNU-MERIT Working Paper No. 2011-001, Analysis and Debate of Intellectual Property Issues, forthcoming 2010.
10 These, unfortunately, continue to figure in leading economists' textbook expositions of the "common-pool problems," as, for example, this one in Suzanne Scotchmer's Innovation and Incentives, Cambridge, MA: MIT Press, 2004 (p. 88): "The anti-commons is a play on words and refers to the 'tragedy of the commons' which is taught in freshman economics. In the tragedy of the commons peasants in early modern Britain overgrazed shared pastures ('the commons') because the absence of private property eliminated incentives to conserve."
By contrast, the typical mode for large scientific work groups has been to assemble their own software—put together a lot of pieces that they already have experience with, make something that works quickly, and keep on going. Now, people in the United States and in other parts of the world where patenting of software is possible have patented many of the subroutines and algorithms, which are then embedded in black boxes—machines where you feed the data in, press a button, and something comes out. This has been going on for a long time. It happened in physics with mass spectrographic analysis. Fast algorithms make it possible to do things in real time, but only for the people who have access to them. There have been ongoing discussions, even in the journals, as to whether people should be forced to publish these algorithms, but those who developed the algorithms resist. They say, "No, we are working on it. We are trying to document it. We are going to upgrade it. It could take a year. And then we are going to release it for sale." So researchers are likely to bump up against patent thickets, where some key algorithms are not freely available. It is possible to go around that. A group of open-source people might make it a project to write an open-source version, for instance, or contact people familiar with that type of software and ask them to work on it. The result, however, will not necessarily be user-friendly. In the case of UNIX applications, this can be a useful step that will yield new and more efficient customized code. There are two problems, though. First of all, many of the users in the commons will need to have somebody repackage these custom-made programs to make them usable. That imposes an extra cost. Secondly, there will be little standardization in this approach, and standardization of analysis techniques is one of the ways in which research communities increase the replicability and transparency of their procedures, both of which are desirable properties of research tools. With packaged software, people do not have to go through a long description of the published algorithms. They get a standard algorithm that is widely used—it is in the library someplace, or it is in a commercial package, and the code is "stabilized" so that attempts at replication are not frustrated by ambiguities regarding which particular "release" or customized version of the algorithm had been used in obtaining the results reported in publications. This is one of the things that eases access for the people—usually not those at the cutting edge of the new field, but those who are following after—who are going to do the normal science in the field.

The process of "black boxing" and commercialization reduces the cost of providing user-friendly techniques to the less skilled, but the impulse of the holder of a patent filed on the original prototype to share in the profits from the developed commercial version(s) may not only inhibit that development, but also create an obstacle for those who would simply implement the basic concept of the (patented) research tool in their own work. The point here is that the device of a contractually constructed commons—the phrase introduced in the seminal 2003 publication by Reichman and Uhlir13—addresses the reality of the research world in which prior work has given rise to patented procedures, or other IPR and sui generis forms of restricted access (database rights, in the EU), that encumber subsequent application and extension.

11 See Chapter 17 within this publication.
12 See Chapter 20 within this publication.
This source of impediments to the cumulative, incremental advancement of scientific knowledge has been a prominent concern in the present discussion, largely because our focus has been upon "the data tsunami"—the massive wave of newly available data, much of which is not copyrightable and not patentable. But there are also new fields with new analysis tools that are emerging, and as people follow the science, eventually they will wander into some part of this terrain and they will find that others have staked out property rights there before them. During the remainder of this presentation, therefore, I will emphasize the case for the creation of digital resource commons as an ex post fix for those inherited problems. There are many fields where researchers who are trying to do collaborative work are tripping over the fact that the downside of building on the shoulders of giants is that sometimes you are building on the shoulders of pygmies.14 In this case, a researcher may find not a step created by others on which it is simple to build, but an obstacle in the path that requires paying a fee if it is to be used, or a cost in time and effort if one is trying to work around it. Leading research groups in some fields—which is to say academic researchers in many instances—have patented many results, research tools among them, and thereby have left in their wake obstacles for researchers in the same fields, and also for those seeking to transfer established techniques to new fields of investigation. Frontier researchers will prefer to proceed as far as possible by employing and adapting, where necessary, tools that have become standard, well-known, and documented, and such tools in the future are more and more likely to come with IPR restrictions.

So what is the anti-commons problem? The anti-commons problem is like an onion—a simple onion, in which there are three discernible layers. Layer 1 is search costs: the costs of discovering whether tools described in the research literature are privately appropriated and to whom the property rights were assigned, whether as patents, copyrighted computer code, or database rights. If you have distributed inputs into a process, the inputs are not all in one place, they have been produced by different people, and it takes you a while to look at them. If this is an area covered by patents, patent searches can be very lengthy and very costly processes. Layer 2 is transaction costs. These arise when one has identified the owner of the intellectual property (IP) and seeks a license or an agreement to transfer materials. These are different from the search costs. This is negotiation, and the key attribute is that negotiations take time. Even if your university has a lawyer whom you can use, so will the other side. There will be many lawyers, and they will have many meetings and many expenses. This is a long process. I experienced some of this when I was at Oxford University. Even when people were not holding out, there were delays. Everybody wanted to see what the contracts were going to be, so negotiations stretched out over 18 months for a demonstration project that had something like a 3-year budget. Ultimately, they went ahead without anything, but the university lawyers were complaining and saying, "No, you cannot do that." It gets even worse when more fundamental research is involved, because it is impossible to know what is going to come out of it.

13 J. H. Reichman and P. F. Uhlir, "A contractually reconstructed research commons for scientific data in a highly protectionist intellectual property environment," Law and Contemporary Problems 66 (2003): 315ff. Available at http://www.law.duke.edu/shell/cite.pl?66+Law+&+Contemp.+Probs.+315+(WinterSpring+2003)
14 There is a double meaning in this allusion to the now widely repeated phrase, uttered by Newton in the course of a priority dispute with Hooke. Hooke was a very short man, and Newton rather nastily quipped: "If I have seen farther, it is by standing on the shoulders of giants" (and not persons of little stature—both physical and intellectual).
At Oxford University I encountered the office of Research Services, which was staffed by very competent solicitors whose main responsibility was to do "due diligence," protecting the corporate interest of the institution from the harms to which it could be exposed by embarking on faculty-initiated research projects. The lawyers took it upon themselves to worry about liability for accidents involving new and dangerously toxic materials, entanglements in the liabilities of other institutions with which the university had joined in collaborative agreements, and the possibility of suits by third parties claiming to have been injured (commercially or otherwise) by following advice based upon a research publication. But in addition to the hazards of untoward outcomes, there also were the risks of failing to fully exploit opportunities that might arise from successful research. Not getting the largest possible share of the income derived from commercial exploitation of research findings is no less a "risk," when viewed from the window of the Research Services office, than failing to protect the university from a liability lawsuit or a charge of patent infringement. The reality is that such failures do not simply represent the loss of a potential benefit; they carry penalties.
Not obtaining strong patent rights on a discovery or invention that could turn out to be important—and, moreover, a major source of revenue had it been properly privatized—could expose the institution's leaders to the kind of response that Oxford's requests for increased overhead research funding were known to have elicited on some occasions from officials in Her Majesty's Treasury: "Had you only thought to patent penicillin, you wouldn't need to be here now, would you?" Due diligence therefore suggested that in negotiations about collaborative research agreements it was better to seek the strongest possible IPR protections for the university's interests, or to push all the conceivable liability risks (or the costs of insuring against them) onto other parties, even if this strategy would have the result of blocking the project in question from going forward. Considering that there was no end to the number of research proposals that the faculty seemed able to bring forward, the best (diligently cautious) stance was to be wary of those that were surrounded with greater uncertainties. The problem with this, however, is that uncertainty is in a sense the hallmark of novel, more interesting research proposals—those that typically distinguish academic science and engineering from the projects to which the bulk of corporate R&D funding is committed. The uncertainties about the nature of the products and processes of the proposed research project, taken in conjunction with the professional incentives of those charged with performing "due diligence" and their inability to calculate the countervailing value of the losses entailed in not doing the research, tend to promote behaviors that reflect extreme risk aversion. Fears of failing to secure as large as possible gains from intellectual property rights on university-conducted research appear to be a major source of protracted negotiations over collaborative agreements. This is observed not only where inter-institutional collaborations and university–business projects are involved, but also in cases of projects proposing grant or contract research to be conducted in different departments and schools within the same institution. In other words, the representatives of the university's corporate interests, as distinct from those of their faculty researchers, are predisposed to advocate and adopt a tough bargaining stance, trying to get the other collaborating party (or parties) to yield the greater part of any potential income that is envisaged to result from the research, and to bear the greater part of the potential liabilities, or the costs of insuring against them. Moreover, should that appear to be infeasible, the conscientious legal counsel will not hesitate to recommend that the project should not be undertaken.
Understandably, that stance tends to come as an unwelcome surprise to uninitiated prospective corporate "partners" who entered the negotiations with the expectation that "the university" would be seeking a way to satisfy the interests of the faculty counterparts of their own research group, just as they themselves were under instructions from the vice president of research to find a way to "make the project happen." Disappointment of those expectations would at least account for the shocked and disparaging terms in which research directors of large, R&D-intensive U.S. companies have expressed their views about the experience of negotiating with universities over the IP rights to joint R&D ventures, such as those reported on the basis of a survey of 60 vice presidents of research that was carried out by Hertzfeld, Link, and Vonortas. The consensus view was that trying to deal with universities over IP matters was much more difficult, and more frequently unsuccessful, than negotiating collaborative research agreements with other business companies.15

15 See the 2003 survey results reported by H. R. Hertzfeld, A. N. Link, and N. S. Vonortas, "Intellectual Property Protection Mechanisms in Research Partnerships," Research Policy 35 (June-July 2006) [Special Issue on Property and the Pursuit of Knowledge: IPR Issues Affecting Scientific Research, P. A. David and B. H. Hall, eds.]. See also P. A. David, "Innovation and Europe's Universities: Second Thoughts about Embracing the Bayh-Dole Regime," in Perspectives on Innovation, ed. F. Malerba and S. Brusoni, Cambridge U.P., 2007: pp. 251-278, esp. Table 1 and accompanying text discussion.

The Core of the Anti-commons: "Multiple Marginalization"

The foregoing difficulties are further compounded by the problems encountered when one reaches the third layer, the innermost core of the anti-commons phenomenon. It involves the condition referred to by economists as "multiple marginalization," which copyright lawyers will be familiar with as "royalty stacking." It arises when there are many parties holding exclusion rights over the use of research tools, each of them asking to be paid what to them appears no more than a reasonable royalty for the license to use the patented research tool. To assemble a collection of photographs for a book, for example, may entail paying for the copyright license on every image, and when these are separately owned, the copyright holders individually will ask to receive what appears to be a modestly small percentage of the revenue from the prospective sales of the book. But it mounts up: a 0.5 percent royalty charge levied for each license to reproduce 50 different color photographic plates will take 25 percent of the sales revenues from the print run of the book. The analogy to the art book's photographs is the collection of different research "tools" that will be used in carrying out a proposed scientific research project, many, if not all, of them under patents or copyrights that are held by distinct parties. Even when there are no strategic holdouts in the negotiations over licenses, and even though the negotiations can be rapidly concluded, when the number of items is large, what seem to be very reasonable requests for very low IP royalties to be paid on commercial sales of downstream research products add up: the total bill for royalties—which none of the distributed IP owners has considered—can become an obstacle to going forward with the project.

In a survey conducted among academic biomedical researchers at U.S. universities and research institutes, John Walsh, Ashish Arora, and Wesley Cohen asked whether the respondents had abandoned a research project because the costs of obtaining licenses on patented research tools were unsupportably large.16 They found so few such instances of blocked or abandoned research projects that they reported the victims of the anti-commons (in the field of biomedical research) to be "as rare as white tigers." While on the surface this seemed to be very good news, when considered more closely it took on a different cast. In the first place, it is unlikely that a planned project would actually be terminated once under way, whereas a preliminary investigation of the patent status of indicated tool sets that revealed a multiple marginalization problem would lead to modifications in the research design or, where that was infeasible, to substituting a different project altogether—before the one initially contemplated got under way. Secondly, a follow-up survey question asked what the research initiators might have done to avoid the impediments created by the requirement to license access to numerous tool-sets and data-sets. With surprising frequency the answer was: "We just don't pay any attention to the patents." This disclosure stirred some concerns about the consequent unknown extent of exposure of these scientists' universities to future patent infringement suits.17

16 J. P. Walsh, A. Arora, and W. M. Cohen, "Research Tool Patenting and Licensing and Biomedical Innovation," in The Operation and Effects of the Patent System, Report of the STEP Board of the National Academy of Sciences, National Research Council, Washington, D.C.: National Academies Press, December 2003.
17 Expressions of worry on that score from U.S. research university administrators increased noticeably following a 2002 ruling by the U.S. Court of Appeals for the Federal Circuit in a patent infringement suit. The judgment for the plaintiff in Madey v. Duke University greatly narrowed the scope of the so-called "research exemption" from patent enforcement that had been widely supposed to exist, and left American research universities at substantially greater risk from infringement suits than had previously been supposed. In its ruling in Madey v. Duke University, 307 F.3d 1351, 1362 (Fed. Cir. 2002), the court did not completely reject the research exemption defense, but left only a "very narrow and strictly limited experimental use defense." A patented process or device might be used without permission (license) for "amusement, to satisfy idle curiosity, or for strictly philosophical inquiry." The court also precluded the defense where, regardless of profit motive, the research was done "in furtherance of the alleged infringer's legitimate business." In the case of a research university like Duke University, the court held that the alleged use was in furtherance of its legitimate business, and thus the defense was inapplicable. The U.S. Supreme Court subsequently refused to hear Duke University's appeal, thereby allowing the Appellate Court ruling to stand.
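The royalty-stacking arithmetic of the book example can be checked in a few lines. This is only an illustrative sketch using the chapter's own hypothetical figures (50 separately owned plates, each licensed at 0.5 percent of sales revenue):

```python
# Royalty stacking: many individually modest royalties levied on the same
# revenue base. Figures are the hypothetical ones from the text above.
n_licenses = 50        # separately owned photographic plates
royalty_rate = 0.005   # 0.5% of sales revenue per license

total_share = n_licenses * royalty_rate
print(f"Share of sales revenue owed in royalties: {total_share:.0%}")  # 25%
```

The same arithmetic applies to stacked patent royalties on research tools: no single 0.5 percent charge looks unreasonable, yet the total burden is set by the number of independent claimants, which none of them individually takes into account.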

The problem with distributed claims is simply that the IP owners are acting independently, rather than considering the collective impact of all of their independent actions. Quite naturally, their position is: "Why should I be the one to desist, or to charge nothing for the use of my patent or database, if everybody else is going to demand royalties to grant a license?" And they figure that perhaps, if they hold out long enough, some of the others will reduce their charges, and they will not have to. This has serious implications for federated databases, because there are various ways that patented technologies can affect access to the materials in a database. Perhaps the data are locked up by patented encryption software, for example, or perhaps the search tools are patented. The patent holders can charge you to remove your own data from the database. Graham Cameron of the European Bioinformatics Institute (EBI), in his contribution to a 2002 EU working party report on IPR issues affecting Internet-based collaborative research, remarked that were one to try to replicate the EBI's federated databases in the then-existing environment, it would not be possible.18 The Institute could not raise enough money to buy off the rights holders, and the negotiations would consume far too much time. That may be an extreme example, but Cameron was making a point. In building some research infrastructures, we are out in front of the process of those people who are privatizing parts of the public domain, but we still have to work with the requirements of science, some parts of which are now impeded. A little microeconomic analysis indicates that this is to be expected under those conditions.

When database rights are distributed among commercial owners, each of whom independently sets prices on the contents in order to maximize the owner's individual profits, symmetrical owners will set the same charges for access rights, and the greater the number of databases that a project must consult, the higher will be the stack of access charges the project will face. Facing this elevated cost for the search activity that is an input into the planned research project, either the extent of the search will be restricted, or an alternative project that is less search-intensive will be substituted. You will either access less or you will substitute at the margin. You will not do extensive searches. If you need to do this kind of search, you are not going to either reverse engineer or write your own substitutes for the tools that are very important to your work. To do that, you would have to hope that you can use something that will not infringe, or perhaps you

18 See IPR Aspects of Internet Collaborations, EC/Community Research Working Paper, EUR 19456, April 2001. Not only were most of the European genetic and proteomic and ancillary demographic databases subject to copyright and database right restrictions, or protected by clickwrap licenses granting pass-through rights, but technical incompatibilities among the various digital rights management (DRM) systems that had been deployed would have frustrated the "deep linking" of database contents that was required for a searchable federated database.

OCR for page 13
might just keep quiet about what you are doing. The time that you would spend going around the databases, figuring out how to build new analysis tools, or using other instruments that you build in the lab would be enormous. And if, in the end, you cannot perfectly substitute for the database search, then the research product is going to be degraded. Exploratory science will be most affected in these cases, because you do not know in advance how to limit the discovery space. Commercial firms are less affected because they are looking for particular targets. For example, if a pharmaceutical company is trying to produce a particular drug, it may not need to use the epidemiological data or know about protein folding. It can just key in on the molecule it is interested in, to see whether it can figure out how to build the key that goes into that particular lock. For that purpose such firms are willing to pay a $100,000 flat access fee just to get into certain databases. That exploratory science will be most affected reinforces what people from the sciences have been saying: if these federated databases cannot be put together, then a lot of the potential is not going to be fulfilled. The outcome is actually worse than if a single monopolist owned all of the databases. A monopolist would be aware that setting the price of each database to maximize the revenue from that database alone reduces the number of people who will ever pay to use any given database, so that, unless the data in a particular database are critical, total revenues would likely decline; independent owners each ignore that interdependence when setting their own charges.

Responses to the Anti-Commons Problem

What is to be done? Preventing distributed IPR protections from being placed on materials that would form complementary sets of research inputs is perhaps the most straightforward line of attack on the core aspect of the anti-commons.
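The comparison just drawn between independent owners and a unified monopolist is the textbook “Cournot complements” result, and it can be checked with a few lines of arithmetic. The linear demand curve and all parameter values below are illustrative assumptions of mine, not figures from the symposium:

```python
# Toy model of royalty stacking ("Cournot complements").
# Demand for running a search project: Q = a - b * P, where P is the SUM
# of the access charges levied by the n database owners a project needs.
# Parameters are purely illustrative.

A, B = 100.0, 1.0  # assumed demand intercept and slope

def stacked_price(n, a=A, b=B):
    """Total access charge when n symmetric owners price independently.

    Owner i maximizes p_i * (a - b * (p_i + charges of the others)).
    The symmetric equilibrium gives p_i = a / (b * (n + 1)), so the
    whole stack is n times that, rising toward the choke price a/b.
    """
    return n * a / (b * (n + 1))

def monopoly_price(a=A, b=B):
    """Price a single owner of ALL the databases would set: max of P*(a - b*P)."""
    return a / (2 * b)

def total_revenue(price, a=A, b=B):
    """Combined revenue collected from projects at a given total charge."""
    return price * (a - b * price)

if __name__ == "__main__":
    for n in (1, 2, 5, 10):
        p = stacked_price(n)
        print(f"n={n:2d}  stacked charge={p:6.2f}  combined revenue={total_revenue(p):7.1f}")
```

With one owner the stacked charge coincides with the monopoly price; for every n greater than one the independent owners charge more in total and collectively earn less, which is the precise sense in which distributed ownership is worse even for the owners themselves.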
The young field of genomic research provides an exemplar of the preventive action that is feasible if people see the problem coming and can act swiftly in concert to avert its materialization: the International Haplotype Map (HapMap) Project, put together under the National Human Genome Research Institute in 2002 (see http://www.genome.gov/10001688). This was the result of a coalition of publicly funded researchers and some commercial firms, all of whom wished to avoid having lots of fragmentary gene sequences protected by patents or copyrights held by different research institutions, because that would greatly raise the costs of working with the data. To reduce the data-use costs of their research, they first reduced the number of single-nucleotide polymorphisms (SNPs) that would be needed in order to examine an entire genome for association with a phenotype. The HapMap project then followed the precedents established by the Human Genome Project in rejecting protection of the data under copyright or database rights, and established a policy requiring participants to release individual genotype data to all the project members as soon as it was identified. This is a special case of legal jujitsu, in which a “copy-left” strategy has been mutually imposed on database users by an enforceable contract in the absence of IPR ownership. In essence, “copy-left” says: I have something under copyright, and I am going to give you a license that does not let you exploit it exclusively, but instead makes you share it on a share-and-share-alike basis. This is the sort of logic used in the contractually constructed commons. Let us now return to the question of what happens in research fields where you cannot start afresh. This is the state of affairs for which the device of a
contractually constructed research resource commons originally was intended. The core idea of the neuroscience commons project, initiated by Science Commons (http://www.sciencecommons.org) under the aegis of Creative Commons, was to figure out some way to enable researchers to escape from the patent thickets in which their work had become entangled. Each of the researchers held a piece of the solution, and they found they needed to work together. The negotiations that were undertaken to create a way in which they would each pay each other for the set of licenses they needed eventually led them to ask whether it would not be simpler to put their IPR into a common pool, from which the members could freely draw the items they required.19

19 Science Commons’ Neurocommons Project (http://neuroscience.org), a collaboration between Science Commons and the Teranode Corporation, having created a database of open-access scientific information and data (content that is digital, online, free of charge, and free of most copyright and licensing restrictions), is using it also to build a semantic web to permit linkage of the contents and sophisticated search facilities for neuroscience research projects. (Here I should disclose an “interest,” in that I have been and remain a member of the scientific board of Science Commons.)

Some people who are proponents of market solutions for market problems ask: “Why won’t the market respond by having private intermediating organizations emerge and profit by providing a market solution for science’s anti-commons problem?” This was the idea behind the Collections Society proposal. The goal was to reduce the costs of searches and transactions in the same way that other organizations have done for copyright in music and other types of content. The idea is that making use of the IPR less costly will encourage research production by inducing more inventions. The Collections Society would have an incentive to write contractual provisions, such as grant-backs, in order to induce non-cooperating owners to share the use, and this would create incentives to put content into the Collections Society. It sounds very good when you first hear it, but there are lots of reasons to be skeptical. The main problem is that arguments by analogy in this area are really dangerous. Intellectual property is not the same thing everywhere. Authors typically want their works to be widely distributed, but inventors, and researchers creating databases for their own research uses, often do not seek a similar kind of wide distribution.

Copyrights in songs, texts, and even images are more likely to be close substitutes for one another than is the case with patents and scientific data. So, what is the response to this? It is this: inside the intellectual property domain you can try to create a space that emulates the public domain by getting people to volunteer to put their patentable or otherwise protected assets into it. In exchange they can benefit by being in collaborations with other people whose patented material they want to use. Other sorts of incentives may also appear if this becomes regarded as a good thing. There are preemptive benefits, for example: a researcher might put something into that space at an early stage in order to have some control over how it gets used later. There are a number of different ways in which a commons could be established. One important thing to keep in mind when you are designing something like this is that there are possibilities for abuse. The argument is that when you put a lot of resources into a club and it is not open to everybody, that can be a restriction of competition, and so competition regulators may want to look very closely at it. The defense against that charge is that this is an efficient patent pool, not an abusive one. An efficient patent pool is one that is constructed out of elements that are complements in some desired process (here, research production), because it is their complementarities that give rise to adverse
externality effects when ownership is distributed and owners do not take account of the effects upon others of their own price-setting decisions.20 The “efficient” scientific resource commons therefore should not bundle together extraneous intellectual property; the contents should instead be restricted to collections of research tools (including data and information) that will be close complements, in that they either already constitute an actual patent “thicket” that could block downstream use and elaboration of the research tools, or are expected to be used regularly in conjunction with one another in exploratory data searches and analyses. An objective empirical procedure for establishing the likelihood that a collection of patents (or copyrights) is an obstructive “thicket” would be particularly useful in addressing this issue. It is therefore relevant to notice Gregory Clarkson’s proposal and practical demonstration21 of a method of using network analysis to discover patent thickets, and thereby to identify collections that would, or would not, qualify for efficient-pool status. Nevertheless, dual pricing policies by foundations operating research resource commons would potentially be subject to abuse, and competition among those entities will be quite limited if they are successful in internalizing the complementarities among research tools. The conclusion therefore seems inescapable that there would be a need for continuing monitoring and vigorous antitrust supervision of these new institutional arrangements.

Looking Ahead

If you begin to look ahead on the path that would be opened by a coordinated program of commons formation to break the constraints imposed by extensive IPR restrictions on research tools, it appears possible that a desired outcome could be the retrieval from universities of a lot of their patented material, much of which has never even had a license issued on it, and some of which is used to form blocking patents.
When we get further into the development of nanotechnologies, we will have entered the first major research domain in which virtually all the fundamental tools will have been patented, many of them by universities. This will be a very different situation from that of the biotechnology revolution of the early 1970s, when the Cohen-Boyer patent on recombinant-DNA techniques was licensed on a nonexclusive basis at very low rates: $5,000 was the flat rate for the Cohen-Boyer license. In the future, by contrast, the consequence of extensive academic and public-institute patenting during the past three decades will be that many of the tools necessary for continuing advance in the new fields of application are proprietary. Cleaning up after that parade, and thereby opening the way for future scientific advances, will be an important task, one to which the institution of the contractually constructed research resource commons can contribute.

20 A substantial literature has recently developed in economics on the topic of “efficient pools” that is directly relevant in this context. See, e.g., J. Lerner and J. Tirole, “Efficient Patent Pools,” NBER Working Paper, 2002; C. Shapiro, “Navigating the Patent Thicket: Cross Licenses, Patent Pools, and Standard Setting,” Innovation Policy and the Economy, 1, 2000: pp. 119-150; M. A. Lemley and C. Shapiro, “Patent Hold-up and Royalty Stacking,” Texas Law Review, 2007. [Available at: http://faculty.haas.berkeley.edu/shapiro/stacking ].

21 See G. Clarkson, “Objective Identification of Patent Thickets,” Harvard Business School Working Paper, version 3.9, 2004.
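The thicket-identification idea referred to in footnote 21 can be caricatured in a few lines of code: treat citations among the patents in a candidate pool as a graph, and use the graph’s density as a screen for the mutual complementarity that marks a thicket. The density threshold and the use of citations as a proxy for complementarity are illustrative assumptions on my part, a sketch of the general approach rather than Clarkson’s calibrated method:

```python
# Crude network-analysis screen for patent thickets: a candidate pool whose
# members cite one another densely is treated as a plausible thicket of
# complements. The 0.5 threshold below is an arbitrary illustrative choice.

def citation_density(patents, citations):
    """Density of the intra-pool citation network: the number of observed
    citation links among pool members, divided by the maximum possible
    number of links between distinct members (direction is ignored)."""
    members = set(patents)
    links = {frozenset((a, b)) for a, b in citations
             if a in members and b in members and a != b}
    possible = len(members) * (len(members) - 1) / 2
    return len(links) / possible if possible else 0.0

def looks_like_thicket(patents, citations, threshold=0.5):
    """Flag a candidate pool as a likely thicket when its members' mutual
    citations exceed the (assumed) density threshold."""
    return citation_density(patents, citations) >= threshold

if __name__ == "__main__":
    dense = ["A", "B", "C"]                     # hypothetical patent IDs
    dense_cites = [("A", "B"), ("B", "C"), ("C", "A")]
    sparse = ["A", "B", "C", "D", "E"]
    sparse_cites = [("A", "B")]
    print("dense pool:", citation_density(dense, dense_cites),
          looks_like_thicket(dense, dense_cites))
    print("sparse pool:", citation_density(sparse, sparse_cites),
          looks_like_thicket(sparse, sparse_cites))
```

A screen of this kind is what would let a competition authority distinguish an efficient pool of complements from an opportunistic bundle of substitutes before granting it favorable treatment.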