As I was exploring the issues related to this particular workshop and the various references the organizers had assembled for the meeting, I found this quote from the Australian National Data Service website of particular interest:
Data citation refers to the practice of providing a reference to data in the same way as researchers routinely provide a bibliographic reference to printed resources. The need to cite data is starting to be recognised as one of the key practices underpinning the recognition of data as a primary research output rather than as a by-product of research. While data has often been shared in the past, it is seldom cited in the same way as a journal article or other publication might be. This culture is, however, gradually changing. If datasets were cited, they would achieve a validity and significance within the cycle of activities associated with scholarly communications and recognition of scholarly effort.5
The last sentence of this quotation is actually quite complex and fraught. I will argue today that it rests on at least two questionable underlying assumptions. The first is that, by virtue of being citable, data achieve an equal footing with traditional publications in institutional merit review of scholars. The second is that data standing alone, without an interpretive layer (such as an article or book) and without having been peer reviewed, will be weighted in tenure and promotion decisions the same as a traditional publication.
The centrality of career advancement in a scholar’s life
My argument (which I hope is not too circuitous) is that these two assumptions are contrary to what our research suggests. As we have demonstrated (Harley et al., 2010), the primary drivers of faculty scholarly communication behavior in competitive institutions are career self-interest, advancing the field, and receiving credit and attribution. Although the institutional peer-review process allows flexibility for differences of discipline and scholarly product, a stellar record of high-impact, peer-reviewed publications continues to be the most important criterion for judging a successful scholar in tenure and promotion decisions. The formal process of converting research findings into academic discourse through publishing is the concrete way in which research enters the scholarly canons that record progress in a field. And, as the formal version “of record,” peer-reviewed publication establishes proof of concept, precedence, and credit to scholars for their work and ideas in a way that can be formally tracked and cited by others. Accordingly, data sets, exhibitions, tools/instruments, and other ‘subsidiary’ products are awarded far less credit than standard publications unless they are themselves ‘discussed’ in an interpretive peer-reviewed publication.
The importance placed by tenure and promotion committees, grant review committees, and scholars themselves on publication in the top peer-reviewed outlets is growing, not decreasing, at competitive research universities (Harley et al., 2010: 7; Harley and Acord, 2011). There is concomitant pressure on everyone in the academy, including scholars at aspirant institutions globally, to model this singular focus on ‘publish or perish,’ which we and others would argue translates into a growing glut of low-quality publications and publication outlets. This proliferation of outlets has placed a premium on separating prestige outlets (with their imprimatur as proxies for quality) from those that are viewed as less stringently refereed. Consequently, most scholars choose outlets to publish their work based on three factors: (1) prestige (perceptions of rigor in
5 Australian National Data Service: http://www.ands.org.au/guides/data-citation-awareness.pdf