The goal of this session is to try to illuminate and inform the discussion about some of these emerging technologies, the related social processes, some specific pilot projects, and the challenges and opportunities that may provide the basis for this kind of future “publishing process.” This is put in quotes because we may someday not think of it explicitly as a publishing process at all, but rather as something more holistically integrated into the “knowledge creation process.”
Paul Resnick, University of Michigan
When some people think about changing the current publication process to a more open system, they express concerns that scholarly communication will descend into chaos, and that no one will know which documents are worth reading because we will not have the current peer-review process. This idea should be turned on its head. Instead of going without evaluation, there is the potential for much more evaluation than the current peer-review process provides. We can look at what is happening outside the scientific publication realm on the Internet for clues about where this could go.
In today's publication system, there are reputations for publication venues. Certain journals have a better reputation than others, and certain academic presses have strong reputations. There are a few designated reviewers for each article, and these serve as gatekeepers for the publication. An article either gets into this prestigious publication, or not; it is a binary decision. Then afterward, we have citations as a behavioral metric of how influential the document was.
We can examine some trends on the Internet to see how they apply to the scientific publication and communication process. There can be a great deal of public feedback, both before and after whatever is marked as the official publication time, and we can have many behavioral indicators, not just citation counts.
Let us consider some examples of publicly visible feedback. Many Web sites now evaluate different types of products or services and post reviews by individual customers. In the publishing world, many people are now familiar with the reviews at Amazon.com, both text reviews and numeric ratings that any reader can contribute. Many of us find these quite helpful in purchasing books. We do not yet have this for individual articles in scientific publishing, but we do have it for books, even some scientific ones.
Even closer to the scientific publishing world is a site called Merlot, which collects teaching resources and peer reviews them before they are included in the collection. Even after a resource is included, members can add comments. Typically, such comments are made by teachers who have tried using the resource, reporting what happened in their classrooms, and so on. The member comments do not always agree exactly with the peer-review comments.
These examples provide a sense of what is happening with subjective feedback that people can give on the Internet, beyond the traditional peer-review approach.
With behavioral indicators, you do not ask people what they think about something; you watch what they do with it. That is the idea behind the citation count. For example, Amazon.com, in addition to its customer reviews, has a sales rank for each book.
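The distinction between subjective feedback and behavioral indicators can be sketched in a few lines of code. The articles, ratings, and citation counts below are entirely hypothetical; the point is only that the two signals can rank the same documents differently:

```python
# Illustrative sketch: ranking the same articles two ways, by
# subjective feedback (average reader rating) and by a behavioral
# indicator (citation count). All data is made up for illustration.

articles = {
    "Article A": {"ratings": [5, 4, 5], "citations": 12},
    "Article B": {"ratings": [3, 3, 4], "citations": 40},
    "Article C": {"ratings": [4, 5],    "citations": 7},
}

def avg_rating(name):
    ratings = articles[name]["ratings"]
    return sum(ratings) / len(ratings)

def citation_count(name):
    return articles[name]["citations"]

# Two orderings of the same documents, highest first.
rating_rank = sorted(articles, key=avg_rating, reverse=True)
citation_rank = sorted(articles, key=citation_count, reverse=True)

print(rating_rank)    # ['Article A', 'Article C', 'Article B']
print(citation_rank)  # ['Article B', 'Article A', 'Article C']
```

Here the article readers rate most highly is not the one most cited, which is why having both kinds of signals, rather than a single binary accept/reject decision, can enrich evaluation.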
Another example is Netscan, a project by Mark Smith at Microsoft Research that collects various behavioral metrics about Usenet newsgroups.
Google uses behavioral metrics, the links between pages, in its PageRank algorithm. Many people check how they are ranked on Google for various search strings. However, Google is not just doing a text match; it