Traditional methods of peer review are coming under strain as the volume of manuscripts and the number of forums for manuscript submission rise. These pressures can result in poorer quality reviews, extended publication times, and higher costs to the organisations that fund research. In this paper we describe a method for reducing reviewing burden, expediting feedback and shortening publication times. Furthermore, by its nature, the method produces leading (as opposed to lagging/trailing) publication metrics for authors and the manuscripts they write, and we show how these metrics can be used by search engines to provide more useful orderings of search results. Finally, we briefly discuss the potential to apply the underlying mechanism of the method to application domains beyond research publishing, such as the web as a whole.
Peer review has served as a cornerstone of scientific advancement over the last century. In some fields of scholarship, peer review has been integral for an even longer period. However, in recent times, several factors have conspired to prompt many to question traditional peer review processes. These factors range from issues of integrity to the emergence of technologies that might feasibly reduce the time to publication and the costs of knowledge dissemination.
Meanwhile, employers of researchers, government and non-government research funding agencies, and others rely on bibliometrics, in combination with various other measures of impact, to assess the strength of research groups or of prospective employees. Most bibliometric measures are trailing indicators, and it is unclear whether they reflect current publishing performance or can be used to predict future publishing performance.
The ongoing debates around peer review and bibliometrics are taking place against a backdrop of fundamental change in the post-publication phase of the research publishing workflow. The Open Access movement is gaining momentum, aided in no small part by recent decisions taken by the faculties of several major universities, including MIT and Harvard. These resolutions come in the wake of an earlier policy passed by the US Congress to make all research funded by the National Institutes of Health (NIH) freely available to the public twelve months after publication via PubMed. The cumulative effect of these mandates is to ensure that a large body of peer-reviewed scientific works, including those originally published under restrictive licenses and those published under some flavour of open access license, will remain freely accessible to the public. No doubt this growing momentum will prompt more research institutions to update their publishing policies in the coming months and years.
It should be noted that some fields of scientific endeavour already have an ingrained open access culture. Physicists, for example, routinely publish pre-print versions of their articles to arXiv.org, to solicit early feedback on drafts of their manuscripts, and to establish priority. According to the arXiv Primer,
[s]ubmissions are reviewed by expert moderators to verify that they are topical and refereeable scientific contributions that follow accepted standards of scholarly communication (as exemplified by conventional journal articles). In other words, these moderators act as a first pass filter, weeding out articles that are off topic or which are deemed not to contain a scientific contribution, but they are not expected to provide formal reviews or ratings of the manuscripts.
A large proportion of the papers submitted to arXiv.org are eventually published in an appropriate journal. For example, in 2005, almost sixty percent of papers submitted to arXiv.org in the field of high-energy physics went on to be published in a peer-reviewed journal (Mele et al, 2006). The administrators of arXiv.org do not publish download statistics; however, Meho (2006) suggests only 50% of (accepted) peer-reviewed articles are ever read by someone other than the authors and the reviewers. Furthermore, 90% of articles are never cited. Coupled with the fact that the most prestigious journals (or conferences for those fields of study, such as computer science, in which conferences are generally more highly regarded than journals) maintain low acceptance rates, we can infer that the vast majority of submitted manuscripts never see the light of day, even if they do successfully navigate the process of peer review!1
In this paper we describe a method2 that has potential to increase efficiencies in the dissemination and identification of important new research, and which, by its nature, introduces a potentially useful predictive publishing metric that can be used alongside existing bibliometric indicators such as citation counts, Eigenfactor (Bergstrom, 2008), h index (Hirsch, 2005) and its variants. Unlike existing measures of research quality, the statistic introduced in this paper is a direct reflection of how one's peers value one's research contribution at the present time and a forward indicator of the value to the research community of a piece of science. It is therefore a quantitative measure that is closely tied to the judgements of one's peers. The method also provides an incentive that encourages authors to further review their own work (or, equally, to pass their manuscripts to colleagues for comments and editing) prior to broader dissemination (submission to the editorial board of a journal, for example).
The rest of the paper is structured as follows. Section 2 provides an overview of related work. Section 3 details our approach to improving efficiencies in scientific communication and introduces a leading publishing metric. Section 4 analyses the properties inherent in the approach. In Section 5 we describe our implementation of the approach. Section 6 concludes with a discussion of future work.
In designing our method, we considered several properties to be essential, many of which are held over from traditional peer review:
These properties were derived from focus groups, a structured survey conducted within the author's research institution, and a collation of misgivings commonly aired by the research community3, and they were cross-validated through discussions with key stakeholders such as university vice-chancellors, funding bodies and program committee chairs.
A common argument against the removal of upfront peer review, a model that we might term extreme open access, is that the research community would be flooded with a high proportion of poor quality papers. The proportion of papers rejected under traditional peer review would seem to lend support to this claim. However, we also know that peer review has been responsible for rejecting papers that have, much later, either gone on to earn their authors a Nobel Prize, Fields Medal or similar, or established the groundwork for others to claim one of these prizes (for numerous examples in the field of economics, see Gans and Shepherd, 1994; in mathematics, a famous example is the rejection of Mordell's conjecture by the London Mathematical Society, a conjecture that was later proved by Faltings and won him the Fields Medal). While the initial rejections may have helped to improve these works, the fact remains that the central theses of these works were present in the original drafts. The rejections thus served to delay the dissemination of the valuable results at the heart of these manuscripts.
We need to find, then, a mechanism that enables the rapid dissemination of valuable scientific contributions, which also guards against a flood of less valuable manuscripts. One means for doing this is to require authors to stake some kind of collateral on their paper, which will be returned to them (with interest) if their confidence in the research contribution of the manuscript is vindicated by their peers. In so far as the collateral has some value to the authors, this design has the effect of making the authors ask themselves whether their manuscript is really of use to anyone, since they run the risk of losing the collateral if their peers judge the manuscript as having no research contribution. In the case that the authors decide the paper really is valuable, it can be disseminated very quickly. In the case where the authors decide the manuscript does not make a sufficient contribution in its current form, others are saved the task of reading and reviewing it.
To stand in as collateral, let us introduce a virtual currency, an equal amount of which is initially given to each researcher. To submit a paper, authors must stake a portion of their virtual tokens on it. The authors can reclaim their tokens by selling a portion of the equity in the paper to their peers.
As in traditional peer review, the authors' peers provide comments and feedback, and an overall rating, such as a strong accept or weak reject. To simplify the scenario, let us for the moment assume three possible ratings: strong accept, weak accept and abstain. Unlike traditional peer review, the peers are now required to back their rating with their tokens. A strong accept costs 2 tokens, a weak accept costs 1 token and abstention costs nothing. A paper is accepted if some pre-defined threshold of tokens is bid (cumulatively) by the peers. If the threshold is not reached, the paper is rejected. Importantly, we have changed nothing in the traditional form of peer review except to require peers to back their overall rating with their (finite) tokens. In return for this backing, the peers receive a stake in the manuscript proportional to the value of their overall rating. Now, each time a future manuscript cites the manuscript the peer has backed, they receive a dividend payment in tokens, which provides the incentive for backing their rating with tokens in the first place. Though it is not crucial to the approach, we note here that the dividend is sourced from the "submission fee" paid by the authors of the future manuscript that cites the current manuscript. In other words, the dividend does not simply materialise out of thin air; rather, there is a closed loop of capital flow in this system.
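The discrete scheme just described can be sketched in a few lines of Python. This is an illustrative sketch only: the token costs mirror the figures above, but the acceptance threshold and all names are assumptions of ours, not part of any deployed system.

```python
RATING_COST = {"strong_accept": 2, "weak_accept": 1, "abstain": 0}
ACCEPT_THRESHOLD = 5  # cumulative tokens required before a paper is accepted (assumed value)

def review(stakes, balances, reviewer, rating):
    """Record a token-backed rating: the tokens a reviewer spends become
    their stake in the paper, proportional to the strength of the rating."""
    cost = RATING_COST[rating]
    if balances[reviewer] < cost:
        raise ValueError("insufficient tokens")
    balances[reviewer] -= cost
    if cost:
        stakes[reviewer] = stakes.get(reviewer, 0) + cost

def is_accepted(stakes):
    """A paper is accepted once the cumulative bid reaches the threshold."""
    return sum(stakes.values()) >= ACCEPT_THRESHOLD

# Example: three peers rate a submission.
balances = {"alice": 10, "bob": 10, "carol": 10}
stakes = {}
review(stakes, balances, "alice", "strong_accept")  # costs 2 tokens
review(stakes, balances, "bob", "strong_accept")    # costs 2 tokens
review(stakes, balances, "carol", "weak_accept")    # costs 1 token
```

Here the cumulative bid of 5 tokens meets the threshold, so the paper is accepted, and each peer's stake equals the tokens they spent.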
We may generalise this approach by replacing the discrete ratings, strong accept, weak accept and abstain, with open-ended ratings, such that peers signal their opinion of the scientific worth of a paper by increasing or decreasing the price they are prepared to pay for a unit stake in the paper, or by increasing or decreasing the size of their stake in it. This generalisation does not change the fundamental operation of the mechanism described above. In effect, what we are suggesting here is a stock market-like system for research manuscripts, wherein peers purchase shares in the papers they wish to support. Their incentive for doing this is the dividend payment they receive from future citations, and the opportunity to sell their shares at a higher price than that at which they were purchased. The approach thus aggregates the collective wisdom of peers in assessing the frequency with which a manuscript will be cited in the future. We contend that the probability of future citations is a legitimate proxy for manuscript quality, since the value of the contribution in the manuscript is the primary information a researcher possesses when deciding how many citations a paper will receive in the future. However, as noted by Meho (2007), the manuscripts of eminent researchers are often cited ceremonially, which could inflate the number of citations a manuscript would otherwise receive. We discuss this further in Section 6.
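The market generalisation might be sketched as a direct share sale, with the most recent trade price standing in for the paper's current rating. A real exchange would match bids and asks in an order book; this sketch, with hypothetical names and an assumed initial issue of 100 shares to the author, records direct sales only.

```python
def trade(balances, holdings, seller, buyer, qty, price):
    """Transfer `qty` shares from seller to buyer at `price` tokens each.
    The price of the most recent trade serves as the paper's current rating."""
    total = qty * price
    if holdings.get(seller, 0) < qty:
        raise ValueError("seller lacks shares")
    if balances[buyer] < total:
        raise ValueError("buyer lacks tokens")
    balances[buyer] -= total
    balances[seller] += total
    holdings[seller] -= qty
    holdings[buyer] = holdings.get(buyer, 0) + qty
    return price  # last trade price

# The author initially holds all 100 shares created at submission;
# a peer who values the paper highly buys 40 of them at 2 tokens each.
balances = {"author": 0, "peer": 100}
holdings = {"author": 100}
last_price = trade(balances, holdings, "author", "peer", qty=40, price=2)
```

The sale both returns tokens to the author (recovering part of the submission stake) and records the peer's valuation in the trade price.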
In the above proposal, we make the simplification that negative reviews are signalled by abstention. In most cases we may retain this simplification, since a peer's tokens are finite and the backing of one paper necessarily means a peer forgoes the opportunity to back another. We can, however, cater for negative reviews by introducing short-selling of shares in those cases where it is really required. In most circumstances, though, it is the relative rating of two papers that is really of interest, and it is therefore enough that a peer backs one manuscript at the expense of another.
One aspect of our proposal that bears further discussion is the treatment of negative citations. We do not, in fact, treat negative citations any differently to positive or neutral ones, and nor is this required. If one paper cites another in a negative fashion, perhaps calling into question one of the conclusions in the earlier paper, it is true that the shareholders of that paper will receive a small dividend from that citation. If the research community comes to the consensus that the result in the earlier paper really is incorrect, this paper begins to be cited less frequently, which places downward pressure on the share price of that paper. An incorrect result can be pointed out only so many times before doing so becomes pointless (Garfield, 1979, p. 244).
We have now described the basic design of our mechanism for peer review. For clarity, we summarise our approach here.
It is important to note that this method is derived from the existing common approach to peer review, except that there is now a cost for submission, and peers must back their reviews with a portion of their finite tokens.
Our method has several useful properties. Interestingly, some of the more useful properties were not designed into the mechanism, but emerged as a by-product of the design.
The first property is that authors now have a strong incentive to carefully review their work, or ask their colleagues for thorough feedback, prior to wider dissemination. The reason for this is that if the authors are unable to sell a sufficient portion of the shares that are created as a result of their submission, they will be unable to recover the cost of submission. If their peers believe the work is valuable and will therefore be cited frequently, then the shares will be sold and the cost recovered. This provides a benefit to researchers by filtering out poor and mediocre papers, allowing researchers to spend more of their time on other aspects of conducting research. When used in conjunction with traditional journal or conference peer reviews, the submission cost is also likely to reduce the occurrence of recycled submissions, whereby a paper rejected from one venue is re-submitted to another with few or no changes. The pursuit of this publishing strategy will quickly result in the authors' "bankruptcy".
Another property is that peers have a strong incentive to back only those papers they believe contain important science, because it is these papers that become highly cited. While it is true that some of the most important scientific manuscripts see the number of citations tailing off at some point, it takes some years for this effect to occur. This effect occurs when the science contained in the manuscript becomes integrated into the common knowledge of a field. However, at the time of publication, it is difficult to tell whether a manuscript is so valuable that it will become one of these rare papers. Thus, from a peer's point of view, the best indicator of a paper's future citation rate is still its value to the research community (ceremonial citations notwithstanding), as judged by the peer.
Conveniently, the likelihood of future citations of a paper, as assessed by the community of peers, is reflected in the current share price of the paper. Unlike existing bibliometric indicators such as h index, citation counts and PageRank™, this metric is not monotonically increasing, and it is a leading indicator. Thus, if a theory is proved or disproved, or a study is later found to have been carried out with less than due regard for the scientific method, and so on, our rating corrects itself when this information comes to light. Any such metric is bound to be more volatile than lagging metrics. However, it is, perhaps, the overall trend of our indicator that is of most importance to funding agencies and so forth. Furthermore, we do not propose that this metric be used in isolation from existing metrics. Rather, it is a metric that fills an important gap in the current range of bibliometric statistics. The prices of shares in all manuscripts within a given research area can be aggregated to see how that area has fared over time. Comparisons may also be made across research fields, though it would be meaningless to compare the raw data directly, due to the different citation cultures, publication rates and numbers of researchers in those fields.
A participant's publishing and reviewing reputation is reflected in the value of their portfolio of shareholdings and accumulated tokens. Usefully, the proportion of their reputation that is due to authorship of valuable science can easily be separated from the proportion due to their ability to identify valuable science. As far as we know, this is the first quantitative measure of reviewing reputation. The set of researchers responsible for the authorship of valuable research is not necessarily the same as the set of researchers skilled in identifying valuable research. In fact, history shows that it is often those researchers who have gained a strong publishing reputation who are responsible for (wrongly) rejecting groundbreaking new research. As such, the identification of valuable science is an important skill that deserves recognition. Currently, reviewers receive no real credit for performing the task of peer review well, and there is little accountability in the process. Our approach makes progress on both these counts. A researcher's reviewing reputation is, presumably, of more interest than their publishing reputation to an editorial board or program committee.
The method also produces a valuable side-effect that we did not design for: the indicator introduced by our approach is resistant to citation collusion. As such, in so far as the metric introduced by our approach is adopted by funding agencies and employers to aid in their evaluation of research, citation collusion produces no benefit for researchers. To see why this is the case, take any subset of participants in the system. This subset of participants may engage in citation collusion, but this merely shifts tokens amongst themselves without increasing the sum total of tokens. The only way for this group to achieve an increase in the sum total of their tokens is if participants outside of this subset cite or purchase shares in their papers.
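The zero-sum argument can be checked mechanically: any sequence of transfers confined to a subset of participants leaves that subset's combined balance unchanged, and only a payment from outside the subset can raise it. A toy demonstration (all names and amounts hypothetical):

```python
def transfer(balances, payer, payee, amount):
    """Move tokens between two participants; the system total is unchanged."""
    balances[payer] -= amount
    balances[payee] += amount

balances = {"a": 100, "b": 100, "c": 100, "outsider": 100}
ring = {"a", "b", "c"}
before = sum(balances[p] for p in ring)

# The ring colludes: tokens merely circulate among its members.
transfer(balances, "a", "b", 30)
transfer(balances, "b", "c", 50)
transfer(balances, "c", "a", 10)
assert sum(balances[p] for p in ring) == before  # collusion creates no tokens

# Only a payment from outside the ring raises its combined total.
transfer(balances, "outsider", "a", 20)
assert sum(balances[p] for p in ring) == before + 20
```

However elaborate the internal citation or trading pattern, the ring's total grows only when outsiders cite or buy shares in its papers.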
Note also that share swap agreements, in which two or more parties agree to purchase shares in each other's papers at inflated prices, are ultimately doomed to failure, though they can be used to temporarily inflate the current share price of a paper. Although it is possible for a participant to bid an arbitrarily large figure on a share of a manuscript, they are likely to do so only if they are confident of receiving dividends in keeping with their bid price, and, subsequently, that they could close out this position by selling their share at the same (or a higher) price. Otherwise, they find themselves in a losing position: either they must sell their share at a lower price than that at which they purchased it, or their tokens are locked into this share, and they must forgo the opportunity to put those tokens towards purchasing shares in other manuscripts or submitting their own publications. In other words, at the instant a participant pays more for a share than the community of peers (that is, the market) values that share, they are in a losing position. This holds in the case of collusion, as well as the case where a researcher mistakenly over-values the contribution of a manuscript to the research community.
Another property of this approach is transparency. A scientist or manuscript has a particular rating because it is the aggregated score given by the research community. Furthermore, each researcher is accountable for the ratings they give because the act of providing a score has an effect on their own rating. Transparency is an important aspect of any rating system, since it is difficult to trust ratings that emerge from a black-box process, or some complex algorithm. Our mechanism is neither algorithmic in concept nor reliant upon secretiveness.
In this section we enumerate some of the ways in which the metric introduced by this paper can be used within the realm of scientific research.
In assessing project proposals and grant applications, funding agencies consider many factors in coming to decisions about whom to fund. These factors range from qualitative real-world impact measures to quantitative publication metrics. None of these factors is sufficient in isolation to come to a reasonable decision. However, in many fields of research, particularly those in which the primary outcomes are encoded within research manuscripts, publication metrics are a key statistic in assessing research quality. As previously mentioned, these statistics are usually trailing metrics. Missing from the research assessor's toolbox is a leading metric that gives some indication of the research community's current valuation of a researcher's work. In this manuscript we have introduced such a metric, which can be used in conjunction with existing trailing metrics to give funding agencies a more complete picture of the quality of research emerging from a particular group.
In applying for grants, researchers will often have already conducted some preliminary work which can be used to support the funding proposal, or they may have published "technical reports", "workshop papers" or "works-in-progress", which contain the seed of their idea. Currently used trailing metrics are of little use to funding bodies, because they will not provide any information about what a researcher's peers think about this contemporary line of scientific inquiry, simply because the work is too new. A leading metric, such as the one we have described, on the other hand, may have much value to funding bodies, because it does not require the accumulation of citations before useful information is revealed. Only a short period of time need pass, enough for some trading of shares in the relevant papers to have taken place, before these manuscripts acquire a value, determined collectively by the community of peers. The advantage this has over traditional peer review (or "expert review" in the case of some research grant proposal assessments) is that it taps a potentially much larger number of "reviewers" and aggregates their collective opinion.
Furthermore, employers may be interested in knowing what the research community thinks about the recent research conducted by a prospective employee. While h index, g index, PageRank™ and citation counts can give a cumulative indication of a researcher's past performance, these metrics are not responsive to recent events. Our metric allows employers to weigh contemporary achievements against historical ones. We do not claim that our metric is superior to existing trailing metrics, merely that it is a useful addition to the employer's toolkit, because it throws some light on an aspect of research performance that was previously in the shadows.
We have implemented4 the mechanism described above in a proof-of-concept web site, called Citemine. Citemine is a web-based exchange for researchers, who may upload manuscripts, and buy and sell shares in those manuscripts.
Before commencing the development of our implementation, we ran two focus groups with a combined total of thirteen anonymous participants drawn from various fields of science and social science. The majority of participants were aged under forty. The purpose of the focus groups was to qualitatively assess the participants' understanding of the underlying mechanism, and to determine an appropriate form for the implementation. The participants were given a five minute introduction to the mechanism described in this paper. The explanation did not include a description of the properties held by the mechanism. In order to gain some insight into the participants' level of comprehension of the concept, the participants were then given ten minutes to write down any properties of the mechanism (both positive and negative) that were immediately obvious to them. Eight of the thirteen participants responded to this task with at least one of the properties listed in Section 4 above. Several participants responded with more than three. Transparency and incentive for self-review were commonly suggested. Several participants also raised the problem of researcher identification. In particular, how does one prevent a malicious participant from gaming the system by submitting work or buying and selling shares as several fictitious researchers? The identification of these properties by the participants indicates a fairly high level of understanding of the underlying mechanism. This can, perhaps, be attributed to the mechanism's similarity to other market systems with which the participants would be familiar. The remainder of the time in the focus groups was used to complete a questionnaire that helped us decide how the system should be implemented, and what features the implementation should eventually provide. All participants responded that the system should be implemented as a web site. The alternatives were a desktop application or an application within Facebook.
To participate in Citemine, researchers sign up for an account, at which point they are allocated 1000 tokens. In Citemine, the currency is called the Real, after real numbers, not after the currency of Brazil. These reals (the currency symbol for the real is ℜ) can be used to publish manuscripts or purchase shares in other manuscripts. Submissions cost ℜ100, and this is split evenly amongst the authors. There are two basic kinds of manuscript in Citemine: prospect and retrospective. Prospect manuscripts are manuscripts that have not yet been formally published elsewhere. They can be technical reports, pre-prints, drafts and so on. A version of these manuscripts may later be published in a journal or conference proceedings. Retrospective manuscripts, on the other hand, are papers that have already been formally published. Retrospective papers can be added by anyone, and they can later be claimed by their authors at the usual submission price.
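The signup grant and fee split described above can be sketched as follows. The integer division is a simplifying assumption of ours for the even split; how Citemine handles fees that do not divide evenly is not specified here.

```python
SIGNUP_GRANT = 1000   # reals allocated to each new account
SUBMISSION_FEE = 100  # reals, split evenly amongst the authors

def sign_up(balances, user):
    """Open a new account with the standard allocation of reals."""
    balances[user] = SIGNUP_GRANT

def submit(balances, authors):
    """Charge the submission fee, splitting it evenly amongst the authors."""
    per_author = SUBMISSION_FEE // len(authors)  # assumes an even split
    for a in authors:
        if balances[a] < per_author:
            raise ValueError(f"{a} cannot cover their share of the fee")
    for a in authors:
        balances[a] -= per_author

balances = {}
for user in ("ada", "ben"):
    sign_up(balances, user)
submit(balances, ["ada", "ben"])  # each co-author pays R50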
At the current time, Citemine supports PDF documents and HTML-based manuscripts.5 PDF documents are manually uploaded to the site by the user, while for HTML-based documents the user need only provide the URL of that document. In addition, Citemine crawls e-print repositories that support the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). During beta-testing we limit this harvesting to a small set of providers, including arXiv.org and the University of Queensland eSpace repository. This process enables us to pre-populate Citemine with bibliographic data. Citemine participants can also insert bibliographic data for a manuscript without uploading the full text of the manuscript itself. A URL or DOI can be added to the record for the manuscript, which links to the authoritative full-text version of the manuscript. Citemine also accepts pre-prints and other sorts of previously unpublished material.
Depending on copyright restrictions, the full text of the manuscript may or may not be retained by the system. In the case that copyright restrictions prevent the full text being offered by Citemine, bibliographic data, including the title, authors and citations, is extracted from the manuscript before the full text is discarded.
Citemine applies automated citation parsing (Lawrence et al, 1999) to extract the citations from PDF documents, and expects HTML-based manuscripts to use the COinS "standard" for identifying references.6 It is on the basis of these citations that Citemine allocates dividends to shareholders as described in Section 3.
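Dividend allocation from citations, as described in Section 3, can then be sketched as a pro-rata split over shareholdings. The size of the dividend pool released per citation is an assumption of ours, and the names are hypothetical:

```python
def pay_dividends(balances, holdings, pool):
    """Distribute `pool` reals among a cited paper's shareholders,
    pro rata by shareholding."""
    total_shares = sum(holdings.values())
    for holder, shares in holdings.items():
        balances[holder] = balances.get(holder, 0.0) + pool * shares / total_shares

# Paper B cites paper A; a slice of B's submission fee (assumed here to
# be 10 reals) is released as a dividend to A's shareholders.
balances = {"alice": 0.0, "bob": 0.0}
a_holdings = {"alice": 3, "bob": 1}  # alice holds 75% of A's shares
pay_dividends(balances, a_holdings, pool=10.0)
```

Sourcing the pool from the citing paper's submission fee keeps the capital flow closed, as noted in Section 3.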
For each manuscript, we create a forum, in which written feedback can be provided to the authors of the paper, and in which debate can take place. We are experimenting with various kinds of forum. For example, to encourage concise comments on specific aspects of a manuscript, we are implementing a ticketing system similar to those used to track tasks within software development teams and help-desk systems. Each ticket can be categorised as a simple problem (such as a spelling or typing error), a more major kind of problem to do with the scientific content, or praise. We believe this sort of feedback will be more helpful to authors than lengthy, monolithic reviews. The system should also help to limit duplicated feedback from multiple participants.
Our implementation allows for, but does not mandate, so-called Liquid Publications, which are manuscripts that can be updated to reflect new findings or to correct errors and omissions. We note our rating mechanism is ideally suited to such publications. If a change is made to a publication that the peer community deems to improve it, an upward adjustment in the share price should be expected; if a modification is deemed to detract from the publication, a downward adjustment should be expected. These mutable publications take advantage of the benefits that digital publishing brings. In the current implementation, the citations made by a manuscript must remain fixed (after a short initial "drafting" period), since the citations are the basis for the payment of dividends. Also, Citemine delegates the task of version control to the authors; that is, we have not yet implemented a version control system for the manuscripts that are inserted into Citemine.
In the event that Citemine gains traction within the research community, thereby accumulating a critical mass of data, it will be able to offer researchers a novel search mechanism, in which search results are ordered by their share price (for manuscripts) or portfolio value (for researchers). Thus, important new scientific results can be quickly integrated into the working body of knowledge. Valuable research is discovered quickly, and there is no bias towards older (and, therefore, more highly cited) work. Rather, it is the value of the scientific content of a manuscript, as determined by the community of researchers, that determines whether it appears near the top or bottom of a set of search results.
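Ordering search results by share price is then a straightforward sort over the matching manuscripts (the field names and prices here are hypothetical):

```python
def rank_results(matches):
    """Order matching manuscripts by current share price, highest first."""
    return sorted(matches, key=lambda m: m["share_price"], reverse=True)

matches = [
    {"title": "Old, highly cited survey", "share_price": 4.0},
    {"title": "Important new result", "share_price": 9.5},
    {"title": "Incremental follow-up", "share_price": 1.2},
]
ranked = rank_results(matches)
```

Because the share price is a leading indicator, a valuable new result can outrank an older, more heavily cited work from the moment the market values it.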
Citemine does not monitor insider trading. In fact, we encourage it. Let us consider the circumstances under which insiders have an advantage. The authors of a study are presumably best placed to later find flaws in that study. If such flaws are found, or if the authors conduct another study that refutes the findings of the initial study, then they will have a first-mover advantage in unloading their shares in the paper that reports the results of the initial study, placing downward pressure on its share price. This is precisely what we want to happen. Allowing this sort of insider trading expedites the flow of information in the market. Furthermore, it incentivises the authors to correct their own erroneous findings. While insider trading can impart a small advantage to the authors of a paper, we believe the benefits far outweigh the costs. The foremost goal of Citemine is to expedite the dissemination of valuable research, and to allow researchers to find this research quickly; encouraging insider trading in this context can only help to serve that goal.
At the present time, Citemine does not allow short selling. That is, participants cannot sell shares they do not own. In addition, participants cannot borrow reals from other participants or from the system itself. However, in the case that a participant has somehow managed to spend all their reals and therefore cannot afford to submit a manuscript they have just written (in other words, if they are bankrupt), they are free to convince their colleagues of the value of this new paper, and have these colleagues foot the bill for submission.7 Thus, even participants whose "bank balance" is currently very low have a means to submit manuscripts.
One problem faced by Citemine is the possibility that malicious users could create fake accounts whose sole purpose is to direct reals to certain other accounts. The reputation of these fake accounts is of no consequence to their creator, so a malicious user can submit manuscripts that contain no scientific content but which cite the work of the malicious user's friends and colleagues, as well as the work submitted under the malicious user's actual account. We believe that the prospect of being caught playing this game is a strong deterrent, since it may have disciplinary ramifications far beyond Citemine's system of reputation and rankings.
A potentially more damaging scenario is one in which non-researchers join Citemine and commit the same kinds of offences described above. In this case, the malicious user is not deterred by the prospect of being caught, as their research reputation is of no consequence to them. At present, Citemine does not guard against this malicious use. However, there are several partial solutions to the problem, which we may choose to implement in the future. By making Citemine participation invitation-only, we can restrict participation to the peers of those already using the system. The trail of invitations can then be traced back to the person who invited a user who turns out to be malicious, and investigations can be launched from there. A second approach is to levy a fee (in real-world currency such as US dollars) for participation in Citemine. This should provide a barrier to entry for illegitimate participants. Both approaches, however, may adversely affect participation numbers.
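The invitation trail can be sketched as a simple walk over inviter links; the account layout below is hypothetical, as the paper does not describe Citemine's actual schema:

```python
# Hypothetical account records: each user stores who invited them
# (None for founding accounts).
USERS = {
    "root":    {"invited_by": None},
    "alice":   {"invited_by": "root"},
    "bob":     {"invited_by": "alice"},
    "mallory": {"invited_by": "bob"},
}

def invitation_trail(users, user_id):
    """Return the chain of inviters from a flagged account back to a founder."""
    trail = [user_id]
    while users[user_id]["invited_by"] is not None:
        user_id = users[user_id]["invited_by"]
        trail.append(user_id)
    return trail

# Tracing a malicious account back through its inviters:
print(invitation_trail(USERS, "mallory"))  # → ['mallory', 'bob', 'alice', 'root']
```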
Citemine currently encapsulates only the core mechanism described in this paper. We are in the process of implementing features that add value to the basic concept. These include an updated search engine that considers aspects other than share price when ranking search results, the ability for participants to add their own tags (keywords) to manuscripts written by other researchers, and a manuscript recommendation system that suggests personalised reading material for each participant.
In this paper we have described the design and early implementation of a trading system for research manuscripts that aims to increase efficiencies in science communication. A product of the method described is a potentially useful leading metric that gives a direct indication of the research community's valuation of a manuscript and its authors. We show how this method can be derived from traditional peer reviewing processes, and then show how the method can be generalised to a stock exchange-like system. The method has several interesting properties, including providing incentives for thorough self-review, providing incentives to peer researchers to back only those papers likely to be frequently cited in the future regardless of who has written these papers, and the emergence of a metric resistant to citation collusion. We note, again, that the metric introduced by our method is not intended to be used in isolation from other measures of research impact; rather, it is a metric that provides a different vantage point from which to assess research quality. The method has been implemented in the form of Citemine, a web-based exchange for scientific manuscripts.
There is much scope for future research and implementation work. We describe here several lines of work we are pursuing or wish to pursue.
This paper has described the operation of our market mechanism informally, which has the benefit of being widely understood. However, an informal formulation does not enable us to easily answer questions such as: what happens if we change the cost of manuscript submission? How does our mechanism compare with existing mechanisms used in other sorts of markets? A formal mathematical description will allow us to find answers to these questions. A simple first step would be to define a discounting formula that gives a fair price for shares in a given manuscript based on the number of citations it is expected to receive.
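One plausible form for such a discounting formula (every name and parameter value here is our own illustrative assumption, not part of the Citemine design) treats the fair share price as the net present value of the manuscript's expected citation stream:

```python
def fair_price(expected_citations, value_per_citation=1.0, discount_rate=0.05):
    """Net present value of an expected citation stream.

    expected_citations: forecast citations in years 1, 2, ...
    Citations further in the future contribute less to today's price.
    """
    return sum(
        c * value_per_citation / (1 + discount_rate) ** t
        for t, c in enumerate(expected_citations, start=1)
    )

# A paper expected to attract 10, 8 and 6 citations over three years:
print(round(fair_price([10, 8, 6]), 2))  # → 21.96
```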
An obvious addition to the system just described is the introduction of derivative securities. During our focus groups, one concern of the participants was that it is not possible to signal support for a researcher except through the act of purchasing shares in their manuscripts. One possible way to overcome this problem is to introduce researcher futures on top of the basic market described in this paper. These futures would be, in essence, a prediction market for divining the future publishing and reviewing reputation of the participants in the system, thereby adding another instrument to the toolkit of those wishing to assess research quality.
We are in the process of adding to Citemine the ability to co-ordinate traditional peer review processes. It is important to note that the mechanism described in this paper is not mutually exclusive with traditional peer review; in fact, we believe our method can play an important role for conference program committees and journal editorial boards. Conferences and journals typically receive many more papers than they can accept. Top conferences in computer science, for example, maintain acceptance rates in the low teens (or even lower). The peer review process can be coupled with Citemine to reduce the number of submitted papers, thereby reducing reviewing burden. Upon submission to a publishing venue via Citemine, the authors must pay the submission cost (in reals). They will do this only if they believe there is a strong chance of their paper being accepted by the publishing venue. After the traditional peer review is carried out, the paper can be traded in Citemine like any other. The authors of papers rejected by the traditional peer review have the option of leaving a copy of their paper on Citemine as a pre-print or technical report, which may then be traded as usual.8
In this paper we have focused on the application of the underlying mechanism to scientific publishing. We are investigating the potential for applying the mechanism to other domains. Perhaps our first task is to identify the set of domains that bear the characteristics that make them amenable to the mechanism described in this paper. Essentially, the mechanism requires a set of entities that reference each other in some way, and for which we wish to ascertain the quality of the entities and their providers (where quality is defined in a domain-specific manner). One such domain that we have already identified is web search, or search within any subset of the web such as the blogosphere or news web sites. Search engines typically use a number of algorithms to decide the order of results shown to the end user, and each algorithm is given a weight that controls how strongly it affects the overall ranking. These algorithms include eigenvector centrality measures such as PageRank™, term frequency, term location and methods that take user feedback into account (using, for example, artificial neural networks). The idea of social search, or search 2.0, is also gaining momentum. This concept involves having humans in the (search) loop. It encompasses approaches in which users can customise their search results (for example, Rollyo), and in which results are influenced by one's social network through analysis of shared bookmarks (for example, Gravee) and so on. However, we are not aware of any search engine, traditional or social, that takes predictive metrics, such as the one described in this paper, into account. By incorporating our metric, which harnesses the wisdom of the crowd, search engines could deliver results that are more attuned to what is important right now. Importantly, a document need not wait for months or years to accumulate inbound links before being identified as high quality. We currently apply this method of ordering search results within Citemine.
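The weighted blending of ranking signals described above can be sketched as follows; the signal names, weights and scores are illustrative only, not any search engine's actual configuration:

```python
# Hypothetical weights: a market-derived predictive score is blended
# with classical signals, all pre-normalised to [0, 1].
WEIGHTS = {"pagerank": 0.3, "term_frequency": 0.3, "market_price": 0.4}

def combined_score(signals, weights=WEIGHTS):
    """Weighted sum of per-document ranking signals."""
    return sum(w * signals.get(name, 0.0) for name, w in weights.items())

old_doc = {"pagerank": 0.9, "term_frequency": 0.5, "market_price": 0.1}
new_doc = {"pagerank": 0.2, "term_frequency": 0.5, "market_price": 0.9}

# The new document outranks the well-linked old one because the market
# already values it highly; it need not wait to accumulate inbound links.
print(combined_score(new_doc) > combined_score(old_doc))  # → True
```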
Another potential use is in distributed computing, where applications are often composed of multiple, loosely coupled objects or services. The composition of these services is achieved through object references, which are analogous to citations in manuscripts or hyperlinks in the web. As such, there is scope for investigating the extent to which our method can be applied in this domain. In these sorts of environments it is often difficult to assess the quality of services and the reputation of the service providers. It is a problem of trust. We will investigate the possibility of applying our method to the problem of trust and reputation in distributed computing environments. We have a particular interest in applying this work to ubiquitous computing (which goes by other names such as pervasive computing, Everyware and the Internet of Things), in which many distributed computing problems arise, including the problem of trust.
There are some problems in scientific publishing that our approach does not address. Among these is the problem of honorary authorship, whereby eminent professors are appended to the list of authors on a manuscript regardless of whether they contributed to its scientific content. Our implementation at least encourages these honorary authors to carefully review the works to which they put their name, since as authors they must pay a portion of the submission cost. Another problem that our mechanism does not overcome is ceremonial citation, whereby an author cites the works of an influential researcher for no valid reason. A potential solution might be to link the cost of submission to the number of citations, thereby encouraging authors to cite only relevant work. This may have the undesired side-effect of encouraging authors to omit even relevant citations; however, there is an opposing force in play here: peer researchers may be reluctant to support a paper that does not contain the necessary citations, which detracts from the share price of that paper. Clearly, these subtleties require deeper investigation, which will be aided by a formal model.
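A citation-linked submission cost could take a simple linear form; the base cost and per-citation increment below are purely illustrative assumptions:

```python
def submission_cost(num_citations, base_cost=10.0, per_citation=0.5):
    """Submission cost (in reals) that grows with the length of the
    reference list, discouraging purely ceremonial citations."""
    return base_cost + per_citation * num_citations

print(submission_cost(20))  # a paper with 20 references costs 20.0 reals
```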
In conclusion, we encourage researchers to take advantage of the significant benefits that open web-based publishing offers. Citemine, we believe, makes it possible to harness the rapid dissemination of scientific knowledge afforded by the web, while retaining a high degree of quality control and injecting a measure of accountability for authors and their peers. It is our hope that Citemine, and other related efforts, will encourage researchers towards a new, more open, model of science communication. We note, finally, that others are beginning to think along similar lines (most notably Crowcroft et al., 2009), which we choose to interpret as a sign that our proposal is something more than a mere thought experiment.
Since the conception of the idea presented in this paper during the latter months of 2005, a great many people have provided help in various forms, and I gratefully acknowledge their contributions here.
First, there are a number of people at NICTA I wish to thank. Jonathan Thompson has done the bulk of the Citemine implementation; his work is ongoing. I am indebted to many within the NICTA leadership team who allowed me to pursue this idea despite its tenuous relevance to my day job. In particular, Chris Scott provided comments and feedback when this idea was in its infancy, and his support was crucial in securing funding for this project. Mike Rosa helped to refine the form of our implementation, and was among the first to begin thinking of applications for our mechanism beyond academic publishing. I am thankful to many other colleagues at NICTA and affiliated organisations, including Charles Gretton (now at the University of Birmingham), Paul Hoff, Jadwiga Indulska, Brian Menzies, Silvia Richter, Conrad Sanderson, Mark Staples, Jim Steel (now at Queensland University of Technology), Bob Williamson and Ryan Wishart (now at Imperial College, London) for their insightful comments.
Second, there were many outside of NICTA who helped to improve this manuscript. Karen Henricksen encouraged me to pursue this idea, and provided objective criticism on a daily basis. She also provided feedback on early drafts of this article. Anthony "AJ" Towns provided reams of comments and suggestions (including the notion of a discounting formula), which prompted many changes to the original drafts. His in-depth understanding of the underlying mechanics of the proposed solution proved invaluable, and I am most thankful to him. I must also thank Jon Crowcroft, Joshua Gans, Michael Nielsen and Mary O'Kane, who asked insightful questions and directed me to related efforts, many of which are cited in the text.
I also extend my gratitude to the anonymous focus group and survey participants, who willingly gave of their time to help shape our implementation, and who suggested a number of potential features and extensions to Citemine.
Any errors that remain in this manuscript are my own, and none of those who have been acknowledged above bear any responsibility for them.
NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program; and the Queensland Government.
Bergstrom, Carl T., Jevin D. West, and Marc A. Wiseman. 2008. The Eigenfactor Metrics. J. Neurosci. 28, no. 45 (November 5): 11433-11434. doi:10.1523/JNEUROSCI.0003-08.2008. http://www.jneurosci.org.
Brin, Sergey, and Lawrence Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Proceedings of the seventh international conference on World Wide Web 7: 107-117. http://portal.acm.org/citation.cfm?id=297805.297827.
Casati, Fabio, Fausto Giunchiglia, and Maurizio Marchese. 2007. Liquid Publications: Scientific Publications meet the Web (December 1). http://eprints.biblio.unitn.it/archive/00001313/.
Crowcroft, Jon, S. Keshav, and Nick McKeown. 2009. Scaling the academic publication process to internet scale. Commun. ACM 52, no. 1: 27-30. doi:10.1145/1435417.1435430. http://portal.acm.org/ft_gateway.cfm?id=1435430&type=html.
Dasgupta, Partha, and Paul A. David. 1994. Toward a new economics of science. Research Policy 23, no. 5. Research Policy: 487-521. http://ideas.repec.org/a/eee/respol/v23y1994i5p487-521.html.
David, Paul A. 2008. The Historical Origins of `Open Science': An Essay on Patronage, Reputation and Common Agency Contracting in the Scientific Revolution. Capitalism and Society 3, no. 2 (October 24). doi:10.2202/1932-0213.1040. http://www.bepress.com/cas/vol3/iss2/art5.
Egghe, Leo. 2006. Theory and practise of the g-index. ScientificCommons. http://hdl.handle.net/1942/981.
Gans, Joshua S, and George B Shepherd. 1994. How Are the Mighty Fallen: Rejected Classic Articles by Leading Economists. Journal of Economic Perspectives 8, no. 1. Journal of Economic Perspectives: 165-79. http://ideas.repec.org/a/aea/jecper/v8y1994i1p165-79.html.
Garfield, Eugene. 1979. Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. In Mapping the structure of science, 98-147. New York: John Wiley & Sons, Inc.
Garfield, Eugene. 2006. The History and Meaning of the Journal Impact Factor. JAMA 295, no. 1 (January 4): 90-93. doi:10.1001/jama.295.1.90. http://jama.ama-assn.org.
Hanson, Robin. 1990. Could Gambling Save Science? Eighth International Conference on Risk and Gambling (July). http://hanson.gmu.edu/gamble.html.
Hirsch, J. E. 2005. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America 102, no. 46 (November 15): 16569-16572. doi:10.1073/pnas.0507655102. http://www.pnas.org/content/102/46/16569.abstract.
Lawrence, Steve, C. Lee Giles, and Kurt Bollacker. 1999. Digital libraries and autonomous citation indexing. IEEE Computer 32: 67-71. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.1607.
Masum, Hassan, and Yi-Cheng Zhang. 2004. Manifesto for the Reputation Society (July 5). http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/1158/1078.
Meho, Lokman. 2007. The Rise and Rise of Citation Analysis. Physics World 20 (January): 32-36. http://physicsworldarchive.iop.org/summary/pwa-xml/20/1/phwv20i1a33.
Mele, Salvatore, David Dallman, Jens Vigen, and Joanne Yeomans. 2006. Quantitative Analysis of the Publishing Landscape in High-Energy Physics. cs/0611130 (November 26). http://arxiv.org/abs/cs/0611130.
Page, Lawrence, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank Citation Ranking: Bringing Order to the Web. Stanford InfoLab. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1768.
Riyanto, Yohanes E., and I. Hakan Yetkiner. A Market Mechanism for Scientific Communication: A Proposal. SSRN eLibrary. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=370496.