Biases, deliberate delay, repeated rejection – peer review has its problems but it is a crucial part of research dissemination, writes Rebecca Lawrence, who explains that open publication of all good science followed by open peer review is the key to future publishing.
Discontent with the traditional peer review system and the problems it brings has been building for many years. Opinions range from ‘peer review is broken and ultimately unfixable’ to the view that, despite its problems, nothing better exists and the current process should therefore be retained. The debate has reached such a level that there was even a UK Science & Technology Parliamentary Select Committee investigation into the peer review system in 2011. Amongst other points, it concluded that:
‘Innovative approaches—such as the use of pre-print servers, open peer review, increased transparency and online repository-style journals—should be explored by publishers.’
‘…the growth of post-publication peer review and commentary represents an enormous opportunity for experimentation with new media and social networking tools’.
The current system
What is clear is that in this age of electronic communication and immediate dissemination of information, it is unacceptable that peer review can add months or even years to the publication process for researchers who are ready to share their findings with other researchers, clinicians and patients who might benefit from them.
I should be clear that I think peer review is a crucial part of research dissemination – we just need to do it differently, to avoid the many problems caused by anonymous pre-publication peer review: referee and editor biases; deliberate delays in reviewing to enable competitor articles to publish first; and delays caused by repeated rejection, whether because an article does not fit a journal’s own perceived level of quality or because of subjective views on how interesting or novel a finding is. To the extent that peer review is intended to block publication of ‘sub-standard’ science, it does not actually work in its current form: if an author is persistent enough, the majority of articles will eventually be published in a ‘peer reviewed’ journal somewhere. Equally, once a broader audience beyond the closed group of referees views an article after publication, they often identify problems that the referees missed, leading to ‘Letters of Concern’ that are frequently published in other journals and are not linked to the original paper.
Peer review with a difference
The F1000 Research publishing program, with its post-publication peer review model, was launched to address these many points. Authors submit their article, which then undergoes a fast internal pre-publication check of content, quality, tone and format, ensuring the article is intelligible and written in good English.
The article is then published, clearly labelled as ‘awaiting peer review’, and is immediately sent to 3–5 expert referees. These referees are asked to do two things: state whether the article ‘seems OK’ (within a matter of days), and then provide a more standard referee report (within a couple of weeks). Both the ‘seems OK’ status and the referee report are published immediately and signed openly by the referees.
Authors are encouraged to revise their articles based on these referee reports, with each article version being linked to its predecessor(s) and individually citable. Registered users (whom we can identify as scientists) are also encouraged to comment. Our first articles have now been published on our preliminary consultation site.
Another significant aim of F1000 Research is to support and encourage the publication of the data behind the results, to enable both reproducibility and reuse. This includes separate data articles (datasets with associated protocol information). Post-publication refereeing is particularly well suited here, as it is often difficult to know whether datasets are ‘right’ until other scientists have had an opportunity to use and work with them.
Solutions to some challenges
We believe there are many advantages to this completely transparent approach to peer review, and we have developed several additional features to address some of the issues it raises. In conjunction with our Advisory Panel and many major indexers, we have created a novel citation format that includes the referee status details and article version number, so that the status is clear even when articles are cited in CVs, grant documentation and elsewhere. We have also agreed with these indexers that articles will only be indexed once they have received at least two positive reviews. Articles deemed poor quality by all the reviewers will therefore not be indexed, and we will remove them from the default search on our site.
We anticipate that the transparency of our approach will act as a strong disincentive to the submission of poor-quality work, which our immediate publication model might otherwise be assumed to encourage. No researcher benefits from having his or her work openly criticised, and a citation that clearly shows referees judged the work to be of poor quality is unusable. For this simple reason, we anticipate that F1000 Research is likely to receive fewer sub-standard submissions than journals using the standard closed pre-publication model.
Furthermore, with the current system, an article can be rejected by numerous reviewers for a whole host of journals before finally being accepted in a ‘peer reviewed’ journal, and the reader is none the wiser that perhaps as many as a dozen previous reviewers were unhappy with the work. With F1000 Research, it will be immediately obvious if our reviewers feel that an article is scientifically unsound, saving the time and effort of many further reviewers.
We are pleased by the level of support, and indeed excitement, we have seen from our Advisory Panel and Editorial Board members, as well as many others, for our approach. We recognise that every peer review model has its downsides, but most people seem to agree that the F1000 Research model has fewer downsides than the traditional approach. We have soft-launched first so that we can test our model and amend and fine-tune it as necessary, and we have in fact already started to do this as we learn from our first articles.
Many have commented that the timing of the F1000 Research launch is perfect. This year has seen an unprecedented number of developments that have solidified support for substantial change in the way things are done, for example the Research Works Act at the beginning of the year, the many universities (e.g. Harvard University, UCSF, etc) strongly urging their researchers towards open access, and the UK Government’s announcement this month that all publicly funded research will be published open access by 2014.
We believe that the scientific publishing world is ready to take its next step into further openness by supporting open publication of all good science followed by open peer review. Many other publishers clearly agree that current accepted models are broken and are themselves looking at other alternative approaches. It is an exciting time to be conducting our own experiments!
Note: This article gives the views of the author, and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.