Both the ‘green’ and the ‘gold’ models of open access tend to preserve the world of academic journals, where anonymous reviewers typically dictate what may appear. David Gauntlett looks forward to a system which gets rid of them altogether.

Every week there’s something new in the open access debate. A couple of weeks ago the Finch report concluded that all publicly funded research should indeed be made available free online (hurray!). But it favoured the so-called ‘gold’ model of open access, in which the highly profitable academic journal industry carries on as normal, but switches its demand for big piles of cash away from library journal subscriptions and over to authors themselves – or their institutions (boo!). Campaigners such as Stevan Harnad questioned why the Finch committee had not favoured the ‘green’ model, where authors put copies of their articles in free-to-access online repositories – the answer being, it was assumed, a successful blitz of lobbying by the publishing industry.

The ‘green’ model, which favours the interests of society over the interests of publishers, is clearly the best option. But whichever solution prevails, the promise of straightforward free access to all this research is exciting. To be honest, though, I am most enthusiastic about open access as a stepping stone on the way towards a situation where we get rid of academic publishers altogether, and shift to the ‘publish, then filter’ model.

If you’re not sure what ‘publish, then filter’ means, then let me explain.

Publishing things used to be an expensive business – getting a text typeset, printed and (in particular) distributed to readers, libraries or bookstores involved an enormous amount of effort. So it was rational to be very cautious and selective about what would be published. Filtering therefore had to be done by a small number of gatekeepers on behalf of everybody else.

But we no longer live in that world. Today, an author can make their text look presentable, and pop it on the Web for anyone to access, very easily. So all of the previous assumptions can be turned on their head. This doesn’t mean that researchers will suddenly publish a flood of random jottings – authors, mindful of their own reputations, will hopefully prepare their texts carefully before release.

But once they’ve written a nice article, why can’t we just access the thing straight away? The author can put the text online, let people in their networks know about it (via a blog, Twitter, or announcement on an email list), and interested people will see it and, if they find it valuable – or just think that it looks potentially valuable – will share it with others.

Two obvious good things about this model are:

  • it’s immediate (rather than the standard model, where you wait two years for the thing to appear);
  • it cuts out the process of pre-publication ‘peer review’, in which anonymous random people force you to make pointless changes to your carefully-crafted text.

‘Publish, then filter’ isn’t a new idea. It’s one of the most basic ideas that got everybody excited about the Web in the first place. The process of people being able to publish whatever they like, without gatekeepers, and then drive it to broader attention, was discussed in the book Web Studies, which I edited and contributed to in the late 1990s and published in 2000, when dinosaurs roamed the earth – and although that volume hopefully contained some original insights, that was not one of them.

Clay Shirky popularised the elegant ‘publish, then filter’ formulation in his book Here Comes Everybody, published in 2008, but he had been using the phrase for many years before that. In 2002, he told an audience at the BBC:

“The order of things in broadcast is ‘filter, then publish’. The order in communities is ‘publish, then filter’. If you go to a dinner party, you don’t submit your potential comments to the hosts, so that they can tell you which ones are good enough to air before the group, but this is how broadcast works every day. Writers submit their stories in advance, to be edited or rejected before the public ever sees them. Participants in a community, by contrast, say what they have to say, and the good is sorted from the mediocre after the fact.

“Media people often criticize the content on the internet for being unedited, because everywhere one looks, there is low quality — bad writing, ugly images, poor design. What they fail to understand is that the internet is strongly edited, but the editorial judgment is applied at the edges, not the center, and it is applied after the fact, not in advance. Google edits web pages by aggregating user judgment about them, Slashdot edits posts by letting readers rate them, and of course users edit all the time, by choosing what (and who) to read.”

A typical objection to this model is: ‘Well, that’s not going to work. At least journals sort out the better-quality work from the rubbish stuff… you couldn’t do without peer review. How would we know what to read, with so much stuff out there?’

This sounds like a rational worry. But in fact there are publishing worlds which already do fine without peer review. One example is blogs. Most blogs are just published, with no gatekeeper editors. So the question is: how do you know what to read, in a world of so many blogs? But it’s not really that bewildering or difficult, is it? You follow recommendations from people you know and/or trust on social media (or in real life); and you can, if you want, look at reputational indicators, such as the prestige of the places the writers are employed at, or where the blog is published. That works fine really.

If you were to pick a blog at random, you might find it to be less than brilliant; but you could say exactly the same about academic journals, or academic journal articles, which are also bewilderingly numerous and often not that great.

And in fact, as is becoming increasingly well-known, a version of the ‘publish, then filter’ model is already in operation for some open access science journals. As Mike Taylor explains in this blog post, journals such as PLoS ONE only check that papers are ‘technically sound’, and then make them openly available so that the whole community of interested researchers (potentially) can do the work of picking out and circulating the articles which they find to be interesting and innovative. Similarly, in ‘Time to review peer review’, Andrew Pontzen notes that:

“These days most physicists download papers from, a site which hosts papers regardless of their peer-review status. We skim through the new additions to this site pretty much every day, making our own judgements or talking to our colleagues about whether each paper is any good. Peer-review selection isn’t a practical priority for a website like, because there is little cost associated with letting dross rot quietly in a forgotten corner of the site. Under a digital publication model, the real value that peer review could bring is expert opinion and debate; but at the moment, the opinion is hidden away or muddled up because we’re stuck with the old-fashioned filtration model.”

Pontzen proposes that a journal should become more like a curated online platform, where “the content of the paper is the preserve of the authors, but is followed by short responses from named referees, opening a discussion which anyone can contribute to”. This sounds so much more appealing than the awful, slow process we have at the moment – especially for researchers in the humanities and social sciences, where (in my experience) anonymous reviewers make insecure demands for more jargon, or trivial and irrelevant details, or references to their own hobby-horses which don’t have anything to do with the intention of the article, slowing down the publication process by months whilst rarely making a positive difference to the articles themselves. This is one of the reasons why the journal industry’s claims of ‘added value’ are so nauseating.

Returning to the open access debate, I was initially surprised that the Wellcome Trust came out in support of the Finch committee’s ‘gold’ access model (authors pay publishers) rather than the ‘green’ model (authors put published articles online), even if it cost more. You might expect that the Wellcome Trust would be looking to save money. But their view was based on the understandable principle that they want research to appear as quickly as possible, and with no restrictions. This would happen under the ‘gold’ model: having got their cash, the publishers would be happy to make things available online quickly and would not prevent data mining. This speed and flexibility is good for science. The ‘green’ model, meanwhile, tends to be based on the idea that academic journals would still exist, and that researchers would put their work into online repositories after an embargo period of 6 or 12 months, and might still put restrictions on access for data mining. So the ‘green’ solution turns out to be a bit of a messy fudge – you can understand why the Wellcome Trust might prefer to fork out for a faster, unrestricted service.

But the ‘publish, then filter’ model solves that one as well. As a publishing model it’s immediate, it’s as unrestricted as you like, and it’s cheaper by several million pounds.

The academic community – or rather, different academic communities – would need to develop tools that would support the process of reviewing and rating research and research articles; but that’s ok – many online platforms have worked out decent ways to do that already.

I don’t think the ‘publish, then filter’ model will become dominant very soon. But these movements reach tipping points quicker than one might expect – a few years ago, for instance, who would have thought that the government and the research councils would be so strongly advocating the principle of open access? So let’s skip through this publisher-preserving phase of open access as quickly as possible, please, and move on to a publishing model suitable for this century.
