Both the ‘green’ and the ‘gold’ models of open access tend to preserve the world of academic journals, where anonymous reviewers typically dictate what may appear. David Gauntlett looks forward to a system which gets rid of them altogether.
Every week there’s something new in the open access debate. A couple of weeks ago the Finch report concluded that all publicly-funded research should indeed be made available free online (hurray!). But it favoured the so-called ‘gold’ model of open access, in which the highly profitable academic journal industry carries on as normal, but switches its demand for big piles of cash away from library journal subscriptions and over to authors themselves – or their institutions (boo!). Campaigners such as Stevan Harnad questioned why the Finch committee had not favoured the ‘green’ model, where authors put copies of their articles in free-to-access online repositories – the answer being, it was assumed, a successful blitz of lobbying by the publishing industry.
The ‘green’ model, which favours the interests of society over the interests of publishers, is clearly the best option. But whichever solution prevails, the promise of straightforward free access to all this research is exciting. To be honest, though, I am most enthusiastic about open access as a stepping stone on the way towards a situation where we get rid of academic publishers altogether, and shift to the ‘publish, then filter’ model.
If you’re not sure what ‘publish, then filter’ means, then let me explain.
Publishing used to be an expensive business – getting a text typeset, printed and (in particular) distributed to readers, libraries or bookstores involved an enormous amount of effort. It was therefore rational to be very cautious and selective about what got published, and filtering had to be done by a small number of gatekeepers on behalf of everybody else.
But we no longer live in that world. Today, an author can make their text look presentable, and pop it on the Web for anyone to access, very easily. So all of the previous assumptions can be turned on their head. This doesn’t mean that researchers will suddenly publish a flood of random jottings – authors, mindful of their own reputations, will hopefully prepare their texts carefully before release.
But once they’ve written a nice article, why can’t we just access the thing straight away? The author can put the text online, let people in their networks know about it (via a blog, Twitter, or announcement on an email list), and interested people will see it and, if they find it valuable – or just think that it looks potentially valuable – will share it with others.
Two obvious good things about this model are:
- it’s immediate (rather than the standard model, where you wait two years for the thing to appear);
- it cuts out the process of pre-publication ‘peer review’, in which anonymous random people force you to make pointless changes to your carefully-crafted text.
‘Publish, then filter’ isn’t a new idea. It’s one of the most basic ideas that got everybody excited about the Web in the first place. The process of people being able to publish whatever they like, without gatekeepers, and then drive it to broader attention, was discussed in the book Web Studies, which I edited and contributed to in the late 1990s and published in 2000, when dinosaurs roamed the earth – and although that volume hopefully contained some original insights, that was not one of them.
Clay Shirky popularised the elegant ‘publish, then filter’ formulation in his book Here Comes Everybody, published in 2008, but had been using the phrase for many years before that. In 2002, he told an audience at the BBC:
“The order of things in broadcast is ‘filter, then publish’. The order in communities is ‘publish, then filter’. If you go to a dinner party, you don’t submit your potential comments to the hosts, so that they can tell you which ones are good enough to air before the group, but this is how broadcast works every day. Writers submit their stories in advance, to be edited or rejected before the public ever sees them. Participants in a community, by contrast, say what they have to say, and the good is sorted from the mediocre after the fact.
Media people often criticize the content on the internet for being unedited, because everywhere one looks, there is low quality — bad writing, ugly images, poor design. What they fail to understand is that the internet is strongly edited, but the editorial judgment is applied at the edges, not the center, and it is applied after the fact, not in advance. Google edits web pages by aggregating user judgment about them, Slashdot edits posts by letting readers rate them, and of course users edit all the time, by choosing what (and who) to read.”
A typical objection to this model is: ‘Well, that’s not going to work. At least journals sort out the better-quality work from the rubbish stuff… you couldn’t do without peer review. How would we know what to read, with so much stuff out there?’
This sounds like a rational worry. But in fact there are publishing worlds which already do fine without peer review. One example is blogs. Most blogs are just published, with no gatekeeper editors. So the question is: how do you know what to read, in a world of so many blogs? But it’s not really that bewildering or difficult, is it? You follow recommendations from people you know and/or trust on social media (or in real life); and you can, if you want, look at reputational indicators, such as the prestige of the places the writers are employed at, or where the blog is published. That works fine really.
If you were to pick a blog at random, you might find it to be less than brilliant; but you could say exactly the same about academic journals, or academic journal articles, which are also bewilderingly numerous and often not that great.
And in fact, as is becoming increasingly well-known, a version of the ‘publish, then filter’ model is already in operation for some open access science journals. As Mike Taylor explains in this blog post, journals such as PLoS ONE only check that papers are ‘technically sound’, and then put them into the public domain so that the whole community of interested researchers (potentially) can do the work of picking out and circulating the articles which they find to be interesting and innovative. Similarly, in ‘Time to review peer review’, Andrew Pontzen notes that:
“These days most physicists now download papers from arxiv.org, a site which hosts papers regardless of their peer-review status. We skim through the new additions to this site pretty much every day, making our own judgements or talking to our colleagues about whether each paper is any good. Peer-review selection isn’t a practical priority for a website like arxiv.org, because there is little cost associated with letting dross rot quietly in a forgotten corner of the site. Under a digital publication model, the real value that peer review could bring is expert opinion and debate; but at the moment, the opinion is hidden away or muddled up because we’re stuck with the old-fashioned filtration model.”
Pontzen proposes that a journal should become more like a curated online platform, where “the content of the paper is the preserve of the authors, but is followed by short responses from named referees, opening a discussion which anyone can contribute to”. This sounds so much more appealing than the awful, slow process we have at the moment – especially for researchers in the humanities and social sciences, where (in my experience) anonymous reviewers make insecure demands for more jargon, or trivial and irrelevant details, or references to their own hobby-horses which don’t have anything to do with the intention of the article, slowing down the publication process by months whilst rarely making a positive difference to the articles themselves. This is one of the reasons why the journal industry’s claims of ‘added value’ are so nauseating.
Returning to the open access debate, I was initially surprised that the Wellcome Trust came out in support of the Finch committee’s ‘gold’ access model (authors pay publishers) rather than the ‘green’ model (authors put published articles online), even if it cost more. You might expect that the Wellcome Trust would be looking to save money. But their view was based on the understandable principle that they want research to appear as quickly as possible, and with no restrictions. This would happen under the ‘gold’ model: having got their cash, the publishers would be happy to make things available online quickly and would not prevent data mining. This speed and flexibility is good for science. The ‘green’ model, meanwhile, tends to be based on the idea that academic journals would still exist, and that researchers would put their work into online repositories after an embargo period of 6 or 12 months, and might still put restrictions on access for data mining. So the ‘green’ solution turns out to be a bit of a messy fudge – you can understand why the Wellcome Trust might prefer to fork out for a faster, unrestricted service.
But the ‘publish, then filter’ model solves that one as well. As a publishing model it’s immediate, it’s as unrestricted as you like, and it’s cheaper by several million pounds.
The academic community – or rather, different academic communities – would need to develop tools that would support the process of reviewing and rating research and research articles; but that’s ok – many online platforms have worked out decent ways to do that already.
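To make this concrete, here is a deliberately simple sketch of what one such filtering tool might do – not any real platform’s algorithm, just a toy illustration of the principle that recommendations from people you trust can rank published work after the fact. All of the names, trust weights and function names below are invented for the example.

```python
# Toy sketch of post-publication filtering: articles are ranked by
# recommendations, with each recommendation weighted by how much the
# reader trusts the person making it. Unknown recommenders still
# count, but only a little.

from collections import defaultdict


def rank_articles(recommendations, trust):
    """Score articles by trust-weighted recommendations.

    recommendations: list of (recommender, article) pairs
    trust: dict mapping recommender name -> weight between 0 and 1
    Returns (article, score) pairs, highest score first.
    """
    scores = defaultdict(float)
    for recommender, article in recommendations:
        # Recommenders we don't know get a small default weight
        scores[article] += trust.get(recommender, 0.1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    recs = [
        ("alice", "open-access-futures"),
        ("bob", "open-access-futures"),
        ("carol", "jargon-studies"),
    ]
    trust = {"alice": 0.9, "bob": 0.6, "carol": 0.3}
    for article, score in rank_articles(recs, trust):
        print(article, round(score, 2))
```

The point of the sketch is only that ‘filtering’ need not mean gatekeeping: it can be computed from the open, named judgements of a community after publication, exactly as Shirky describes Google and Slashdot doing.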
I don’t think the ‘publish, then filter’ model will become dominant very soon. But these movements reach tipping points quicker than one might expect – a few years ago, for instance, who would have thought that the government and the research councils would be so strongly advocating the principle of open access? So let’s skip through this publisher-preserving phase of open access as quickly as possible, please, and move on to a publishing model suitable for this century.
Thanks for your post, David. Your arguments make eminent sense to me. The true test of a decent article will be if others take it up and use it once it is published.
Under the ‘publish then filter’ model writers will have to engage in very careful self-editing before publishing, however, so as to ensure the highest quality possible. But in assisting with this I can foresee the formation of collectives of academics working in the same field who may decide to group together to provide a pre-publication review process for each other.
This is exactly what we have been working hard to do for a year and a half over at The Comics Grid. It’s meant a lot of work, and it’s also meant that contributors have had to look “under the hood”, so that “careful self-editing” also includes following best practices in media-specific publishing.
Thank you for this great post.
Thanks Deborah, and Ernesto.
I agree that people will have to prepare their texts carefully — and would likely benefit from informal (or even more formal) collectives to support that process. But, that’s what most scholars, and in particular most of the good ones, do anyway, I think.
There will be more badly-edited, badly-written stuff made available, in this model, I think, but as Andrew Pontzen noted in the quote above, there is plenty of space online for unwanted, unloved texts to “rot quietly”. Most of the time you simply wouldn’t encounter them.
For the smallish number of actual geniuses who can’t make reasonable-looking readable documents, well, they’ll have to get help. But if we had to picture a solution to their problem, I don’t think we’d dream up the multimillion-pound academic journal industry (and I know you’re not saying we would)!
David,
I agree that some viable alternative to both the current model and the ‘pay-to-publish’ model is urgently needed. I’m not convinced this is it, however. There are a lot of assumptions in this argument that don’t stand up to scrutiny. For me, the most significant is that we live in a world where everything is equal and in which rational people will naturally gravitate towards the best work, thus allowing ‘the cream’ to rise to the top. But we don’t live in a world like that. We live in a world in which reputation is a key factor and is not equally distributed, providing some individuals with a gravitational pull that exceeds the true importance of their ideas, while other individuals with equally important things to say struggle to even make themselves heard. This is a particularly acute problem for those whose ideas represent a radical challenge to the established orthodoxy of a discipline. It is particularly important that those people are able to find ways of getting their arguments across so that established orthodoxies can be tested. In this model, however, it would be all too easy for those people to be ignored along with all the nutters ‘filtered out’ after publication, leaving disciplines dominated by a relatively small number of people who are well established and whose work does little more than continually reinstate the status quo. Not necessarily the best, just the most powerful.
That this would happen should not be surprising to anyone. What you are suggesting here is really just a version of free market liberalism for ideas and, unfortunately, it is an inherent characteristic of markets that, left unregulated, they tend to produce monopolistic behaviour, as those with more power ruthlessly work to consolidate their status. This may be a rather pessimistic view of the academic character (and I’m certainly not suggesting all scholars are like this) but I’m sure we all know people like this.
[Reply to Mike –]
Hi Mike. This is interesting. But bland reinforcement of the status quo, dominated by those who are already well-known and powerful, is what I believe happens in the journals. The quirky and different voices, meanwhile, you find online and elsewhere. And those people can become visible and talked-about without having to go through the journal ‘screening’ process, which tends to reject material which does not conform to a very conservative bunch of norms.
The model I’m proposing would not be perfect, and you’re right, it’s not like the ‘best’ ideas would magically rise to the top with no friction; but I’m sure it’s better than the one which won’t even accept you have a voice if you don’t talk journalspeak.
Hi David
It’s a difficult problem. I certainly don’t think the current model works particularly well but I also don’t have much faith in the ability of a sort of marketised free for all to deliver what is needed either. I do think there is something to be said for the model adopted by Martin Barker’s journal, Participations. Submissions are peer-reviewed before publication, but not anonymously. I think this imposes a much greater sense of responsibility on reviewers to act objectively and fairly, since they can no longer hide behind the veil of anonymity, and it promotes a real sense of dialogue between reviewers (who are expert in the field) and authors hoping to contribute something valuable to that field. When you submit to that journal you do get a real sense of working with people (rather than being excluded by faceless “gatekeepers”) toward a publication that makes a real contribution.
Mike
Devil’s advocate for a moment…. wouldn’t this proposed journey lead academics and authors down a road to ‘shameless self-promotion’? I could easily see this becoming a matter of how many friends one has on Facebook, with ‘liking’ one’s article becoming a huge issue. Under that system I do not think the best work would make it.
Also, haven’t we been down this road with journalism? The idea was that free and open access would actually increase news – yet the reality is that it has only resulted in reproduced news and, in some cases, the death of investigative journalism.
I also see such a model as prone to a failure to connect work to its theoretical ancestors. Case in point: I just read a book on celebrity in which the author clearly uses Mead’s work (including the same names as his processes) but did not provide one citation to Mead.
For what it is worth-
[Reply to Chris –]
Thanks for the comments. I think professional people will use and recommend work which is useful to them professionally, and so I don’t think it would be like a competition for Facebook friends, which is quite different.
And I think the comparison with journalism is a red herring, because that’s about a situation where newspapers and professional journalists are being decimated, and some people hope that volunteers and enthusiasts will fill the void. But that’s not the case here — academics are (already) employed to do their research, and we’re not talking about replacing those academics with volunteers and enthusiasts, so that seems quite different as well.
Finally, I don’t see any connection between the distribution model and the example you give of someone not acknowledging a theoretical ancestor. The open distribution model would make it easier to access texts generally, so should help the quality of citations. Alas, the fact that you had a book which failed to properly cite Mead doesn’t seem like a powerful critique of the ‘publish, then filter’ model (sorry!).
TEST QUALITY BEFORE CONSUMING
I don’t think we would want our ailments to be treated on the basis of “publish then filter” research (and especially not in the jostle between the publish and the filter). I’m not sure knowledge in other disciplines is that much less important than our health either.
(Nor do all — or even most — scholars and scientists want to make their findings public before their soundness is vetted by their peers. For the exceptions, one can always post one’s unrefereed preprints, as the physicists do.)
The purpose of open access is to free peer-reviewed research from access restrictions, not to free it from peer review.
Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp 103-118. http://cogprints.org/1692/
Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292. http://cogprints.org/1694/
Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998), Exploit Interactive 5 (2000): and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowland & Littlefield. Pp. 235-242. http://cogprints.org/1646/
Harnad, S. (2008) Flight-test before you fly. Comment on “A XXI-century alternative to XX-century peer review”, real-world economics review, issue no. 47, 3 October 2008, pp. 252-253. http://www.paecon.net/PAEReview/issue47/CommentsIettoGillies47.pdf
Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8). http://eprints.ecs.soton.ac.uk/21348/
[Reply to Stevan –]
Nice to hear from Stevan Harnad, one of my heroes! Shame he does not agree. Of course I’m talking about something which goes beyond the open access debate.
And I write from the perspective of the arts, humanities, and social sciences, where I don’t think I’ve ever heard a colleague being pleased about the opportunity to have their ideas ‘tested’ within the anonymous peer review process, which tends to be harsh and unhelpful. They do want to ‘test’ their ideas in the open court of academic opinion, but that’s best done via the ‘publish, then filter’ model, where you can get your ideas out there, and then talk about them in a respectful way with named individuals — which is incredibly helpful — rather than being picked at by anonymous and often quite random people — which is not.
In my own experience, I have had very many healthy discussions about my published work, post-publication, with people who agree and people who disagree, so that definitely happens (and it’s not that I don’t like disagreement … that’s all good). But I’ve not had that experience with pre-publication peer review, apart from when you get helpful little suggestions about things — which are the kinds of constructive comments that academics could solicit from their friends and colleagues prior to publication anyway.
PEER REVIEW VS OPEN PEER COMMENTARY
No one could be more in favour of open peer commentary than I am, having umpired it for a quarter century.
One thing open peer commentary teaches you, however, is the difference between peer review and peer commentary.
But, not to be misunderstood, I’m all for posting unrefereed preprints online to seek commentary before submission for peer review! I do it all the time myself. It’s just that not everyone goes in for that sort of thing!
Peer review is meant to be a filter for those who only want to read (or be read) after vetting by qualified peers.
Great piece, David, plenty of food for thought. The phrase ‘publish, then filter’ is a really good one – summarises things very neatly.
You mention one possible objection, that peer review is needed to sort the wheat from the chaff, and cite blogs and people’s obvious ability to filter and pick and choose which blogs to follow. I guess my choice of blogs and bloggers (including twitter) to follow is based upon their ability to produce content that’s useful (for me), and to curate and share that produced by others who I might not be aware of. If I’ve got this arranged well, anything I ought to read will be brought to my attention, and of course as an active participant in my networks I’ll be doing the same for others.
So far so meritocratic. But I wonder if we can rely upon the wisdom of crowds remaining meritocratic. I think it’s now fairly well known that some political parties and lobbyists use ‘astroturf’ tactics which include setting up blogs and commenting on newspaper articles. There have been a number of occasions where the number of comments appearing almost immediately after the article is published, and the number of ‘likes’ (or whatever) for those comments can only be the result of organised activity, rather than the normal activities and responses of ordinary readers. For me, this is inherently dishonest because it’s a deliberate attempt to give the false impression of consensus around a particular issue, or at the very least to artificially inflate the popularity of one particular view. Do you think the ‘publish, then filter’ method could remain meritocratic in academic research, particularly around sensitive topics or even those that are highly controversial just within academia?
On a related point, my other worry about moving away from peer reviewed journals might be too much openness. How do we/ordinary people/journalists/politicians distinguish between proper, quality academic research (on the one hand), and the kind of low quality cherry-picking ‘research’ produced (for example) by those who don’t accept the reality of climate change, or those pushing dubious ‘alternative’ medicines? Especially if such groups are capable of manufacturing the appearance of consensus.
I was once a research scientist, then I worked in publishing, and now I work in medical communications in the pharmaceutical industry. All I can say is, I hope this model never takes off. I certainly wouldn’t hold up blogs as some kind of model of where we want scientific communication to go — the vast majority of blogs are terrible, and whenever one original insight appears someplace, it just gets parroted around the blogosphere until there are dozens of blogs all saying the same thing with slightly different wording.
People act as though journals just soak up a lot of money without providing any value. But they do at least a couple of things that are valuable and worth paying money for. Filtering content is valuable — I know that I am going to get a certain kind of paper from Neuroscience and Biobehavioral Reviews, which will differ from Pharmacology, Biology and Behavior; or Behavioral Neuroscience; or Experimental Neurology. I’m not really interested in sifting through a lot of unfiltered stuff. Second, peer review is not just a random assemblage of people, it is (or should be) a panel of reviewers who identify the flaws and errors and weed out the junk. Journals also edit — which, as a former copyeditor, I can tell you is a very valuable service. Finally, if nothing else, journals archive their publications for years or decades. What happens when “Joe’s BioBlog” goes belly up in 5 years?
Sure, make publically funded research available to the public. But the blog model is horrific, IMO.
[Reply to Pierce –]
Filtering *is* valuable, yes. But what I am saying is, there are other ways of doing it – via your trusted (and identifiable) peers, rather than via anonymous peer review.
I think this post on this very blog by Patrick Dunleavy offers a good account of why blogs are a good thing. And if blogs are so “horrific” I don’t know why you’re reading, or corresponding, on this one (!).
Regarding the need for copyeditors … well I seem to have been able to somehow produce this blog post without the help of a paid punctuation expert, and it’s no more than what we expect of, say, our students. I look at a number of academic blogs and they are typically well written and grammatically fine. I don’t think that’s a big worry!
So, sorry, but we clearly disagree.
At ScienceOpen, we’re walking the “publish then filter” walk because that’s what we do here!
Just by way of a bit of background, before joining ScienceOpen I was at PLOS for nearly 9 years, helped to kick-start Open Access and launched and built PLOS ONE with a team of committed individuals. Prior to that I was at Nature, again for quite some time. It’s probably true to say that I’ve experienced many permutations of scholarly communication, and the one I like the best is the one we offer at ScienceOpen.
In a nutshell, here’s what we do:
1. We publish with DOI within about a week after an initial editorial check. http://ow.ly/Fbbps
2. During the production cycle, we provide proofs, basic copy-editing and language help.
3. We facilitate Non-Anonymous Post-Publication Peer Review (PPPR) from experts with at least five publications per their ORCID (our registration process and theirs are integrated) to maintain the level of scientific discourse on the platform. http://ow.ly/Fbbxf
4. All reviews receive a DOI so they can be found and cited. http://ow.ly/FbbCE
5. Authors who wish to respond to reviewer feedback can use versioning. http://ow.ly/FbbK5
We also aggregate OA content (from PMC and ArXiv, nearly 1.4 million articles), facilitate #PPPR on all of them and have tools and Community Editor roles to collect them together into customized collections from multiple publishers. http://ow.ly/FbczL
We think this approach might just be the next wave of OA and there are some who agree with us. These include the experts on our Editorial and Advisory Boards such as Peter Suber, Stephen Curry, Anthony Atala, Bjorn Brembs etc. Also, we have some nice examples of PPPR in action on our site, and it is these that convince me that what we are doing is the way forwards. http://ow.ly/FbeRo
One final point in closing, I am a huge fan of PLOS ONE (for obvious reasons) but it’s worth noting that they offer Pre-Publication Peer Review and it’s very rigorous. They do have commenting after publication, that is true and goes to your argument.
I fully agree with your model, but the “typesetting” and copy editing that is done by publishers is worth something. More importantly, the publication should be available in an agnostic, structured format (say XML) so that it can be reused, text-mined, perhaps searched semantically, and reformatted for accessibility.
So what we need is an authoring system that creates XML at the point of publication. Then you won’t need publishers, or typesetters (like my company!). I think it can be done.