Academics generally recognise that the scholarly publishing business model is flawed, the impact factor does not point to quality, and open access is a good idea. And yet, academics continue to submit their work to the same for-profit journals. Philip Moriarty looks at what is keeping academics from practicing what they preach. Despite many efforts to counter the perception, journal ‘branding’ remains exceptionally important.
This piece is part of a series on the Accelerated Academy.
I’m going to put this as bluntly as I can; it’s been niggling and nagging at me for quite a while and it’s about time I got it off my chest. When it comes to publishing research, I have to come clean: I’m a hypocrite. I spend quite some time railing about the deficiencies in the traditional publishing system, and all the while I’m bolstering that self-same system by my selection of the “appropriate” journals to target.
Despite bemoaning the statistical illiteracy of academia’s reliance on nonsensical metrics like impact factors, and despite regularly venting my spleen during talks at conferences about the too-slow evolution of academic publishing towards a more open and honest system, I nonetheless continue to contribute to the problem. (And I take little comfort in knowing that I’m not alone in this.)
Image credit: Pixabay CC 0 public domain
One of those spleen-venting conferences was a fascinating and important event held in Prague back in December, organized by Filip Vostal and Mark Carrigan: “Power, Acceleration, and Metrics in Academic Life”. My presentation, The Power, Perils and Pitfalls of Peer Review in Public – please excuse the Partridgian overkill on the alliteration – largely focused on the question of post-publication peer review (PPPR) via online channels such as PubPeer. I’ve written at length, however, on PPPR previously (here, here, and here) so I’m not going to rehearse and rehash those arguments. I instead want to explain just why I levelled the accusation of hypocrisy and why I am far from confident that we’ll see a meaningful revolution in academic publishing any time soon.
Let’s start with a few ‘axioms’/principles that, while perhaps not being entirely self-evident in each case, could at least be said to represent some sort of consensus among academics:
- The business model of the traditional academic publishing industry is deeply flawed. While some might argue that George Monbiot – or at least the sub-editor who provided the title for his article on the subject a few years back (“Academic publishers make Murdoch look like a socialist”) – perhaps overstated the problem just a little, it is clear that the profit margins and working practices for many publishers are beyond the pale. (A major contribution to those profit margins is, of course, the indirect and substantial public subsidy, via editing and reviewing, too often provided gratis by the academic community).
- A journal’s impact factor (JIF) is clearly not a good indicator of the quality of a paper published in that journal. The JIF has been skewered many, many times, with some of the more memorable and important critiques coming from Stephen Curry, Dorothy Bishop, David Colquhoun, Jenny Rohn, and, most recently, this illuminating post from Stuart Cantrill. Yet its very strong influence tenaciously persists and pervades academia. I regularly receive CVs from potential postdocs where they ‘helpfully’ highlight the JIF for each of the papers in their list of publications. Indeed, some go so far as to rank their publications on the basis of the JIF. (For readers unfamiliar with how the JIF is actually computed, a short sketch follows this list.)
- Given that the majority of research is publicly funded, it is important to ensure that open access publication becomes the norm. This one is arguably rather more contentious and there are clear differences in the appreciation of open access (OA) publishing between disciplines, with the arts and humanities being rather less welcoming of OA than the sciences. Nonetheless, the key importance of OA has laudably been recognized by Research Councils UK (RCUK) and all researchers funded by any of the seven UK research councils are mandated to make their papers available via either a green or gold OA route (with the gold OA route, seen by many as a sop to the publishing industry, often being prohibitively expensive).
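To make the second point above concrete, here is a rough sketch of the standard two-year JIF calculation (this is the conventional definition, not anything specific to the journals discussed in this post):

JIF(2016) = (citations received in 2016 by items the journal published in 2014 and 2015) ÷ (number of citable items the journal published in 2014 and 2015)

Because citation counts within any given journal are heavily skewed, with a small fraction of papers attracting most of the citations, this mean says very little about the quality of any individual paper; that is precisely the statistical point made in the critiques linked above.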
With these three “axioms” in place, it now seems rather straightforward to make a decision as to the journal(s) our research group should choose as the appropriate forum for our work. We should put aside any consideration of impact factor and aim to select those journals which eschew the traditional for-(large)-profit publishing model and provide cost-effective open access publication, right?
Indeed, we’re particularly fortunate because there’s an exemplar of open access publishing in our research area: The Beilstein Journal of Nanotechnology. Not only are papers in the Beilstein J. Nanotech free to the reader (and easy to locate and download online), but publishing there is free: no exorbitant gold OA costs nor, indeed, any type of charge to the author(s) for publication. (The Beilstein Foundation has very deep pockets and laudably shoulders all of the costs).
But take a look at our list of publications — although we indeed publish in the Beilstein J. Nanotech., the number of our papers appearing there can be counted on the fingers of (less than) one hand. So, while I espouse the three principles listed above, I hypocritically don’t practice what I preach. What’s my excuse?
In academia, journal brand is everything. I have sat in many committees, read many CVs, and participated in many discussions where candidates for a postdoctoral position, a fellowship, or other roles at various rungs of the academic career ladder have been compared. And very often, the committee members will say something along the lines of “Well, Candidate X has got much better publications than Candidate Y”…without ever having read the papers of either candidate. The judgment of quality is lazily “outsourced” to the brand-name of the journal. If it’s in a Nature journal, it’s obviously of higher quality than something published in one of those, ahem, “lesser” journals.
If, as principal investigator, I were to advise the PhD students and postdocs in the group here at Nottingham that, in line with the three principles above, they should publish all of their work in the Beilstein J. Nanotech., it would be career suicide for them. To hammer this point home, here’s the advice from one referee of a paper we recently submitted:
“I recommend re-submission of the manuscript to the Beilstein Journal of Nanotechnology, where works of similar quality can be found. The work is definitively well below the standards of [Journal Name].”
There is very clearly a well-established hierarchy here. Journal ‘branding’, and, worse, journal impact factor, remain exceptionally important in (falsely) establishing the perceived quality of a piece of research, despite many efforts to counter this perception, including, most notably, DORA. My hypocritical approach to publishing research stems directly from this perception. I know that if I want the researchers in my group to stand a chance of competing with their peers, we have to target “those” journals. The same is true for all the other PIs out there. While we all complain bitterly about the impact factor monkey on our back, we’re locked into the addiction to journal brand.
And it’s very difficult to see how to break the cycle…
The post is part of a series on the Accelerated Academy and is based on the author’s contribution presented at Power, Acceleration and Metrics in Academic Life (2 – 4 December 2015, Prague) which was supported by Strategy AV21 – The Czech Academy of Sciences. Videocasts of the conference can be found on the Sociological Review.
Note: This article gives the views of the author, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
Philip Moriarty is Professor of Physics in the School of Physics and Astronomy, University of Nottingham. His research interests span a number of topical themes in nanometre scale science with a particular current focus on single atom/molecule manipulation using scanning probes. His ORCID profile includes a full list of publications and grant awards. He blogs as regularly as he can (which is sometimes not particularly regularly) at Symptoms Of The Universe.
What is needed is to break the cycle:
1) systematically challenge those peers who use journal name as a proxy for scientific quality in everyday life (colleague: “Wow, that’s a Science paper”; answer: “you mean, like Arsenic DNA?”); challenge them aggressively if this happens in the context of recruitment.
2) stop perpetuating the myth that the only way to get recruited is by publishing in glam journals
3) try to publish your best papers in non-glam journals… but leave the final choice of journal to 1st author
There are at least two academic publishers who are willing to publish without too much fuss, and I mean this for full-length books as well as papers of goodly size. I recently sent my book to one of them and it was published. The only trouble is that, due to the small number of copies likely to be demanded, the asking price is very high. Today, with electronic storage and printing methods, it’s all done on a print-on-demand basis: individual copies are produced this way, and it works!
A nice piece! Very honest and reflective.
I can’t speak to physics-related publishing to your depth.
However, the brand publishing model is critically important in life sciences.
This is largely driven by the need for patenting in the western medical-industrial complex.
Though the USA Myriad case ended this for diagnostics, pharma is still relying upon patenting (and the resulting high drug prices). Watch the unfolding CRISPR patent battle for further insights.
A brand publication goes a long way to validating an endeavor that seeks a significant monetary outcome.
The patenting of science is a driver of this culture. Until the patenting of the outcomes of collaborative science ends, journal publishing will continue to serve as the validator.
The pre-print and open science models might finally be taking off in the life sciences.
Good post – In management, we have the CABS (Chartered Association of Business Schools) ‘list’ which are the ‘quality’ journals.
Many job specifications openly state that publication in journals on this list is used to short-list candidates. What is a rational employee who has a mortgage to pay going to do? It’s not rocket science.
Research is publicly funded, but that does not mean quality output is guaranteed. I’m not sure what you mean by “for-profit” journals: many of the “open access” journals are the ones making a profit by charging the author. At present researchers don’t pay to get published, but people willing to read the latest findings in good journals are expected to pay. Nothing wrong with this in my view.
Just a reaction to the statement that “people willing to read the latest findings in good journals are expected to pay. Nothing wrong with this in my view”:
This is a very west-centred perspective. Many scholars from economically weaker countries publish with the prestigious publishing houses (Roger Chartier, the legendary book history scholar, calls them “firms”) like Elsevier, Taylor and Francis Group or Springer, but they cannot afford to pay for the articles in their paywalled databases. They can thus contribute to the system for free but they cannot take from it. It is a deeply unfair, unethical, discriminatory system. This needs to be continuously emphasized and exposed, otherwise it will never change.
The high cost is my main reason for not using open access journals. Yes, they do give discounts, but generally I am not eligible for them, and many funding agencies in my area expressly say they won’t cover publication costs. So until the cost of publication drops and funders agree to pay at least some of the costs, I will keep publishing where it does not cost me.
I would like to propose a strategy to continuously break the cycle and stop being hypocrites. Of course, at some point it will imply doing something different from yesterday!
First, there is a causal relationship between the use of the impact factor as a way to evaluate articles and the other problems you mention (and, in fact, all the ones you don’t mention!). Because we use the IF to evaluate science, we desperately need journals; the process of science becomes privatized and fragmented, and in return journals have full power to impose what they want: money, closed access, and their own scientific policy (which may be quite conservative, slowing the progress of science, with essentially random peer review). What we need to fix is the way science is evaluated. As you point out, any new evaluation must be compatible with current practices (the change must be continuous), since few people will accept risking their careers for science’s sake.
There is a solution in SJS (www.sjscience.org), a non-commercial repository that offers tools to the scientific community to build a novel community-based evaluation that no journal can reproduce. Since SJS is a repository, it can be used in parallel with current practices (like arXiv), so you can immediately start putting value into this new mechanism while still playing the sick publish-or-perish game.
In a few words, SJS proposes to evaluate the quality of articles along two axes. The first is validity (the objective part of quality), established through an explicit, signed consensus within the community. The second is importance (the subjective part of quality), established through a community-wide curation mechanism. You can read the details at http://www.sjscience.org/article?id=46 . In the end, lazy committee members will still have numbers they can use as easily as the IF, but with much better scientific significance, no longer dependent on private players such as journals, and with an internal logic that strongly incentivizes open science.
Using this allows us to continuously prepare an alternative to the IF, one that will eventually make science free (both as in free speech and as in free beer). Its relevance depends on the number of users. I invite you to have a look, discuss it (e.g. by openly reviewing the article mentioned above), and take action if you think there is indeed hope for change!
Impossible to register on website http://sjscience.org
How about this for breaking the cycle? Submit your paper to a high-IF journal, get it accepted there, then instead of publishing it there, withdraw it and post it to arXiv or another open-access venue, saying “This paper was accepted at the Journal of Blah Blah” and providing the reviewer reports. Eventually, people will get fed up with this “buffer”, but hey, if you were able to get past Journal of Blah Blah peer review, the paper should be good, right?
The post opens with this statement
“Academics generally recognise that the scholarly publishing business model is flawed, the impact factor does not point to quality, and open access is a good idea.”
If you believe these things then you may indeed be a hypocrite if you act otherwise. Another possibility is that you don’t really believe these things but are giving in to the pressure to nod along with them, since they constitute the posh view in many academic circles and you are worried about negative social or career consequences if you point out, for example, the limitations of open access or the fact that (gasp) the average paper in Nature is pretty good. The resolution to your hypocrisy may therefore not be to change your behavior, but to recognize that your actions reflect what you really believe, and to start being honest about that with your colleagues. When you do so, you will find that the opening statement of this blog, “Academics generally recognise”, is not really true; it is more that “Most academics know that they are better off acting as if they recognise”.
“Academic publishing is a game: if you do not like it, what are you doing on the playing field???” This is illustrative of the attitude of a majority of researchers who publish. Most academics do not do research or attempt to publish, and progress through the ‘managerial’ route. Without exception, all (Executive) Deans insist on publishing in 3* and 4* journals… Resistance is futile and counterproductive…
Journal impact factor provides a signal about quality of work, although not a particularly good one. Unfortunately, when that is the only signal available to time strapped decision makers, it is going to be used, and I don’t see much of a way around that.
The problem then is one of signal quality. Open access journals don’t signal high quality because they are not exclusive, and they are not exclusive because they do not signal high quality and therefore do not get their pick of the best papers.
To break the cycle, we need to build a journal brand from the ground up. My suggestion would be to create a journal which invites older academics to republish (where rights can be legally acquired) seminal papers, re-edited and reworked to reflect developments in the field since their initial publication. People would, hopefully, start citing these more accessible, still highly important papers, quickly building the journal’s stats and name recognition.
Then slowly start adding in a few very high quality recent publications. The journal can be fairly selective if it only publishes a small number of new papers at a time. Eventually, the feedback loop would kick in and we would have a top-level open access journal. This isn’t a perfect plan, but I think it has a decent chance of working if executed correctly.
Academics behave like slaves. Guess who will beat up a slave who tries to escape the most? Not the masters, but other slaves.
Interestingly, who preaches to their students that they should seek freedom and independence? The academics. The result? They are merely preaching with their handcuffs on. Their preaching is no more than another moan about their own plight, somehow misconceived as their own free spirit. Sadists, the saddest species on the planet.
These issues have been widely debated since this was published in 2016, and we have moved on from demeaning OA journals as ‘bad’ as content increasingly shifts in that direction. But the sentiment rings true. When I was a research group head in the UK, I was unable to convince members to follow my lead and consider the ethics [financial and otherwise] of the journals they published in. But I made the point repeatedly that authors must not be penalised for attempting to publish outside the high-ranked journals coining it for the big five commercial publishers. So when assessing applications and promotions we should read the material, not be lazy and look at journal titles. That means more work for committees, but it is entirely necessary. And I was able to become a professor myself, perhaps somewhat delayed, while keeping my commitment to small and ethical journals. But many feel unable to follow this path. A practical contribution is an OA journal listing for a few social science disciplines: https://simonbatterbury.wordpress.com/2015/10/25/list-of-decent-open-access-journals/