Responding to the decision of the Finnish publication forum, Julkaisufoorumi, to downgrade so-called ‘grey area’ journals, Frederick Fenter argues that, by focusing on publishers’ operating models, the decision unfairly ignores the experience and judgement of researchers.
On Friday 6 December a group of scientists working in universities across Finland signed an open letter in support of the academic publisher Frontiers. What prompted this unusual action?
Julkaisufoorumi (JUFO) is a national classification forum created by the Finnish scientific community to support the assessment of scholarly journal quality. In this, the best intentions are clearly at play. But, without transparent criteria, rigorously and consistently applied, the best intentions can lead to unintended consequences. Although JUFO’s criteria are stated on its site, they include a clause allowing the forum’s publication panels to override their own criteria and exclude journals based on the publisher’s operating model. This clause seems to have been invoked to downgrade most of the journals from Gold Open Access publishers, regardless of independent and objective evaluations of the quality of the individual journals, even when they meet JUFO’s level 1 criteria.
Stop and think about this situation. The clause condemns ‘operating model’, not journal quality, and it dismisses and ignores researchers’ experiences and choices. As they say in their open letter to Finland’s Minister of Science and Culture, Sari Multala, and the JUFO steering group, “Many of us have collaborated with Frontiers as authors or editors, and our experiences do not align with JUFO’s conclusions.”
Frontiers has tried to engage with JUFO, and of course the door remains open. However, when we requested details of the specific issues the Finnish research community had raised about Frontiers journals, we were presented not with a list from that community, but with references to mainly opinion articles already in the public domain. For a publisher, it is hard to respond to what amounts to personal opinion and anecdote about Gold Open Access publishers. We agree with JUFO that assessing journal quality is difficult. This is why dedicated indexation and evaluation services (such as Clarivate’s Web of Science), while not without faults, offer the advantage of professional expertise applied via rigorous, independent, and transparent criteria and processes. Everyone knows where they stand, what they need to do, and how they can appeal. Conversely, the recent JUFO decision seems to override objectivity altogether, to the point of rejecting JUFO’s own evaluation criteria. This leaves a critical question hanging for researchers.
‘Where do I publish?’ is a question every researcher asks. Sometimes the answer is obvious, based on their precise sense of the publishing landscape and their own research’s place within it. Sometimes, in an ever more complicated publishing environment, the answer may not be clear. The growth in good publishing venues and options, as well as in ‘scam’ operators, makes journal evaluation tools important for researchers, such as the Think. Check. Submit. checklist supported by the Committee on Publication Ethics (COPE). It is completely understandable that institutions and national bodies may want to produce their own lists to help researchers. However, those lists must be predicated on informed researcher choice and on open criteria and objective processes.
As with the long-defunct Beall’s list, institutional lists with no transparent criteria and no appeals process result in bias and a lack of balance. Indeed, such lists end up falling foul of COPE’s advice that ‘authors and institutions should treat lists of predatory (or fake) journals with the same degree of scrutiny as they do with the journals themselves. Lists that are not transparent about criteria used should not be relied on.’ Without such transparency, a door is left open to manipulation and a complacent adherence to the status quo rather than free, equitable choice for all researchers. We need to apply scientific rigour, transparency, and accountability to scholarly journal assessment. The way to do this is for institutions to ensure they are canvassing a broad segment of their community rather than creating an echo chamber of those who shout loudest. Clear criteria, open discussion, and transparent processes, including appeals, all help.
It is in all researchers’ interests to have good, reliable lists and indexes to turn to, and it is in publishers’ interests to be on those lists and in those indexes. These interests should work in tandem, not in opposition. Publishers should know which standards are demanded and what they need to do to meet them, regardless of their business model. Researchers need to have confidence in their publishing choices, based on access to correct and consistent information from reliable sources. Once authors make an informed choice, everyone should respect it. At this point, the evaluation process pivots to the quality of the research itself, as it should. Any researcher who submits to one of our journals, as with any proper publisher, enters into a clear deal: we will vet the manuscript to ensure research integrity, and expert peer reviewers will assess its worth as a contribution to the scientific record before it is published. This is where the hard work of evaluation is already, and rightfully, done: at the level of the individual article, whichever journal it is published in.
It is not fair on publishers that researchers are prevented from publishing with a journal they have chosen because a list has been produced without due diligence and without transparency. As ad hoc institutional lists proliferate, we need to ask some urgent questions about academic freedom and fairness for publishing researchers. Is it right, equitable, or fair, for example, that a researcher who has published high-quality articles in a journal respected by their community comes to us distressed and confounded after being told these articles will not be considered in their application for tenure?
Judging the stewardship of the scientific record by anecdote and rumour causes deep harm and is a danger we must avoid.
The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.
Image Credit: Bozhin Karaivanov on Unsplash.
This is not a credible defence coming from Frontiers. This publisher has been cannier than some in the use of good PR (including this intervention), but the truth is that it has all the hallmarks of mass-volume, for-profit OA publishing, drawing in huge APCs and operating with questionable quality assurance practices. Here is a recent summary of the situation, comparing Frontiers and MDPI:
https://scholarlykitchen.sspnet.org/2023/09/18/guest-post-reputation-and-publication-volume-at-mdpi-and-frontiers-the-1b-question/