August 2nd, 2018

Beyond #FakeScience: how to overcome shallow certainty in scholarly communication

Estimated reading time: 5 minutes

Recent media reports in Germany have brought renewed focus on predatory publishing practices and seen a notably increased use of the term “fake science”. But to what extent is this a worsening problem? Lambert Heller argues that predatory publishing has never really become a big thing, and that it became a thing at all is largely attributable to publication in a scholarly journal coming to be seen as an instant “seal of approval” for a research article, together with more widespread problems in the peer review process. Transparency is the best remedy for the harm caused by predatory publishing practices: were openness of peer review to become the default, publishers would have a hard time faking it.

Science journalism in Germany has in recent weeks been awash with reports on “predatory publishers” and an integrity “crisis” for German science. Journalists from regional media outlets, WDR, NDR, and Süddeutsche Zeitung, in collaboration with some international partners (e.g. Le Monde), released new information (overview and links by ARD Tagesschau, in German) showing that some authors from esteemed research institutes in Germany had published articles in journals that apply next to no peer review to submissions.

This phenomenon has become well known under the “predatory publishers” moniker, coined roughly ten years ago by librarian Jeffrey Beall and his infamous list of such publishers. Some German journalists have also elected to use the relatively new term #FakeScience, clearly modelled on the “fake news” phenomenon. (See also Robert Gast’s criticism in Spektrum and Arndt Leininger’s in Tagesspiegel, both in German; both suggest “junk science” as a better term.) Our colleagues at the Helmholtz Open Science bureau have pointed out (FAQ, in German) that it is easy for researchers to avoid this kind of publisher. Tools like “Think, Check, Submit”, for example, help authors identify legitimate journals.
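
One concrete, scriptable version of such a check is a lookup against the Directory of Open Access Journals (DOAJ). The minimal sketch below is not from the original article: it queries DOAJ’s public search API for a journal’s ISSN, and the endpoint and response fields are assumptions based on DOAJ’s published API documentation, so verify them against the current docs before relying on this.

```python
# A minimal "Think, Check, Submit"-style lookup: is a journal's ISSN indexed
# in DOAJ? Endpoint and response fields are assumptions based on DOAJ's
# public search API and may change; an illustrative sketch, not a verdict.
import json
import urllib.parse
import urllib.request

def journal_in_doaj(issn: str) -> bool:
    """Return True if a journal with this ISSN is indexed in DOAJ."""
    query = urllib.parse.quote(f"issn:{issn}")  # e.g. "issn%3A1932-6203"
    url = f"https://doaj.org/api/search/journals/{query}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # DOAJ reports the number of matching journal records in "total".
    return data.get("total", 0) > 0

if __name__ == "__main__":
    # PLOS ONE's ISSN; a DOAJ hit is one positive signal among several checks.
    print(journal_in_doaj("1932-6203"))
```

Being listed in DOAJ is of course only one signal; the point is that such checks are cheap enough to automate.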

Since the value of mutual review of papers by peer researchers is well established, and since most authors will avoid publishing behaviours that could harm their reputations, it is no surprise that “predatory publishing” never really became a big thing when seen in relation to the total output of scholarly communication. Martin Paul Eve and Ernesto Priego, for instance, have discussed how authors, even those duped by predatory publishers, are rarely actually harmed by such activities; the fear of harm is largely a myth perpetuated by traditional scholarly publishers, while in reality predatory publishing exposes issues with the peer review system.

The recent media reports have not been accompanied by a release of the underlying original data, at least not yet (according to the journalists’ own FAQs; NDR, in German). This is sadly a missed opportunity, as the story broke, with all of its consequences, before the information could be independently verified. Thankfully, Markus Pössel has tried to replicate the data-driven part of the news story at Spektrum’s SciLog (in German); see also Raphael Wimmer’s complementary analysis of another dataset. Pössel’s article is a quick and insightful read, and his results clearly challenge the German media outlets’ #FakeScience narrative. In his own words:

“For the statement that ‘Germany apparently plays a key role in this shady business’ (according to the Süddeutsche in a report on the research project) I find no evidence in my sample. Germany lies within the European average, does not particularly stand out, and, as noted, contributes only 2.5% of the total number of articles evaluated here.”
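
To see the shape of the calculation behind a figure like that 2.5%, here is a toy illustration with entirely made-up data; it is not Pössel’s code or dataset, just a sketch of a per-country share count over article records.

```python
# Toy replication sketch: given article records tagged with an author country,
# compute each country's percentage of the sample. The records below are
# invented for illustration and are NOT the dataset behind the news story.
from collections import Counter

articles = ["DE"] * 25 + ["US"] * 300 + ["IN"] * 400 + ["other"] * 275

counts = Counter(articles)
total = sum(counts.values())
for country, n in counts.most_common():
    print(f"{country}: {n / total:.1%}")  # in this toy sample, DE: 2.5%
```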

What’s the big deal?

Although “predatory publishers” never became a big thing, how did they become a thing at all? First of all, it is easy to see that “being published” by something that even just looks like a scholarly journal seems, to many, to be a “seal of approval” in the present scholarly ecosystem. Journal brands are used as a checkmark to sort out what gets onto a researcher’s CV and what legitimises them as an individual researcher. However, for many widely known reasons, this belief is misguided, primarily because it defaults to the profoundly unscientific practice of assessing research based on the container it is communicated in, rather than on any intrinsic qualities of the research itself.

In recent years, research funding organisations around the world have essentially come to agree that research results should be published and valued to their fullest extent. This explicitly includes outputs beyond the comprehensive “story” usually delivered in journal articles, such as original research datasets or pieces of programming code. There is almost never an objective or selfless reason to delay publishing these artefacts until the journal article appears; they can just as well be published alongside it, before it, or instead of it. Indeed, quite the opposite is true: in many cases meaningful research is best witnessed by the public before the collection of empirical evidence even begins.

The evolution of publishing and peer review

In addition to this relatively young but well-accepted premise, one could go another step further. Why not apply these criteria to the evaluation of any results, even preliminary ones, by the wider research community? When peer review itself happens as early and as openly as possible, the chances are it helps other researchers as well as all those who want to learn from the results. That is why esteemed publishers such as F1000 and BioMed Central introduced “open peer review” (OPR) years ago, a process which has since become a well-studied practice. Maybe the most obvious outcome is that OPR proves that a review took place, when, by whom, and with what result. It is the way to definitively draw the line between predatory and non-predatory practices.

This perspective is not some trendy idea, but is based on broad evidence from research itself. Take, for instance, the digital repository arXiv. A repository for so-called preprints, it was started from within the particle physics community in the early 1990s and was one of the earliest functioning uses of web technology. It became so popular in particle physics and neighbouring fields that it inspired preprint repositories in other disciplines, such as bioRxiv. The concept behind them is always the same: before even beginning the sometimes cumbersome process of submitting a paper to a journal, the article is made available to the public as a preprint. Many preprint repositories allow, and some even encourage, OPR on preprints directly, without it being requested by a journal editor. The core principles here are that knowledge is disseminated freely and rapidly, and that anyone is able to participate in the engagement with and review of that article, should they wish.

Who is afraid of predatory publishers?

Let’s face it: the peer-reviewed journal article has become the basic commodity of an extremely successful business sector, the scholarly publishing industry. It is very well known by now that peer review practices vary greatly between publishers, countries, and even whole disciplines. (See also Wolfgang Nellen in Laborjournal Blog, in German.) Still, it is all too often taken for granted that one research result is “valid” because it somehow survived some kind of peer review within a typically opaque system, while another is not worth considering because it lacks that status. Predatory publishing exploits this belief in the meaning of the “peer-reviewed” checkmark by delivering just the good-looking, familiar package, the article itself, plus the publisher’s suggestion that it passed peer review, without any substantive review having taken place.

A shallow certainty is reproduced on a daily basis, mostly by senior researchers only considering “real journal articles” when assessing their peers for tenure, funding, fellowships, or the like. (See also Klaus Tochtermann’s interview in Deutschlandfunk, in German.) And the same shallow certainty is being exploited by predatory publishers. Sadly, only the latter is easily and regularly discovered, and typically avoided by authors and readers who have reached some basic level of experience.

Transparency is the best cure

To fight this shallowness in scholarly communication, the same holds as in fighting corruption: “Sunlight is said to be the best of disinfectants” (Louis Brandeis). If disciplinary repositories became an obvious means of exchange in more disciplines, we would probably see much less need for shallow outlets for research results. (At least we don’t hear much about “predatory publishers” in particle physics, thanks to arXiv; see also Pössel’s replication study mentioned above.) If openness of peer review became the default, publishers would have a hard time faking it. And, even more importantly, peer review would become a recognisable part of the conduct of research, potentially helping others make better use of it.

So, rather than labelling research articles as predatory or “fake science”, as though researchers had some agenda to deceive the wider public, there is one simple solution: encourage and support preprints and, most of all, publish your peer review reports. If you are a legitimate journal, you have nothing to hide, and potentially much to gain from publishing your reviews.


Some tools for researchers to enhance their OPR practices:

  • hypothes.is allows you to annotate and review almost anything you find on the web, word by word, alone or as a group (see the sketch after this list).
  • Publons helps you make your current and previous peer review reports publicly available.
  • The Winnower helps you find reviewers for whatever published piece of information you have.
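
As a small illustration of the first of these, here is a sketch of pulling public annotations for a given document, such as a preprint, via the Hypothesis search API; the endpoint and response fields follow the public API documentation as I understand it, so treat the details as assumptions and check the current docs.

```python
# Fetch public hypothes.is annotations anchored to a document, e.g. a
# preprint. Endpoint and field names are assumptions based on the public
# Hypothesis API ("GET /api/search"); verify against the current API docs.
import json
import urllib.parse
import urllib.request

def public_annotations(target_url: str, limit: int = 20) -> list:
    """Return public Hypothesis annotations anchored to target_url."""
    params = urllib.parse.urlencode({"uri": target_url, "limit": limit})
    url = f"https://api.hypothes.is/api/search?{params}"
    with urllib.request.urlopen(url) as response:
        return json.load(response).get("rows", [])

if __name__ == "__main__":
    # Example: list annotators and text excerpts for an arXiv abstract page.
    for row in public_annotations("https://arxiv.org/abs/1706.03762"):
        print(row.get("user"), "-", (row.get("text") or "")[:80])
```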

Many thanks to Jon Tennant for contributing to this article.

This blog post originally appeared on the Generation R blog and is published under a CC BY-SA 4.0 license.

Featured image credit: mohamed_hassan, via Pixabay (licensed under a CC0 1.0 license).

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the author

Lambert Heller is a librarian, speaker, and consultant working at TIB, the German National Library of Science and Technology. At TIB, he kicked off the Open Science Lab in 2013. He’s into scholarly online practices, decentralised web, and more. Lambert can be found on Twitter @Lambo.

Posted In: Academic publishing | Open Research | Peer review | Research evaluation
