Blacklists are technically infeasible, practically unreliable and unethical. Period.

February 21st, 2017

The removal of Beall’s list of predatory publishers last month caused consternation and led to calls in some quarters for a new equivalent to be put in its place. Cameron Neylon explains why he has never been a supporter of the list and outlines why he believes the concept of the blacklist itself is fundamentally flawed. Not only are blacklists incomplete by definition, they are highly susceptible to legal challenge and vulnerable to personal bias. Scholars should be able to decide for themselves what is a good venue from which to communicate their work.

The last weekend of January was a bad one for poorly designed blacklists. But prior to this another blacklist had also been the subject of significant discussion. Beall’s list of so-called ‘predatory’ journals and publishers vanished from the web in mid-January. There is still no explanation for why, but the most obvious candidate is that legal action, threatened or real, caused its removal. Since it disappeared, many listservs and groups have been asking what should be done. My answer is pretty simple. Absolutely nothing.

It won’t surprise anyone that I’ve never been a supporter of the list. Early on I held the common view that Beall was providing a useful service, albeit one that overstated the problem. But as things progressed my concerns grew. The criticisms have been rehearsed many times so I won’t delve into the detail. Suffice to say Beall had a strongly anti-OA stance, was clearly partisan on specific issues, and was antagonistic – often without being constructively critical – towards publishers experimenting with new models of review. But most importantly, his work didn’t meet minimum scholarly standards of consistency and validation. Walt Crawford is the go-to source on this, having done the painstaking work of actually documenting the state of many of the ‘publishers’ on the list, but it seems only a small percentage of the blacklisted publishers were ever properly documented by Beall.

Image credit: What’s on the blacklist? by opensource.com. This work is licensed under a CC BY-SA 2.0 license.

Does that mean that it’s a good thing the lists are gone? That really depends on your view of the scale of the problem. The usual test case of the limitations of free speech is whether it is OK to shout “FIRE!” in a crowded theatre when there is none. Depending on your perspective you might feel that our particular theatre has anything from a candle onstage to a raging inferno under the stalls. From my perspective there is a serious problem, although the problem is not what most people worry about, and is certainly not limited to open access publishers. And the list didn’t help.

But the real reason the list doesn’t help isn’t its motivations or its quality. It’s a fundamental structural problem with blacklists. They don’t work, they can’t work, and they never will work. Even when put together by ‘the good guys’ they are politically motivated. They have to be, because they’re also technically impossible to make work.

Blacklists are technically infeasible

Blacklists are never complete. Listing is an action that can only occur after any given agent has acted in a way that merits listing. Whether that listing involves being called before the House Committee on Un-American Activities or being added to an online list, it can only happen after the fact. Even if it seems to happen before the fact, that just means the stated criteria are a lie. The listing still happens after the real criteria were met, whether that is being a Jewish screenwriter or starting up a well-intentioned but inexpert journal in India.

Whitelists, by contrast, are by definition always complete. They are a list of all those agents that have been certified as meeting a certain level of quality assurance. There may be many agents that could meet the requirements, but if they are not on the list they have not yet been certified, because that is the definition of the certification. That may seem circular but the logic is important. Whitelists are complete by definition. Blacklists are incomplete by definition. And that’s before we get to the issue of criteria to be met versus criteria to be failed.
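
The same asymmetry is familiar from software access control, where default-deny allowlists are considered more robust than default-allow blocklists. A minimal sketch of the logic, using hypothetical journal names and assuming nothing beyond the definitions above:

    # Illustrative sketch only: the journal names are hypothetical, not real data.
    CERTIFIED = {"Acta Exemplaria", "Journal of Examples"}  # whitelist: everything certified so far
    FLAGGED = {"Predatory Review Letters"}                  # blacklist: everything caught so far

    def whitelist_check(journal):
        # Complete by definition: absence simply means "not yet certified".
        return journal in CERTIFIED

    def blacklist_check(journal):
        # Incomplete by definition: listing can only happen after the fact,
        # so absence from the list tells you nothing at all.
        return "flagged" if journal in FLAGGED else "unknown"

A whitelist answers its question definitively; a blacklist can only ever answer “flagged” or “unknown”.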

Blacklists are practically unreliable

A lot of people have been saying “we need a replacement for the list because we were relying on it”. This, to be blunt, was stupid. Blacklists are discriminatory in a way that makes them highly susceptible to legal challenge. All that is required is to show either that the criteria for inclusion are discriminatory (or libellous) or that they are being applied in a discriminatory fashion. The likely redress is the destruction of the whole list. With a whitelist, by contrast, the redress for discrimination is inclusion: any litigant will want to ensure the list is maintained so they get listed. Blacklists are at high risk of legal takedown and should never be relied upon as part of a broader system. Use a whitelist or whitelists (and always provide a mechanism for showing that something that isn’t yet certified should still be included in the broader system).

If your research evaluation system relies on a blacklist it is fragile, as well as likely being discriminatory.

Blacklists are inherently unethical

Blacklists are designed to create and enforce collective guilt. Because they use negative criteria they will necessarily include agents that should never have been caught up. Blacklisting entire countries meant that legal permanent residents, indeed, it seems, even airline staff, were unable to board flights to the US last month. Blacklisting publishers seeking to experiment with new forms of review or new business models both stifles innovation and discriminates against new entrants. Calling out bad practice is different. Pointing to one organisation and saying its business practices are dodgy is perfectly legitimate if done transparently, ethically and with due attention to evidence. Collectively blaming a whole list is not.

Quality assurance is hard work, and doing it transparently, consistently and ethically is even harder. Consigning an organisation to the darkness based on a misstep or, worse, a failure to align with a personal bias, is actually quite easy, hard to audit effectively, and usually oversimplifies a complex situation. To give a concrete example, DOAJ maintains a list of publishers that claim to have DOAJ certification but do not. Here the ethics are clear: DOAJ is a whitelist that is publicly available in a transparent form (whether or not you agree with the criteria). Publishers that claim membership they don’t have can be legitimately, and individually, called out. Such behaviour is cause for serious concern and appropriate to note. But DOAJ does not then propose that these journals be cast into outer darkness; it merely notes the infraction.

So what should we do? Absolutely nothing!

We already have plenty of perfectly good whitelists: PubMed, Web of Science, Scopus, DOAJ. If you need to check whether a journal is running traditional peer review at an adequate level, use some combination of these according to your needs. Also ensure there is a mechanism for making a case for exceptions, but use whitelists, not blacklists, by default.
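
For DOAJ in particular, such a check can even be automated. The sketch below assumes DOAJ’s public search API as documented at doaj.org/api – the exact endpoint and response fields should be verified against the current documentation – and the ISSN used is purely illustrative:

    # Hedged sketch: endpoint and response shape assumed from https://doaj.org/api
    import requests

    def in_doaj(issn):
        """Return True if a journal with this ISSN appears on the DOAJ whitelist."""
        url = "https://doaj.org/api/search/journals/issn:" + issn
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        # The search response is assumed to report a "total" hit count.
        return response.json().get("total", 0) > 0

    print(in_doaj("2041-1723"))  # illustrative ISSN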

Authors should check with services like ThinkCheckSubmit or Quality Open Access Market if they want data to help them decide whether a journal or publisher is legitimate. But above all scholars should be capable of making that decision for themselves. If we aren’t able to make good decisions on the venue to communicate our work then we do not deserve the label ‘scholar’.

Finally, if you want a source of data on the scale of the problem of dodgy business practices in scholarly publishing then delve into Walt Crawford’s meticulous, quantitative, comprehensive and above all documented work on the subject. It is by far the best data on both the number of publishers with questionable practices and the number of articles being published. If you’re serious about looking at the problem then start there.

But we don’t need another f###ing list.

This post originally appeared on Cameron Neylon’s personal blog, Science in the Open, and is reposted under a CC0 1.0 license.

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the author

Cameron Neylon is Professor of Research Communications at the Centre for Culture and Technology, Curtin University, and a well-known agitator for opening up the process of research. He speaks regularly on issues of Open Science, including Open Access publication, Open Data, and open source, as well as the wider technical and social issues of applying the opportunities the internet brings to the practice of science. He was named a SPARC Innovator in July 2010 for work on the Panton Principles, was a co-author of the Altmetrics manifesto, and is a proud recipient of the Blue Obelisk for contributions to open data. He writes regularly at his blog, Science in the Open.
