The replicability and methodology of a paper published in a high-impact journal have prompted further discussion regarding scientific discourse and responsibility. Dorothy Bishop argues the journal editors should have done more to ensure the veracity of the findings before publication. Furthermore, this case highlights the instrumental role of blogging in improving scientific discourse and peer review. ‘Post-publication peer review’ via the blogosphere can allow new research to be rapidly discussed and debated in a way that would be impossible via traditional journal publishing.
In a previous blogpost, I criticised a recent paper claiming that playing action video games improved reading in dyslexics. In a series of comments below the blogpost, two of the authors, Andrea Facoetti and Simone Gori, have responded to my criticisms. I thank them for taking the trouble to spell out their views and giving readers the opportunity to see another point of view. I am, however, not persuaded by their arguments, which make two main points: first, that their study was not methodologically weak, and so Current Biology was right to publish it; and second, that it is unfair, indeed unethical, to criticise a scientific paper in a blog rather than through the regular scientific channels.
Regarding the study methodology, as noted above, the principal problem with the study by Franceschini et al was that it was underpowered, with just 10 participants per group. The authors reply with an argumentum ad populum, i.e. many other studies have used equally small samples. This is undoubtedly true, but it doesn’t make it right. They dismiss the paper I cited by Christley (2010) on the grounds that it was published in a low-impact journal. But the serious drawbacks of underpowered studies have been known about for years, and written about in high- as well as low-impact journals (see Button et al.; Ioannidis, 2005; Ioannidis, 2008; Ioannidis, 2013).
The response by Facoetti and Gori illustrates the problem I had highlighted. In effect, they are saying that we should believe their result because it appeared in a high-impact journal, and now that it is published, the onus must be on other people to demonstrate that it is wrong. I can appreciate that it must be deeply irritating for them to have me expressing doubt about the replicability of their result, given that their paper passed peer review in a major journal and the results reach conventional levels of statistical significance. But in the field of clinical trials, the non-replicability of large initial effects from small trials has been demonstrated on numerous occasions, using empirical data – see in particular the work of Ioannidis, referenced above. The reasons for this ‘winner’s curse’ have been much discussed, but its reality is not in doubt. This is why I maintain that the paper would not have been published if it had been reviewed by scientists who had expertise in clinical trials methodology. They would have demanded more evidence than this.
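The ‘winner’s curse’ described above can be made concrete with a small simulation. The sketch below is illustrative only: the true effect size, the number of simulated studies, and the significance threshold are my assumptions, not figures from the Franceschini et al. study. It shows that when studies with 10 participants per group test a modest true effect, only a small fraction reach significance, and the ones that do systematically overestimate the effect.

```python
# Monte Carlo sketch of the 'winner's curse' in underpowered studies.
# All numbers (true effect, group size, number of runs) are illustrative
# assumptions for this example, not values from the criticised paper.
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.3   # assumed true standardized effect (Cohen's d)
N_PER_GROUP = 10    # group size matching the criticised study design
T_CRIT = 2.101      # two-tailed critical t for df = 18, alpha = .05

def one_study():
    """Simulate one two-group study; return (significant?, observed d)."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    pooled_sd = math.sqrt((statistics.stdev(control) ** 2
                           + statistics.stdev(treated) ** 2) / 2)
    d = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd
    t = d * math.sqrt(N_PER_GROUP / 2)  # equal-n two-sample t statistic
    return abs(t) > T_CRIT, d

results = [one_study() for _ in range(20000)]
sig_effects = [d for sig, d in results if sig]
power = len(sig_effects) / len(results)

print(f"true effect (d):                 {TRUE_EFFECT}")
print(f"power at n = 10 per group:       {power:.2f}")
print(f"mean observed d, all studies:    {statistics.mean(d for _, d in results):.2f}")
print(f"mean observed d, significant:    {statistics.mean(sig_effects):.2f}")
```

The significant-only average comes out far above the true effect: conditioning on significance selects the studies whose sampling error happened to inflate the result, which is exactly why large initial effects from small trials so often shrink on replication.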
The response by the authors highlights another issue: now that the paper has been published, the expectation is that anyone who has doubts, such as me, should be responsible for checking the veracity of the findings. As we say in Britain, I should put up or shut up. Indeed, I could try to get a research grant to do a further study. However, I would probably not be allowed by my local ethics committee to run a study on such a small sample; a properly powered trial might take a year or so, and would distract me from my other research. Given that I have reservations about the likelihood of a positive result, this is not an attractive option. My view is that journal editors should have recognised this as a pilot study and asked the authors to do a more extensive replication, rather than dashing into print on the basis of such slender evidence. In publishing this study, Current Biology has created a situation where other scientists must now spend time and resources to establish whether the results hold up.
To establish just how damaging this can be, consider the case of the FastForword intervention, developed on the basis of a small trial initially reported in Science in 1996. After the Science paper, the authors went directly into commercialization of the intervention, and reported only uncontrolled trials. It took until 2010 for there to be enough reasonably-sized independent randomized controlled trials to evaluate the intervention properly in a meta-analysis, at which point it was concluded that it had no beneficial effect. By this time, tens of thousands of children had been through the intervention, and hundreds of thousands of research dollars had been spent on studies evaluating FastForword.
I appreciate that those reporting exciting findings from small trials are motivated by the best of intentions – to tell the world about something that seems to help children. But the reality is that, if the initial trial is not adequately powered, it can be detrimental both to science and to the children it is designed to help, by giving such an imprecise and uncertain estimate of the effectiveness of treatment.
Finally, a comment on whether it is fair to comment on a research article in a blog, rather than going through the usual procedure of submitting an article to a journal and having it peer-reviewed prior to publication. The authors’ reactions to my blogpost are reminiscent of Felicia Wolfe-Simon’s response to blog-based criticisms of a paper she published in Science: “The items you are presenting do not represent the proper way to engage in a scientific discourse”. Unlike Wolfe-Simon, who simply refused to engage with bloggers, Facoetti and Gori show willingness to discuss matters further, and present their side of the story, but nevertheless it is clear they do not regard a blog as an appropriate place to debate scientific studies.
I could not disagree more. As was readily demonstrated in the Wolfe-Simon case, what has come to be known as ‘post-publication peer review’ via the blogosphere can allow for new research to be rapidly discussed and debated in a way that would be quite impossible via traditional journal publishing. In addition, it brings the debate to the attention of a much wider readership. Facoetti and Gori feel I have picked on them unfairly: in fact, I found out about their paper because I was asked for my opinion by practitioners who worked with dyslexic children. They felt the results from the Current Biology study sounded too good to be true, but they could not access the paper from behind its paywall, and in any case they felt unable to evaluate it properly. I don’t enjoy criticising colleagues, but I feel that it is entirely proper for me to put my opinion out in the public domain, so that this broader readership can hear a different perspective from those put out in the press releases. And the value of blogging is that it does allow for immediate reaction, both positive and negative. I don’t censor comments, provided they are polite and on-topic, so my readers have the opportunity to read the reaction of Facoetti and Gori.
I should emphasise that I do not have any personal axe to grind with the study’s authors, whom I do not know personally. I’d be happy to revise my opinion if convincing arguments are put forward, but I think it is important that this discussion takes place in the public domain, because the issues it raises go well beyond this specific study.
This article was originally published on BishopBlog. Please see the original post for further discussion and comment.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics.
Dorothy Bishop is Professor of Developmental Neuropsychology and a Wellcome Principal Research Fellow at the Department of Experimental Psychology in Oxford and Adjunct Professor at The University of Western Australia, Perth. The primary aim of her research is to increase understanding of why some children have specific language impairment (SLI). Dorothy blogs at BishopBlog and is on Twitter @deevybee.
I agree that blogs, or indeed any social media platform, should be considered an appropriate forum for discussing published, ongoing or potential research. Good article, thanks.
While I think it is good to have these spirited debates, the challenge is that we can’t expect investigators to take on every challenge put forth on a blog about their papers. I could see that becoming a hassle from the perspective of an author. I think the peer review process is partly to blame. As a field editor at a journal, my experience has been that it is very difficult to get good investigators to take time out of their busy lives to review papers, a task for which they receive no compensation. I sometimes have to ask three to four times as many people as I need to review a paper, because I expect that many will reply “too busy.” The more experienced the investigator, the less the reward for taking the time to review papers (and the greater the cost, since people seem to get spread thinner as they progress in their careers). Less experienced investigators are far more likely to say yes, but give a poorer quality review. It’s troubling that such an important process is run completely on a volunteer basis. Obviously it is a professional responsibility, but as an academic, there are myriad “professional responsibilities” that aren’t directly compensated, including service to the institution, sometimes service to professional organizations, making appearances at various meetings, sitting on grant review panels… the list is long. One can only do so much, particularly in times when funding is tight, which means more grants going in each year (at least here in the US).
Why stop there? If author self-publishing can provide rapid feedback on “properly” published science, then it can also provide dissemination of that science in the first place.
Scientific publishing has too long been about credit and promotion. It’s time it returned to what it really should be and what it originally was: communication.