
December 10th, 2014

Changing UK science culture – a publisher perspective on the good, the bad, and the ugly.


Rebecca Lawrence shares her response to the Nuffield Council on Bioethics’ report on the culture of scientific research. The report raises important issues that publishers across the industry are actively working to address, but further collaboration is needed amongst research funders, universities and publishers to tackle the many problems in quality assessment, the recognition of negative findings, and adequate peer review. Otherwise we are simply churning out findings to tick assessment boxes rather than generating real scientific progress.

The UK is one of the leading countries for scientific research worldwide, but it also faces many challenges and pressures that may be distorting scientific research output. The good, the bad and the ugly of UK scientific research culture, and the drivers behind these issues, are exposed in a recent report by the Nuffield Council on Bioethics.

I was asked to provide the formal publisher response at the launch of this report last Thursday at the Palace of Westminster, alongside Dame Nancy Rothwell (President and Vice-Chancellor, University of Manchester), who represented UK research institutions, and Steven Hill (Head of Research Policy, Higher Education Funding Council for England (HEFCE)), who represented funders. Below is a summary of some of the points I raised; I could have gone on for pages about all the different challenges, so I have focussed this post on those directly affecting publishing.

Many of the points raised by the report are not new, but it provides valuable real numbers and evidence for problems in scientific research that have been discussed and debated for years. Amongst other things, the report emphasised that the publication and recognition of a broader range of research outputs must be improved, a point I strongly support. There are three related areas that I feel are important to consider.

Publish all vs Impact Factor

It is widely recognised that there is a publication bias towards ‘positive’ results across most areas of research. We have come to rate positive findings above negative ones because the latter don’t make such good headlines, but in the scientific endeavour this makes little sense.

We also often forget about small discoveries: those that, if they are lucky, might make a poster at a conference but rarely get published or shared more broadly. It is a common story: a researcher presenting a poster at a conference is told by a passing colleague that they conducted the very same experiment months or years earlier, with the same result. What a waste of time and effort, not to mention the (often public) funding behind such needless duplication, simply because no-one knew the study had already been done. And yet there is often a concern that a high proportion of small publications on a CV looks bad.

Replications and refutations also rarely get published, often for fear of being accused of duplicate publishing. Some researchers have even told me they cannot publish such findings because their funder would cease their funding if it thought they were ‘wasting their time’ replicating published work. But in many fields, replication is essential before building on others’ findings. Many studies have shown how little published research can be replicated (e.g. Amgen’s study, John Ioannidis’ paper, Bayer’s study), and there are numerous examples where major research could not be replicated and subsequently had to be retracted, e.g. the stem cell acid bath (STAP) work and the published attempted replication by Kenneth Lee’s lab.

Wheels of change? Image credit: Unsplash, Pixabay (Public Domain)

Many publishers have tried to tackle this problem; publishing negative/null findings was one of PLOS ONE’s original missions. However, publishers can only really publish such papers, especially at volume, if they stop worrying about their Impact Factor (IF), as these papers typically attract few citations. Many publishers and journals support the idea that the IF should no longer be used as a surrogate measure of research paper quality (see the list of signatories to the San Francisco Declaration on Research Assessment, DORA). However, existing journals cannot easily escape it (researchers look up the IF even if the journal doesn’t promote it), and new journals face a challenging conundrum: if they don’t get one, submissions will suffer seriously, as authors in some countries (e.g. China, Mexico) cannot submit to journals without one.

For many research fields, producing negative/null results and small findings, and replicating the findings one’s own work builds upon, should be an expectation, and one that is demonstrated during research assessment; if such outputs are not included in a publication list, questions should be asked why not.

Lack of time

Another major problem is lack of time: time to write up that negative result, that small finding, that replication attempt. Publishers and journals are increasingly adopting data policies, including mandatory ones such as those of F1000Research and PLOS. But researchers’ data are typically in a format and structure that no-one else can understand – headings missing, acronyms unexplained, protocols poorly defined – and without this information, sharing data is fairly pointless. Again, it takes (sometimes considerable) time to sort out the data and catalogue the protocol details. Faced with a choice between writing up that small finding or sorting out the data behind a submitted article, versus working towards their next ‘big’ paper, it is no surprise which option researchers choose.

It seems we need to redefine ‘quality’ papers as articles that are detailed enough for others to replicate, and that allow proper analysis and reuse of the work. We need to find a way to incentivise the sharing of robust results that can be fully reanalysed, over ‘big’ papers that sound impactful but lack the depth and detail behind them. Otherwise we are simply churning out findings to tick assessment boxes rather than generating real scientific progress or maximising the value of the research money invested.

Additional benefits of open peer review

Open peer review, where both the review and the reviewer’s identity are visible to the reader alongside the article, has many known advantages, such as reducing bias and making reviews more constructive. In many regards, open peer review is not that new. Many areas of maths, computational science and physics have used it for years, with services such as arXiv becoming the default platform for the review and discussion of new research in some fields. In biomedicine, the BioMed Central medical series journals have used open peer review since their launch back in 2000, as have other major medical journals such as BMJ Open.

Open peer review also has other, less-discussed benefits that tackle several issues raised in the Nuffield report. The first is enabling researchers to gain credit for their refereeing activities. The report talks about recognising and rewarding ‘high quality’ peer review, but of course it is impossible to know whether a researcher’s peer review was high quality unless one can see it! Some assume that peer review at high-IF journals is high quality, but we cannot judge, as their reports are not accessible to the reader – only their correspondingly high retraction rates are. However, work is under way to enable researchers to get credit for the refereeing they do (article peer review, grant review, REF/institution review, conference review, etc.) by including it on their ORCID profile, and ultimately on any relevant profile, via a project we are running with ORCID and CASRAI – see the Peer Review Service project.

Another significant benefit of open peer review is in helping to train peer reviewers. The report rightly comments that peer reviewers should receive appropriate training and/or guidance: younger researchers often get little training in how best to peer review, and have few examples to learn from beyond perhaps those of their supervisor(s). Because we use open peer review, we have received much interest from universities in working with us to support new peer review courses, and we have put together a set of peer review tips to help new referees.

Summary

Of course, the UK is not alone in facing many of the issues raised; we recently published an article from the Future of Research Symposium, held back in October by a group of Boston postdocs who were concerned about US scientific culture and its impact on their future careers and the progression of scientific discovery. In summary, I think this Nuffield report raises many important issues that publishers across the industry are actively working to address. However, none of the key stakeholders can solve these issues alone; we can only achieve the necessary changes if we work together, in a combined and coordinated effort.

Note: This article gives the views of the author, and not the position of the Impact of Social Sciences blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.

About the Author

Rebecca Lawrence is Managing Director at F1000 Research Ltd. She previously worked for other publishers, including seven years at Elsevier on the Drug Discovery Today group of products, and holds a degree in Pharmacy and a PhD in Pharmacology.


