Research integrity and trust in science have made global news this year. Reflecting on the scientific norm of organised scepticism, Panagiotis Kavouras outlines how building trust in science requires commitments to social and technical means of ensuring transparency and reproducibility across scientific processes.
“Οὐδέν κρυπτόν ὑπό τόν ἥλιον”: this famous ancient Greek saying, which translates into English as “Nothing rests hidden under the sun”, implies that nothing can remain hidden forever. This process of uncovering is a key function of science: organised scepticism. At its core, it means that no scientific assertion should go untested or unchallenged by a scientist’s peers and colleagues.
However, there is also a negative connotation when we speak about uncovering in science. This relates to the exposure of research misconduct or questionable research practices. Last year saw a record high of more than 10,000 retracted scientific papers. These papers had successfully undergone peer review, the process meant to put the principle of organised scepticism into practice, and were used by the scientific community. Papers are retracted when they are found to be unreliable, whether due to an innocent mistake by the researcher, sloppy science, the application of questionable research practices, or more sinister forms of research misconduct.
Should we conclude from this that science, in general, is unreliable? Definitely not. Does it mean that societies should blindly trust science? Better not to. These seemingly contradictory responses provide some food for thought on the conditions necessary for science to be trusted by scientists and societies.
How should we think about trust in science?
At this point we can ask whether trust is even the correct word. Trust is defined as the firm belief in the reliability or truth of someone or something. And perhaps herein lies a fundamental problem, since belief, as defined by the Oxford Dictionary, is the acceptance that something exists or is true, even without proof. This lack of proof runs counter to the foundational principle of organised scepticism, upon which scientific progress should be based.
Having clarified this not-so-small terminological issue, we come to the core question: what conditions should be fulfilled for science to effectively apply the principle of organised scepticism, counteract research misconduct and questionable research practices, and remain trustworthy? The answer is that a scientific result must be communicated together with the means of its verification (or of its negation). In a sense, scientists and societies should not focus only on the conclusions of scientific research, but on the whole process by which those conclusions were reached, that is, on all stages of the research cycle. This allows scientific findings to be tested or reproduced by other scientists: reproducibility is organised scepticism in action.
If a scientific finding can be successfully reproduced, then it is on the right track to be validated as correct. Yet, scientists speak of a reproducibility crisis, where a portion of scientific findings cannot be successfully reproduced. How then can we achieve the necessary level of reproducibility across different scientific disciplines?
Contextualising transparency
Estimates of how much reproducibility can be achieved differ significantly across scientific fields, as different criteria are used to quantify it. The scientific community needs to work together to ensure that published scientific results are reproducible; in other words, to ensure that results are published with a level of transparency that allows someone else to repeat the same experiment. In general, researchers need to publish how the study was conducted (i.e. a detailed study protocol), how they analysed the retrieved data (a data analysis plan) and, of course, the retrieved data itself, openly shared. This is more easily said than done, since how and what exactly to report for reproducibility to be achieved, as well as what counts as successful reproduction, is very much field-dependent. Whatever the challenges, reproducibility is a key requirement for building trust across all fields.
How can we make transparency a reality?
Open Science is an approach to the scientific process that focuses on uncovering and publishing the processes underpinning scientific research. It is now possible to publish one’s data, software and code, or any other tools used to conduct an experiment, alongside the published results. In fact, funders and some publishers are increasingly mandating this.
Other Open Science initiatives tackle the issues of high retraction rates and low reproducibility more directly. One example is the registered report, in which the research questions, the study protocol, and the data analysis plan of a study are published before the actual work of the study takes place. If a scientific journal accepts the registered report, it is obliged to publish the findings whether they are groundbreaking or not. This strategy essentially measures the value of a study not by its outcome, but solely by the quality of its methodology. Another example is open peer review, where the process for assessing the quality of a study is made public and publishable.
Open Science practices, by exposing the whole research cycle to the eyes of experts and non-experts, could help reduce the risk of questionable research practices and research misconduct.
Creating a professional culture conducive to transparency
At the root of many questionable research practices and much research misconduct is a research culture that prioritises publication. Researchers are promoted based on how many publications they produce, the impact of the journals in which they publish, and the number of citations their papers receive. This focus on quantitative criteria for the assessment of researchers and universities exerts enormous pressure. The resulting demand for speed and volume of research output pushes researchers to prioritise quantity over quality. Tackling this situation requires a holistic approach: solutions need to permeate research policy, research funding, the conduct of research, research publication and communication, and of course higher education.
The growth of the field of research integrity is arguably a response to this ‘publish or perish’ culture. Research integrity practitioners all over the world are advocating for the professional acknowledgement and reward of responsible research practices, to ensure that research outcomes are trustworthy. They are also building communities to spread measures for more trustworthy science. One of the most prominent efforts is the series of World Conferences on Research Integrity (WCRI), which take place every two years. These conferences are important events in the research landscape, as they bring together scientists and research professionals from all disciplines and regions of the world to discuss these challenges and work to address them.
Reproducibility, Open Science, and research integrity all play important roles in facilitating the processes of ‘uncovering’ in science. Organised scepticism and the identification of research misconduct and questionable practices are essential elements in the struggle to safeguard the trustworthiness of science. In a sense, they are the persistent echo of the ancient Greek saying, paraphrased for the present context: “No scientific claim should rest hidden under the critical scrutiny of the scientific community.”
Note: This year sees the 8th WCRI taking place in Athens in June.