Registered Reports: a new publishing initiative aimed at countering publication bias.

September 24th, 2013

A publishing initiative launched earlier this year by the journal Cortex re-establishes the crucial importance of the scientific method. By asking scientists to register their proposed study, it ensures that papers are not judged on whether the results support or refute the hypothesis. George Lozano welcomes this initiative and hopes this publishing format will spread to other journals and other fields.

The scientific method refers to the way in which science is supposed to be conducted. It consists of a series of steps. There is usually a general question to be addressed. This question is then distilled into one or more testable hypotheses. The hypotheses generate specific predictions, which are then tested. The results might support or refute the predictions, and hence the hypothesis, or, more often, they suggest changes to the original hypothesis. The process is then repeated with the new or updated hypothesis. There are two important features of the scientific method. First, it is circular, with hypotheses eventually generating data, and new findings generating new hypotheses. Second, within each cycle it is unidirectional, not only conceptually but also chronologically. The hypothesis affects the predictions, but not the other way around. The predictions influence the way data are collected, but not the other way around. Data are used to draw conclusions about the hypothesis, but not to change it, at least not until the next study is conceived. The scientific method is widely accepted as the best way to accumulate knowledge. Unfortunately, what is best for science is not always what is good for scientists.

Image: latent heat of vaporization experiment. Credit: Wikimedia Commons
Scientists are just people, and as such they compete and seek advancement and success in life. Much of their success depends on how they are evaluated against their peers. The methods by which they are evaluated vary widely. However, before any of these methods can be applied, scientists have to publish their work. One problem at this point is “publication bias”: studies whose results support the hypothesis are more likely to be accepted for publication. All scientists face these pressures. So much so, in fact, that it has become customary for many people (dare I say most?) to fish for significant p-values in their data, present only significant results, fold post-hoc analyses into the original intent, or, when all else fails, completely change the study’s original intent and hypothesis.

All scientists eventually face reviewers who suggest changing the study’s main hypothesis. Our choices as ethical scientists are (1) to bend those ethics a little, and in doing so do a disservice to science but get our work published, or (2) to refuse the suggestion, accept rejection, submit elsewhere, and hope for a different reviewer. On the other hand, as reviewers, we all eventually see papers in which, upon examining the methods, it is evident that the intent could not have been as stated in the introduction. Upon seeing the results, however, it becomes clear that they tangentially support the hypothesis presented in the introduction, even if it was obviously not the original one. It is not known whether the authors came up with the alternative hypothesis of their own accord, or were persuaded into doing so by previous reviewers. As reviewers, we might recommend that the paper be rejected, or we might rationalize that the data should not be wasted, and that presenting them in the context of the replacement hypothesis actually paints a fairly interesting picture. Of course, changing the hypothesis post-hoc violates the scientific method.

The journal Cortex (ISSN: 0010-9452) has come up with a new publishing initiative that forces scientists to adhere to the scientific method. They call it “Registered Reports”. Essentially, it requires scientists to submit papers before any data are collected, with only the introduction and methods sections completed. Preliminary or pilot data are optional. These preliminary papers are in effect research proposals, and they are reviewed like any other paper. If the reviewers think the rationale and hypothesis are sound and the methods are suitable, they offer a provisional acceptance. Many factors affect that decision, just as in any normal review; the only difference is that reviewers cannot judge the paper on whether the results support or refute the hypothesis, because there are no results yet. If adequate, the paper receives “in-principle acceptance” (IPA). The authors receive the reviewers’ feedback, then carry out the work, collect the data, analyse them, write the discussion, and submit the completed paper. The paper is then reviewed again, although not necessarily by the same reviewers. At this point, the reviewers check for inconsistencies, making sure that additional hypotheses or predictions have not been introduced, that the original methods were followed, that post-hoc analyses are clearly labelled as such, and, of course, that the discussion is relevant and informative. The reviewers cannot base their decision on whether the data supported the hypothesis. If everything is in order, the manuscript is formally accepted.

This is the way we were all taught to do science. In fact, during our graduate careers we were often required to do science this way. We had to submit a proposal outlining our rationale and methods to our thesis advisory committee. If these were adequate, we would be encouraged to continue; otherwise, we would be asked to go back to the drawing board. The committee did not necessarily check whether the final paper adhered to the proposal, but it could, and at least there was a record of intent. The premise was there, even if it was not always strictly enforced. However, once we become fully fledged, independent researchers, we often begin to ignore the scientific method.

The format of Cortex’s “Registered Reports” puts the scientific method back where it belongs, as the only way in which science ought to be conducted. However, even within the same journal, papers can still be submitted the regular way. So, unless the scientific community places greater value or prestige on “Registered Report” papers, it is unclear why authors would choose to go through the additional trouble of this format. A paper or a citation will still count the same, whether it comes from a regular paper or a Registered Report. Finally, although Cortex might be one of the leading journals in its field, it is still one journal among thousands.

This is the way science is supposed to be done and the only way ethical scientists should do science. Presumably, most scientists agree, and they stray only because the pressures of publishing sometimes conflict with strict adherence to the scientific method. The “Registered Reports” format, or at least the principle behind it, should spread to other journals. Authors benefit from the format in several ways. First, they get feedback before collecting any data, rather like having a thesis advisory committee. Second, once the paper is accepted in principle, they might no longer feel pressure to produce data that agree with the hypothesis. Third, they might include the paper on their CVs as “accepted in principle”, just in time for that tenure review. However, this new format will not remove the pressures of publishing. Unless there is a significant premium associated with publishing Registered Reports, as opposed to regular papers, it is difficult to see how the format can sustain itself. Granting agencies could help: if research proposals were made publicly available (some time afterwards, of course), anyone could confirm whether the hypotheses in the eventual papers were as originally formulated, or changed for convenience after recalcitrant data were collected. The problem is that what is best for science is not always what is best for individual scientists, but that should not stop us from trying to align their interests.

For more information about “Registered Reports”, see NeuroChambers’ extensive Q&A.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.

About the Author

George Lozano received a B.Sc. from the University of Guelph, earned an M.Sc. from the University of Western Ontario, and holds a Ph.D. from McGill University. He was subsequently awarded an FCAR postdoctoral fellowship, which he took to UC Riverside, and an NSERC postdoctoral fellowship, which he took to Simon Fraser University. Since then he has taken several teaching and research positions on three continents, in a concerted effort to add to his multicultural experiences. George’s main research deals with the evolutionary, behavioural, and physiological ecology of animals, mostly birds. Along with this empirical research, some of his recent work deals with the evolution and maintenance of multiple sexual signals, the adaptive explanation behind anorexia, evolutionary medicine, research policy, and bibliometrics. His personal website can be found at www.georgealozano.com.

 

Posted In: Academic publishing | Research ethics
