
Miloš Fišar

Ben Greiner

Christoph Huber

Elena Katok

Ali I. Ozkes

January 23rd, 2024

How journal communities can ensure reproducible social science


In 2019 the journal Management Science introduced a policy requiring authors to make their study materials available. Taking this as an opportunity to mobilise a journal community to assess the policy’s impact, Miloš Fišar, Ben Greiner, Christoph Huber, Elena Katok and Ali I. Ozkes analyse findings from over 500 reproducibility studies to assess the state of reproducibility in their field.


Research in business and management aims to provide sound and credible evidence upon which business and policy leaders can base their decisions. But to what extent can we trust scientific results? The answer depends on whether the results are transparently documented (reproducible) and whether they are robust and broadly applicable (replicable). While replicability is ultimately an empirical question, to be explored in further studies, reproducibility is a matter of scientific rigour and provides the groundwork for replicability. In our recent article, Reproducibility in Management Science, we explored these issues by estimating, for the first time, the reproducibility of a broad sample of almost 500 studies in Management Science, a leading academic journal in business and management.

To enable verification of scientific results, in 2019 Management Science introduced a policy requiring authors to provide their study materials (that is, their data, code, and everything else needed for the empirical or computational analyses), with some exceptions. In the Management Science Reproducibility Project, we coordinated a collaborative effort by a community of more than 700 experts from relevant fields of research to reproduce, or attempt to reproduce, a large and representative sample of articles published before and after this policy change. The findings of this project, reported in our article, describe the current state of affairs, highlight the critical role of disclosure policies in scholarly research, and allow us to put forward suggestions for improving the reliability of research results.

Fig.1 shows our main findings: the percentage of studies that can be fully or largely reproduced, both before and since the introduction of the disclosure policy.

Fig.1. Reproducibility rate before and since the introduction of the 2019 code and data disclosure policy (percentage).

Consider the initial situation before the introduction of the policy, when providing code and data was voluntary. Because the materials needed to reproduce a study’s results were not made available for 88% of articles, only 7% of the 332 studies could be reproduced (see panel (a) in Fig.1). Among the 40 studies for which the authors did voluntarily provide materials, the reproduction rate was 55% (see panel (b) in Fig.1).
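To see how these pre-policy numbers fit together, the short Python sketch below (our illustration, not part of the study’s replication materials) checks that the rounded figures above are mutually consistent: with materials missing for roughly 88% of 332 articles, a 55% reproduction rate among the remaining 40 studies implies an overall rate of about 7%.

```python
# A minimal consistency check (illustrative only), using the rounded
# pre-policy figures reported above; this is not code from the study itself.

n_studies = 332              # pre-policy sample size (panel (a))
n_with_materials = 40        # studies with voluntarily provided materials
rate_given_materials = 0.55  # reproduction rate among those 40 (panel (b))

# Share of articles without materials: roughly 88%
share_without = 1 - n_with_materials / n_studies
print(f"Articles without materials: {share_without:.0%}")        # -> 88%

# Implied overall pre-policy reproducibility: roughly 7%
reproduced = rate_given_materials * n_with_materials             # ~22 studies
print(f"Overall pre-policy rate: {reproduced / n_studies:.0%}")  # -> 7%
```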

In our sample of 419 studies published between the introduction of the policy and January 2023, we observe a remarkable improvement: reproducibility climbed to almost 68% (see panel (c) in Fig.1). When, in addition, all data were available to the assessors and they could meet the software and hardware requirements, 95% of articles could be reproduced (see panel (d) in Fig.1).

These results reveal that the largest remaining challenge to reproducibility since the introduction of the disclosure policy is data accessibility. For a significant number of studies in our sample, the data were not available to the assessors: datasets may have been under a non-disclosure agreement, withheld for privacy reasons, or sourced from subscription databases or other commercial providers to which the assessor did not have access. Fig.2 displays the main reasons for limited reproducibility. Besides data accessibility, obstacles to reproduction include missing or incorrect code, insufficient documentation, and the complexity of technical requirements.

Fig.2. Reasons for non-reproducibility.

Our findings emphasise the critical importance of data and code disclosure policies in academic journals. Such policies seem not only to be associated with a considerably higher rate of reproducibility, but also encourage a culture of openness and integrity in academic publishing. They are essential for producing reliable and trustworthy research, which in turn informs sound decision-making in practice.

Several concrete steps can be taken to raise reproducibility rates further. First, enhancing data availability, for example by including de-identified data in replication packages, forming agreements with subscription databases for data access, or providing data through specialised infrastructures that restrict use to specific purposes. Second, refining the review process for code and data; this might involve making the acceptance of papers conditional upon the approval of replication packages and integrating code and data review as an essential step in the manuscript review process at academic journals. Third, professionalising code and data review, either in-house at journals or publishers, or by delegating reproducibility certification to specialised third-party agencies.

Such institutional reforms, along with collaborative effort and awareness across the academic community, are key to enhancing the robustness and reliability of results published in academic journals in business and economics. Management Science has already come a long way, from a time when hardly any study materials were available to an enforced disclosure policy requiring every article to provide its study materials (even if exceptions are allowed). However, this journey needs to continue, with sufficient resources made available by publishers, to bring reproducibility to 100%.

Reproducibility is an essential feature of reliable research results, but it cannot guarantee replicability: it does not imply that redoing a study – in a different context, with different data, analyses, or research designs – will yield the same outcomes and conclusions. Reproducibility does, however, lay the foundations, ensuring the validity of reported results and the provision of materials that enable replication attempts and robustness checks, thus supporting our aspiration to reliable and credible scientific evidence.

 


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Nanderdewijk on Shutterstock.



About the authors

Miloš Fišar

Miloš Fišar is an Assistant Professor in Economics at Masaryk University and Code and Data Associate Editor for Management Science. His research interests revolve around the fields of behavioural, experimental, and public economics.

Ben Greiner

Ben Greiner is Professor of Empirical Business Research at WU Vienna University of Economics and Business and Code and Data Editor for Management Science. His research focuses on Behavioural Economics, Market Design, and Strategic Behaviour.

Christoph Huber

Christoph Huber is an Assistant Professor at WU Vienna University of Economics and Business and Code and Data Associate Editor for Management Science. His research lies at the intersection of behavioural and experimental methods and financial economics, as well as in replicability, credibility, and transparency in research.

Elena Katok

Elena Katok is Ashbel Smith Professor of Operations Management at UT Dallas. In her research, she works on market design and strategic procurement as well as on behavioural operations management.

Ali I. Ozkes

Ali I. Ozkes is an Associate Professor at SKEMA Business School and Code and Data Associate Editor for Management Science. His research focuses on experimental economics, behavioural game theory, social choice theory, AI ethics and behaviour, text-mining, and political economy.
