
Louise Bezuidenhout

Paola Castaño

Sabina Leonelli

Ismael Rafols

Andrea Vargiu

November 27th, 2024

Case studies are vital to monitoring the development of open science


Estimated reading time: 7 minutes


As a recent consultation on how to monitor open science practices draws to a close, Louise Bezuidenhout, Paola Castaño, Sabina Leonelli, Ismael Rafols and Andrea Vargiu argue that if monitoring frameworks aim to capture the widest dimensions of open science as a practice, they should include case studies.


Open science is moving from theory to implementation and there is currently a profusion of efforts designed to monitor its progress. Recently, one such initiative, the Open Science Monitoring Initiative (OSMI), a collaboration between the French Ministry of Higher Education and Research, Université de Lorraine, Inria, UNESCO, SPARC Europe, PLOS and Charité, launched a global consultation on general principles for monitoring open science (they will be discussing its findings at an open meeting on the 4th of December). We recognise the significance of the UNESCO Recommendation on Open Science and of the OSMI principles that guide this consultation, but how exactly should these principles be implemented?

As reported in UNESCO’s Open Science Outlook, most current monitoring focuses on tracking the creation of open research products. In practice this predominantly means counting open publications. Even when these monitoring approaches include access to journal articles and the reuse of datasets and software, they provide a narrow perspective on the broader goals outlined in the UNESCO Recommendation.


There is, however, a wide consensus that open science involves a variety of activities, forms of engagement and dialogue with societal actors that go beyond outputs (Fig.1). These processes of participation and dialogue across the science system take different forms, operate at different scales, and often occur informally. Overlooking these aspects of open science could produce perverse incentives, much as narrow output metrics have done in existing research evaluation systems.

We propose that, since open science implies a wider transformation of the research system, it should also imply a transformation of its monitoring. To this end, we argue that case-based approaches to the monitoring of open science practices are essential.

Fig.1: Open science actors. Source: Open Science Outlook 1 (2023). Icons: Gan Khoon Lay, Miroslav Kurdov, Adrien Coquet, Wynne Nafus Sayer, courtesy of The Noun Project, CC BY 3.0

Case studies in open science monitoring systems

Case studies investigate phenomena in their real-life context. There have been several efforts to document in detail existing open science practices and initiatives through case studies – some published, others in development.

Their use in open science monitoring has two crucial advantages. First, it makes clear that the purpose of monitoring is not competition or benchmarking, but learning how research systems can be improved. Second, because case studies can identify key processes and pathways to impact, they make it possible to describe why and how specific open science formats led to specific uses and benefits.


These explanations overcome a major blind spot of current monitors, which describe the outputs of the scientific system but cannot explain how those outputs were produced, whether they were used, or whether they contributed to positive scientific or societal outcomes. A better understanding of outcomes is crucial to making evidence-informed decisions about where to invest resources, in ways that are context specific and avoid ‘one size fits all’ approaches.

However, case studies do not need to be used to the exclusion of all other methods. Surveys could also be useful as a complement to existing monitoring methods, as shown for example regarding the sharing of research materials, open peer review, policy use, engagement with non-academics, integrity, or broader perceptions and habits of open science.

Learning from case studies in research assessment

Case studies have previously been used in national exercises in monitoring and evaluation, precisely for the assessment of phenomena where not only research products, but also processes and pathways to impact, were seen as important. The most prominent example is the UK’s Research Excellence Framework (REF), which uses case studies to assess research impact. Other examples include Australia’s Engagement and Impact (EI) assessment, Hong Kong’s Research Assessment Exercise (RAE) and Italy’s National Evaluation Agency of Higher Education and Research (ANVUR).

While case studies have proven to be effective in assessing research impact, there are drawbacks. Large-scale case study approaches are both time-consuming and resource-intensive. Their empirical robustness is questioned because of their context specificity, as is their generalisability. Further challenges are associated with the curation of case studies. For these to be more than proof of activity, they need to be responsibly curated and searchable. There is therefore a need to develop FAIR data standards and controlled vocabularies that facilitate cross-case searches.
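As a purely illustrative sketch of what such a controlled vocabulary could enable, the short Python example below tags case records with a fixed set of practice terms so that a curated collection can be searched across cases. The vocabulary terms, field names and case records are hypothetical assumptions of ours, not an agreed OSMI or FAIR standard.

```python
# Illustrative sketch only: a hypothetical controlled vocabulary for tagging
# open science case studies so that a curated collection can be searched
# across cases. None of these terms are an agreed OSMI or FAIR standard.

PRACTICE_VOCABULARY = {
    "open_data", "open_access_publishing", "open_peer_review",
    "citizen_science", "open_source_software", "open_educational_resources",
}

def validate_tags(tags):
    """Reject any tag that is not in the controlled vocabulary."""
    unknown = set(tags) - PRACTICE_VOCABULARY
    if unknown:
        raise ValueError(f"Unknown practice tags: {sorted(unknown)}")
    return sorted(set(tags))

def find_cases(cases, tag):
    """Return all case records tagged with a given practice."""
    return [case for case in cases if tag in case["practices"]]

# Two invented case records and a simple cross-case search.
cases = [
    {"title": "Community seed bank data sharing",
     "practices": validate_tags(["open_data", "citizen_science"])},
    {"title": "Journal-wide open peer review pilot",
     "practices": validate_tags(["open_peer_review"])},
]
print([c["title"] for c in find_cases(cases, "citizen_science")])
```

The design choice here is simply that shared, validated tags are what make cases comparable and findable across collections, whatever the eventual standard looks like.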


In spite of these limitations, case studies have been shown to be useful in research assessments that concern processes and outcomes and that face a diversity of contributions which cannot be captured by a small set of indicators. At the same time, it is important to remember that the use of quantitative indicators is also the result of particular choices. While case studies may not be generalisable in the same way as quantitative metrics, they capture key processes and pathways. The more case studies we gather, the clearer the common threads and clusters that emerge, in areas that could not otherwise be analysed or even identified.

An outline for case study based monitoring of open science

In our submission to OSMI’s consultation, we provide a template for what a case study approach to open science monitoring might look like in practice. The goal is to combine multiple-choice answers, in the form of drop-down menus, with an open narrative in which the various organisations or initiatives can describe their practices and challenges.

This narrative aspect is central, as it gives those involved in populating the template an opportunity to reflect on the activities and challenges that led to specific outcomes. These reflections are crucial to grasping the different kinds of participation and engagement in open science that quantitative approaches struggle to apprehend. Through thematic analysis of these cases, it will also be possible to collate the data and track activities at various levels.
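To make the idea concrete, here is a minimal sketch of how drop-down answers, an open narrative and thematic tags might sit together in a single case record. The field names, menu options and example case are hypothetical illustrations of ours, not the actual OSMI template.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical drop-down menus: fixed options an organisation would pick from.
class ActorType(Enum):
    UNIVERSITY = "university"
    FUNDER = "funder"
    CIVIL_SOCIETY = "civil_society"
    INFRASTRUCTURE = "infrastructure"

class EngagementScale(Enum):
    LOCAL = "local"
    NATIONAL = "national"
    INTERNATIONAL = "international"

@dataclass
class OpenScienceCase:
    title: str
    actor_type: ActorType   # multiple-choice field (drop-down)
    scale: EngagementScale  # multiple-choice field (drop-down)
    narrative: str          # open text: practices, challenges, outcomes
    # Themes added later through thematic analysis, enabling cross-case collation.
    themes: list = field(default_factory=list)

# An invented example record, for illustration only.
case = OpenScienceCase(
    title="Participatory water-quality monitoring network",
    actor_type=ActorType.CIVIL_SOCIETY,
    scale=EngagementScale.LOCAL,
    narrative=("Volunteers collected and openly shared sensor data; the main "
               "challenge was sustaining participation between funding cycles."),
)
case.themes = ["community participation", "data sharing", "funding continuity"]
print(case.actor_type.value, case.scale.value, case.themes)
```

The multiple-choice fields make records countable and comparable across initiatives, while the narrative preserves the context that explains why and how particular outcomes were reached.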


Sabina Leonelli is currently leading a pilot of this template using case studies from the project Philosophy of Open Science for Diverse Research Environments. The project is structured around case studies that examine practices such as crop data management, citizen science, global healthcare, data-intensive space biology, ecology, and modes of governance of large databases and/or citizen science initiatives. We aim to use the findings from these case studies to populate the template and conduct a cross-case analysis, in order to track the impact of open science practices in these settings and, as described above, identify the processes and pathways to this impact.

Open science monitors could and should be enriched with the inclusion of case studies. If, as is widely acknowledged, the goals of open science extend beyond simply producing open outputs, then we need a monitoring framework that is capable of recognising this. The Open Science Monitoring Initiative therefore provides a unique and timely opportunity to diversify how we measure and understand the current shift towards an open science paradigm.


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: THICHA SATAPITANON on Shutterstock



About the authors

Louise Bezuidenhout

Louise Bezuidenhout is a senior researcher at CWTS Center for Science and Technology Studies at the University of Leiden. She is a social science researcher who specializes on issues relating to Open Science and data sharing. Her research is broadly oriented around themes such as justice and access, inclusion and marginalization and equity within global research, with a specific focus on the African continent.

Paola Castaño

Paola Castaño is a sociologist and research fellow at the Egenis Centre for the Study of Life Sciences at the University of Exeter. She is part of the “Philosophy of Open Science for Diverse Research Environments” project, where she leads a case study on data-sharing practices in space biology in collaboration with the NASA Open Science Data Repository.

Sabina Leonelli

Sabina Leonelli is a Professor and Chair of Philosophy and History of Science and Technology at the Technical University of Munich, where she directs the Ethical Data Initiative and co-directs the TUM Public Science Lab. She is the principal investigator of the project “Philosophy of Open Science for Diverse Research Environments”, funded by the European Research Council. Her work focuses on data-intensive science and empirical inquiry, the philosophy and history of Open Science, organisms as research models, and the plant sciences.

Ismael Rafols

Ismael Rafols is a senior researcher at the Centre for Science and Technology Studies (CWTS) of Leiden University and leads the UNESCO Chair in Diversity and Inclusion in Global Science. He works on science policy developing novel approaches that help in fostering epistemic pluralism, broadening participation, and widening the distribution of the benefits from science.

Andrea Vargiu

Andrea Vargiu is full Professor of Sociology at the University of Sassari (Italy), where he directs the Foist Laboratory for Social Policies and Training Processes: a training and research unit with almost 50 years of experience in university engagement for transformative change. Andrea has extensive experience in science-society cooperation, and a solid record in participatory and responsible research, and in evaluation of higher education and social policies.

Posted In: Featured | Measuring Research | Open Research | Research evaluation
