As a recent consultation on how to monitor open science practices draws to a close, Louise Bezuidenhout, Paola Castaño, Sabina Leonelli, Ismael Rafols and Andrea Vargiu argue that if monitoring frameworks aim to capture the full breadth of open science as a practice, they should include case studies.
Open science is moving from theory to implementation, and there is currently a profusion of efforts designed to monitor its progress. Recently, one such initiative, the Open Science Monitoring Initiative (OSMI), a collaboration between the French Ministry of Higher Education and Research, Université de Lorraine, Inria, UNESCO, SPARC Europe, PLOS and Charité, launched a global consultation on general principles for monitoring open science (its findings will be discussed at an open meeting on the 4th of December). We recognise the significance of the UNESCO Recommendation on Open Science and of the OSMI principles that guide this consultation, but how exactly should these principles be implemented?
As reported in UNESCO’s Open Science Outlook, most current monitoring focuses on tracking the creation of open research products. In practice this predominantly means counting open publications. Even when these monitoring approaches include access to journal articles and the reuse of datasets and software, they provide a narrow perspective on the broader goals outlined in the UNESCO Recommendation.
There is, however, a wide consensus that open science involves a variety of activities, forms of engagement and dialogue with societal actors that go beyond outputs (Fig.1). These processes of participation and dialogue across the science system take different forms, operate at different scales, and often occur informally. Overlooking these aspects of open science could produce perverse incentives, much as narrow indicators have done in existing research evaluation systems.
We propose that, as Open Science implies a wider transformation of the research system, it should also imply a transformation of its monitoring. To this end, we argue that case-based approaches to the monitoring of Open Science practices are essential.
Fig.1: Open science actors. Source: Open Science Outlook 1 (2023). Icons: Gan Khoon Lay, Miroslav Kurdov, Adrien Coquet, Wynne Nafus Sayer, courtesy of The Noun Project, CC BY 3.0
Case studies in open science monitoring systems
Case studies investigate phenomena in their real-life context. There have been several efforts to document in detail existing open science practices and initiatives through case studies – some published, others in development.
Their use in open science monitoring has two crucial advantages. First, it makes clear that the purpose of monitoring is not competition or benchmarking, but learning how research systems can be improved. Second, because case studies can identify key processes and pathways to impact, they make it possible to describe why and how specific open science formats led to specific uses and benefits.
These explanations overcome a major blind spot of current monitors, which describe the outputs of the scientific system but cannot explain how those outputs were produced, whether they were used, or whether they contributed to positive scientific or societal outcomes. A better understanding of outcomes is crucial to making evidence-informed decisions about where to invest resources, in systems that are context-specific and avoid ‘one size fits all’ approaches.
However, case studies need not be used to the exclusion of all other methods. Surveys could also serve as a useful complement to existing monitoring methods, as has been shown, for example, for the sharing of research materials, open peer review, policy use, engagement with non-academics, integrity, and broader perceptions and habits of open science.
Learning from case studies in research assessment
Case studies have previously been used in national monitoring and evaluation exercises, precisely for the assessment of phenomena where not only research products, but also processes and pathways to impact, were seen as important. The most prominent example is the UK’s Research Excellence Framework (REF), which uses case studies to assess research impact. Other examples include Australia’s Engagement and Impact (EI) assessment, Hong Kong’s Research Assessment Exercise (RAE) and Italy’s evaluation exercises run by the National Agency for the Evaluation of Universities and Research Institutes (ANVUR).
While case studies have proven effective in assessing research impact, there are drawbacks. Large-scale case study approaches are both time-consuming and resource-intensive. Their empirical robustness is questioned due to their context specificity, as is their generalisability. Further challenges are associated with the curation of case studies. For these to be more than proof of activity, they need to be responsibly curated and searchable. There is therefore a need to develop FAIR data standards and controlled vocabularies to facilitate cross-case searches.
In spite of these limitations, case studies have been shown to be useful in research assessments that concern processes and outcomes and face a diversity of contributions that cannot be captured by a small set of indicators. At the same time, it is important to remember that the use of quantitative indicators is itself the result of particular choices. While case studies may not be generalisable in the same way as quantitative metrics, they capture key processes and pathways. The more case studies we accumulate, the stronger the sense of common threads and clusters we can build, in areas that could not otherwise be analysed or even identified.
An outline for case study based monitoring of open science
In our submission to OSMI’s consultation we provide a template for how a case study approach to open science might look in practice. The goal is to combine multiple-choice answers in the format of drop-down menus with an open narrative in which the various organisations or initiatives can describe their practices and challenges.
This narrative aspect is central, as it gives those involved in populating the template an opportunity to reflect on the activities and challenges that led to specific outcomes. These reflections are crucial to grasping the different kinds of participation and engagement in open science that quantitative approaches struggle to capture. Through thematic analysis of these cases, it will also be possible to collate the data and track activities at various levels.
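As a purely illustrative sketch, a case-study record of this kind could pair drop-down fields drawn from a controlled vocabulary with a free-text narrative, making thematic collation across cases straightforward. The field names, vocabulary terms and example cases below are our assumptions, not part of the OSMI template:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical controlled vocabulary for one drop-down field;
# a real template would define such terms with the community.
PRACTICE_TYPES = {"open_data", "open_software", "citizen_science",
                  "open_peer_review", "community_engagement"}

@dataclass
class CaseStudy:
    """One case-study record: structured fields plus an open narrative."""
    title: str
    practice_types: set                         # drop-down choices
    narrative: str                              # free-text account of processes and outcomes
    themes: list = field(default_factory=list)  # tags added during thematic analysis

    def __post_init__(self):
        # Enforce the controlled vocabulary so records stay cross-searchable.
        unknown = self.practice_types - PRACTICE_TYPES
        if unknown:
            raise ValueError(f"Terms outside controlled vocabulary: {unknown}")

def collate_themes(cases):
    """Count theme tags across cases to surface common threads."""
    return Counter(tag for case in cases for tag in case.themes)

# Two invented example records, for illustration only.
cases = [
    CaseStudy("Community air-quality sensing",
              {"citizen_science", "open_data"},
              "Residents co-designed sensors; data informed a local policy review.",
              themes=["policy_use", "co_design"]),
    CaseStudy("Genomics pipeline release",
              {"open_software", "open_data"},
              "Releasing the pipeline enabled reuse by two external labs.",
              themes=["reuse"]),
]

print(collate_themes(cases).most_common())
```

The design choice mirrors the argument above: the structured fields support aggregation and cross-case search, while the narrative preserves the context-specific account of why and how outcomes arose.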
Sabina Leonelli is currently leading a pilot of this template using case studies from the project Philosophy of Open Science for Diverse Research Environments. The project is structured around case studies that examine practices such as crop data management, citizen science, global healthcare, data-intensive space biology, ecology, and modes of governance of large databases and/or citizen science initiatives. We aim to use findings from these case studies to populate the template and conduct a cross-case analysis, in order to track the impact of open science practices in these settings and, as described above, identify the processes and pathways to this impact.
Open science monitors could and should be enriched with the inclusion of case studies. If, as is widely acknowledged, the goals of open science extend beyond simply producing open outputs, then we need a monitoring framework capable of reflecting this. The Open Science Monitoring Initiative therefore provides a unique and timely opportunity to diversify how we measure and understand the current shift towards an open science paradigm.
Image Credit: THICHA SATAPITANON on Shutterstock.