Blog Admin

January 24th, 2017

Embedding open science practices within evaluation systems can promote research that meets societal needs in developing countries

Researchers’ choices are inevitably affected by assessment systems. This often means pursuing publication in a high-impact journal and topics that appeal to the international scientific community. For researchers from developing countries, this often also means focusing on other countries or choosing one aspect of their own country that has such international appeal. Consequently, researchers’ activities can become dislocated from the needs of their societies. Valeria Arza and Emanuel López explain how embedding open science practices in research evaluation systems can help address this problem and ensure research retains local relevance. Collaborative research teams, transparency in peer review, open access, and diverse tools and channels of communication should all inform a new assessment scheme that has collaboration and openness as its backbones.

What type of research best suits the societal needs of developing countries? Quality research must satisfy internal criteria of excellence (validity, reliability) and external criteria of relevance. The concept of relevance needs to be unpacked into the many different meanings it has in a plural society: research may contribute to the stock of knowledge that pushes science ahead; serve policy purposes; have a role in transforming knowledge into market value; help solve problems that affect marginalised groups through direct engagement; or have a media impact that increases public interest in science; and so on. In the context of development, the socioeconomic relevance of research carries great weight.

Researchers’ choices are inevitably affected by assessment systems. Current science evaluation systems have moved towards the use of quantitative indicators, with researchers assessed on how much they publish and the perceived quality of those publications, the journal’s impact factor or a similar citation metric serving as a proxy for ‘quality’. On the face of it, this appears justified: publications are the main outcomes of scientific research, and scientists validate good-quality research by citing it.

By following these incentives, researchers understandably aim to publish articles in highly ranked journals. Editors’ behaviour is likewise driven by the incentive to maximise impact factors: they must assess whether a research topic will trigger sufficient worldwide interest to become widely cited. Researchers are thus steered towards topics that appeal to the international scientific community. For researchers from developing countries, the implication is that they should conduct research on other countries and not just their own, or stick to an aspect of their country that is interesting to the wider world. Regardless of how rigorous their research is, they should not draw only on local topics. In short, academic impact trumps excellence and relevance combined, and the cost is researchers deviating from paths they would have followed had the incentive structures been different.

Image credit: 2 by Mundial Perspectives. This work is licensed under a CC BY 2.0 license.

Take our personal experience as an example. We once submitted a paper on the potential role of agroecology in inclusive development. As part of their recommendations for revision, the journal editors requested that we contrast agroecology with ‘sustainable intensification’. Of course, at the time that was a hot topic on the international agenda and our country, Argentina, specialised in intensive agriculture. So it was a good field from which to draw global implications. But ‘sustainable intensification’ was hardly relevant to inclusive development, as only large farmers have the resources and access to the sophisticated machinery and data science services required for those practices.

The relation between quantitative indicator-based research assessment and research quality is very fragile, especially in developing countries. If researchers continue to be assessed using such narrow criteria, scientific research activities will become further dislocated from the needs of the society that contributes to science with financial, human, natural and social resources.

We argue that open science practices have an important role in improving research quality and therefore research evaluation schemes should promote them by offering the right incentives.

How can open science contribute to research quality?

In the making

Interaction among researchers amplifies collective intelligence simply by making it possible to share, validate and quickly rule out different ideas, assumptions, hypotheses or avenues of inquiry. Researchers learn and develop more quickly through interaction with others in their field, and sharing makes replication and reproducibility checks possible. Moreover, by increasing the quantity and diversity of actors participating in data collection or analysis, new cognitive resources and manpower become available, enabling a fresh look at problems, particularly those of local or neglected communities.

Reviewing

Open peer review (OPR) models publish reviewers’ names and their reports (as an appendix attached to the paper). This is intended to increase accountability and engagement: reviewers have an incentive to perform their job as diligently as possible, as their names become attached to the paper and form part of the scholarly record. It also presents to readers, with transparency, any tensions among the different meanings of relevance, for example those related to different development contexts.

Open access

Open access to scientific outcomes increases the common pool of knowledge and promotes research excellence, as new questions and better answers to old questions can be explored by standing on the shoulders of a taller giant. In terms of relevance, open access allows for appropriation by the general public and therefore promotes the democratization of knowledge. By sharing knowledge resources, communities become more autonomous and able to solve their own problems.

Communication

Improving the accessibility of scientific knowledge increases the societal impact of scientific research, as a larger number and wider variety of actors become endowed with knowledge resources. This could be achieved by diversifying tools and channels of communication: not only through articles published in journals but other formats such as infographics or videos, disseminated via social networks.

What about evaluation?

A new research assessment scheme could be designed with openness and collaboration as its backbones. The idea is to develop a battery of indicators (as suggested in the Leiden Manifesto), not a single composite one. Researchers would be assessed on different grounds, including a full range of possible scientific outcomes and not simply papers, projects and training. Some ideas for consideration:

  1. To assess researchers as reviewers: researchers must review and provide feedback on research produced by others. Assessing researchers on these grounds creates incentives for the adoption of OPR.
  2. To assess researchers as collaborators: collaboration and competition are two complementary forces that drive creativity and quality, but the former’s role is underplayed in current evaluation systems. We may compensate for that by accounting for the extent to which researchers i) share their final and intermediate outputs on online platforms and ii) collaborate with partners in the developing world so as to collectively construct meanings of relevance.
  3. To include new forms of evaluation that reward societal engagement by scientists: some databases measure mentions of journal publications on social networks, blogs and other electronic platforms, yet it is important to include forms of impact beyond journals. This may include, for example, a researcher’s public opinions and viewpoints communicated through social media, interviews, policy reports, etc. To this end, we need to experiment with novel uses of digital technologies and unstructured data. Techniques for data gathering and analysis are advancing rapidly alongside the increasing power of information and communication technologies. These new metrics could contribute to a variety of indicators that cover different meanings of relevance.

Much of the infrastructure needed for open science and to widen the variety of assessment tools, including software and hardware, already exists. Repositories have a large role to play in a new system. Interoperability across them must be built in to ensure the system works (this is crucial for OPR, for example) and to avoid the current situation whereby where you publish matters more than what you publish.

Big challenges remain. Vested interests, such as those of publishers, may hamper efforts to improve openness. The academic publishing industry is one of the most concentrated, with some studies showing that the top five publishers account for more than 50% of published papers. And while it seems clear that the current system will not last long as it stands, being inefficient and not conducive to research quality, it is likely that the industry will play its own cards in the reconfiguration process, as has already happened with open access. For example, Elsevier has recently obtained a patent on its reviewing method; if the big players move first, they could create barriers to more innovative ideas, like OPR.

We must get on board and remain alert in order to navigate the stormy waters ahead. It’s worth making the effort, not only for the sake of quality research but also to obtain better returns on public investment in science in the developing world.

This discussion was originally prepared for the dialogue session on Evaluation Policy organised by Ismael Rafols and Michael Hopkins during the SPRU 50th Anniversary Conference, 7-9 September 2016, and was then nurtured by the viewpoints of the other panellists who participated in the discussion: Judith Sutz, Jonathan Adams, Gavin Reddick and Paul Wouters. The authors thank them all and take responsibility for any errors and omissions.

Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the authors

Valeria Arza is an independent researcher at the National Scientific and Technical Research Council (CONICET – Argentina) and Director of the Research Center for the Transformation (CENIT – Argentina). She has a degree in Economics (Buenos Aires University), a Master’s degree in Development (London School of Economics), and a PhD in Science and Technology Policy (SPRU, Sussex University). Her main research topics are open science, knowledge networks, innovation and sustainability, and the politics of knowledge, science, technology and innovation. She is the principal investigator in a project on Open and Alternative Science, which is part of the Open & Collaborative Science in Development Network.

Emanuel López is a PhD student at Buenos Aires University, funded by the National Scientific and Technical Research Council (CONICET – Argentina). He has a degree in Economics (National University of Córdoba), a specialization in Data Mining and Knowledge Discovery (Buenos Aires University), and is a Master’s candidate in Economics (San Andrés University). He is based at the Research Center for the Transformation (CENIT – Argentina). His research interests focus on innovation and economic development, and he has recently started to explore processes of openness and collaboration in production in general and in science in particular.
