The ways in which research quality and research impact are defined and measured are deeply embedded in practices and concepts derived from the Global North. Drawing on examples from the Global South, Jon Harle argues that a fundamental shift is required that understands the value of research – and the institutions producing it – according to the contexts in which knowledge is needed, produced and used.
There is a quiet revolution taking place to shift the ways we think about and measure research. However, measures and metrics derived from the ways we publish research have a powerful grip on our research systems. The ease with which we can measure citations and track readership of journal papers has damaging effects. It has locked in a way of assessing the value of whole bodies of knowledge, as well as the careers of the researchers who produce them, based on work that appears in particular collections of journals predominantly published in and by the Global North.
This distortion in how we value research has been amplified by the incorporation of journal metrics into global university rankings – which systematically exclude centres of Southern knowledge production – and into systems for assessing and evaluating research.
As a result, judgements are made about critical scholarship, people or institutions with relatively little understanding of the environments in which knowledge is needed, produced and used across the world.
Impact in the South
Global South academics want to do work that is relevant to their societies, but they also want to become visible to their peers internationally and recognised by their institutions locally. The former requires that they invest time in work with communities, practitioners or policymakers, or devote energy to teaching that develops the talents of the next generation. The latter demands that they publish in “top journals” and do work that is sufficiently interesting to global science. They are forced either to choose between the two or to straddle both uncomfortably.
This is often felt more keenly by those who encounter further obstacles within the system: early-career researchers; women juggling professional and personal responsibilities; those in rural universities; researchers with fewer connections to global networks; and researchers working in under-funded disciplines.
Attempts have been made to recognise and measure universities’ broader social contributions, most recently in a new SDG impact ranking. However, it is striking that the top slots still go to relatively well-funded universities in Europe, North America and Australia. Success is still determined by existing privilege and wealth – and by language (English).
How publishing as a proxy for ‘excellence’ distorts how knowledge is produced
We could ignore the lists and the rankings, but the influence these have on our systems and cultures of research is acute. They influence the questions that are asked and the problems that are tackled, the methods deployed to gather and analyse data, the engagement of local communities in the process, and the ways in which knowledge is communicated and made available to those who could use it.
These systems of measurement also influence what universities choose to invest in: things that are most likely to result in more papers in “top journals” or a better position in the rankings, and not necessarily what will most benefit their staff, students and communities. And, in turn, they impact how the knowledge that is produced is valued by decision makers when weighing evidence to meet policy and operational needs.
Reimagining the value of research from the South
This conversation is shifting. Initiatives such as DORA have pushed the global science system to rethink the way research is measured. The Global Research Council has just published a report urging funders to rethink research assessment. IDRC’s Research Quality Plus framework has demonstrated that, when research is judged against criteria such as relevance to local knowledge needs and connections to local communities as well as scientific rigour, work undertaken by Southern researchers outperforms work produced by academics in the North. The JPPS framework has enabled Southern journals to assess their quality and communicate it to authors and readers; it has now been adopted by Nepal’s University Grants Commission to inform promotion applications. From Latin America, AmeliCA is developing its own indexing services and “responsible metrics” for Iberoamerican science.
Erika Kraemer-Mbula and colleagues turn the debate on its head to ask instead what excellence might look like if it was defined from the South. As they say, “What the South does not lack is scientific talent”; what it needs are ways to recognise and value its talent.
Our very idea of the university – dominated by ideas and types of institution developed in Europe and North America over several centuries – needs to shift. If knowledge is a product of the ecosystem, the community, the culture and the society in which it is produced, then the institutions that produce it should be too.
To cite but a few examples: in Northern Uganda, Gulu University’s mission is to become a university for and of the community – rooted in the knowledge needs of the people it seeks to serve, and involving them in the very processes of teaching, learning, researching and debating ideas. Uganda Martyrs University wants to cultivate a new generation of ethical leaders who will bring new approaches and new leadership into business, government, and social and community sector organisations. In Mexico, the Intercultural University of Veracruz seeks to equip young people to serve the needs of indigenous, rural communities under-served by the institutions of capital cities.
Rethinking value to make a new case for investment
These are, of course, questions of funding and investment as well as of intent. But rethinking how knowledge should be valued, and designing better ways of making that value visible and assessable, will be needed to make the argument for that investment.
What we need are opportunities for research systems in the South to define their own frameworks for assessing and judging the value of research, and for assembling the evidence to demonstrate that value and contribution. This means not just making improvements to the existing system, starting from the same metrics. Instead, it means starting with what those systems want from research and knowledge, and from the people and institutions that produce and communicate it, and from there deciding what to measure and how to evaluate the value of research. Perhaps that might even be done with an understanding that, in a world of complex, cross-community and cross-border challenges, we ought to encourage and reward collaboration and not just competition.
It could be a science council defining something within its own system, or a group of Southern science agencies acting together to facilitate this process. Importantly, they would need to work with experts in measurement and evaluation – to avoid the traps and failures of existing approaches – and to bring together a diverse group of users as well as producers, to develop something that really addresses the social value of research and knowledge.
We won’t get to a more equitable knowledge ecosystem if we don’t have a better way to recognise and reward what we most need from research, and what we really ought to value more.
Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns about posting a comment below.
Image Credit: Nigel Tadyanehondo via Unsplash.