The ways in which research quality and research impact are defined and measured are deeply embedded in practices and concepts derived from the Global North. Drawing on examples from the Global South, Jon Harle argues that a fundamental shift is required: one that understands the value of research – and the institutions producing it – according to the contexts in which knowledge is needed, produced and used.
There is a quiet revolution taking place to shift the ways we think about and measure research. However, measures and metrics derived from the ways we publish research have a powerful grip on our research systems. The ease with which we can measure citations and track readership of journal papers has damaging effects. It has locked in a way of assessing the value of whole bodies of knowledge, as well as the careers of the researchers who produce them, based on work that appears in particular collections of journals predominantly published in and by the Global North.
This distortion in how we value research has been amplified by the incorporation of journal metrics into global university rankings – which systematically exclude centres of Southern knowledge production – and into systems for assessing and evaluating research.
As a result, judgements are made about critical scholarship, people or institutions with relatively little understanding of the environments in which knowledge is needed, produced and used across the world.
Impact in the South
Global South academics want to do work that is relevant to their societies, but they also want to become visible to their peers internationally and recognised by their institutions locally. The former requires that they invest time in work with communities or groups of practitioners or policymakers, or devote energy to teaching to develop the talents of the next generation. The latter demands that they publish in “top journals” and do work that is sufficiently interesting to global science. They are forced either to choose between the two or to straddle both uncomfortably.
This is often felt more keenly by those who encounter further obstacles within the system: early-career researchers; women juggling professional and personal responsibilities; those in rural universities; researchers with fewer connections to global networks; and researchers working in under-funded disciplines.
Attempts have been made to recognise and measure universities’ broader social contributions, most recently in a new SDG impact ranking. However, it is striking that the top slots still go to relatively well-funded universities in Europe, North America and Australia. Success is still determined by existing privilege and wealth – and by language (English).
How publishing as a proxy for ‘excellence’ distorts how knowledge is produced
We could ignore the lists and the rankings, but the influence these have on our systems and cultures of research is acute. They influence the questions that are asked and the problems that are tackled, the methods deployed to gather and analyse data, the engagement of local communities in the process, and the ways in which knowledge is communicated and made available to those who could use it.
These systems of measurement also influence what universities choose to invest in: things that are most likely to result in more papers in “top journals” or a better position in the rankings, and not necessarily what will most benefit their staff, students and communities. And, in turn, they impact how the knowledge that is produced is valued by decision makers when weighing evidence to meet policy and operational needs.
Reimagining the value of research from the South
This conversation is shifting. Initiatives such as DORA have pushed the global science system to rethink the way research is measured. The Global Research Council has just published a report urging funders to rethink research assessment. IDRC’s Research Quality Plus framework has demonstrated that, when research is judged against criteria such as relevance to local knowledge needs and connections to local communities as well as scientific rigour, work undertaken by Southern researchers outperforms work produced by academics in the North. The JPPS framework has enabled Southern journals to assess quality and communicate that to authors and readers. It has now been adopted by Nepal’s University Grants Commission to support promotion applications. From Latin America, AmeliCA is developing its own indexing services and “responsible metrics” for Iberoamerican science.
Erika Kraemer-Mbula and colleagues turn the debate on its head to ask instead what excellence might look like if it was defined from the South. As they say, “What the South does not lack is scientific talent”; what it needs are ways to recognise and value its talent.
Our very idea of the university – dominated by ideas and types of institution developed in Europe and North America over several centuries – needs to shift. If knowledge is a product of the ecosystem, the community, the culture and the society in which it is produced, then the institutions that produce it should be too.
To cite but a few examples: in Northern Uganda, Gulu University’s mission is to become a university for and of the community – to be rooted in the knowledge needs of the people it seeks to serve, and to involve them in the very processes of teaching, learning, researching and debating ideas. Uganda Martyrs University wants to cultivate a new generation of ethical leaders to bring new approaches and new leadership into business, government, and social and community sector organisations. In Mexico, the Intercultural University of Veracruz seeks to equip young people to serve the needs of indigenous, rural communities under-served by the institutions of capital cities.
Rethinking value to make a new case for investment
These are, of course, questions of funding and investment as well as of intent. But rethinking how knowledge should be valued, and designing better ways of making that value visible and assessable, will be needed to make the argument for that investment.
What we need are opportunities for research systems in the South to define their own frameworks for assessing and judging the value of research, and for assembling the evidence to demonstrate that value and contribution. This cannot be achieved simply by making improvements to the existing system, starting from the same metrics. Instead, it should begin with what they want from research and knowledge, and from the people and institutions that produce and communicate it, and from there decide what to measure and how to evaluate the value of research. Perhaps that might even be done with an understanding that, in a world of complex, cross-community and cross-border challenges, we ought to encourage and reward collaboration and not just competition.
It could be a science council defining something within its own system, or a group of Southern science agencies acting together to facilitate this process. Importantly, they would need to work with experts in measurement and evaluation – to avoid the traps and failures of existing approaches – and to bring together a diverse group of users as well as producers, to develop something that really addresses the social value of research and knowledge.
We won’t get to a more equitable knowledge ecosystem if we don’t have a better way to recognise and reward what we most need from research, and what we really ought to value more.
Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics.
Image Credit: Nigel Tadyanehondo via Unsplash.
I feel this acutely because I edit a journal in a domain that attracts a lot of authors from the global South, featuring many case studies from Africa, Latin America and different parts of Asia. We started the journal in the global North – but only just, close to the US–Mexico border – and have always published in three languages, including Spanish.
This blog post is generally correct, but the additional point is that editors of decent journals around the world have an obligation to encourage global South authors and to improve their manuscripts where possible. Line editing and checking bibliographies are things I do. The problem I have is where a manuscript lacks sufficient well-collected and organized empirical data in the first place, or makes completely unsubstantiated arguments or claims [letting these through to publication is a particular problem in STEM]. Meanwhile, developing a local case study of national rather than international interest can be fine, as long as the context and the institutional names and dynamics are explained. We decline to publish when the empirics or arguments are weak or unsubstantiated. So should INASP journals! This goes for any journal or author, and of course includes submissions by authors originally from the global South who have been working in global North universities for many years [these make up a large percentage of submissions]. As we have been Open Access since 1994, what we do publish is very widely accessible.
I think my point is that being from the global South does not exempt authors from meeting standards for solid research and presentation. There are universities in the GS with far better researchers and facilities than my own, although this is of course not always the case. What we can help with is delivery, and when published, a small amount of prestige [we need none ourselves: our journal has no budget and is a labor of love]. For us to receive a good manuscript from any part of the world is a win for the journal and author. But we are not dropping our standards, either.
No lowering of standards should be made in the name of equity; I get the impression this idea is sometimes being flirted with. Perhaps those who excel in the South tend to relocate to the North, contributing to the difference in numbers, which is then mistaken for some undeserved disregard. Or perhaps these kinds of topics are written about because they are self-propelling, owing to a fear of pushing back. Whoever broaches that topic is a hero indeed. Fearless honesty is paramount in research; absence of emotion is a close second.
It seems like this is an emotionally overheated response to concepts like trying to measure the value of scholarly production by how it meets local needs. Can you explain how this would require “lowering standards”? It seems like this is simply a different standard.
Thanks for the comments. Glad the piece resonated with your experiences, Simon. Journals like yours are a vital part of the ecosystem too: scholar-led and managed, and going to commendable lengths to be inclusive and to provide that very valuable type of direct support. It’s an ethos that is familiar to many of the journals with which we work (we don’t publish any journals ourselves, only support editors to strengthen theirs). They are often scholar-led too, and draw on significant volunteer labour and love.
The rigour of the research, and the data and arguments presented, are of course critical. One way we aim to support that from INASP is through the AuthorAID platform (www.authoraid.info), which allows researchers, particularly those earlier in their careers, to access support, advice and mentorship from peers and from more experienced researchers and editors.
My intention certainly wasn’t to suggest that standards should be lowered in the name of equity. In fact, the Journal Publishing Practices and Standards framework (linked in the post) is a composite of standards derived from international best practice in journal publishing. And the Global South scholars that we work with are determined to be part of the global scientific conversation, and want to ensure their work meets global scientific standards to do so.
The key issue, I think, is the many subtle ways in which Northern standards become normative and can act to exclude knowledge from the South – whether deliberately or not – or even to define what kind of knowledge is needed and should be valued in those countries and contexts. That is where we need more diverse ways of making those assessments, I think, rather than the shortcuts that journal-based metrics offer. There are many stories of research done by Northern scholars working in the South that seems “scientifically excellent” but is of doubtful social value or cultural appropriateness.
The IDRC framework – Research Quality Plus – addresses this by starting from scientific rigour (no lowering of standards) and then adding measures of relevance (to the social and economic needs of the particular country or community in question) and of positioning for impact (to account for the ways in which research can be designed and done so as to make its results more accessible, valuable and actionable for those with the ability to effect change).