October 30th, 2017

Research assessments based on journal rankings systematically marginalise knowledge from certain regions and subjects


Many research evaluation systems continue to take a narrow view of excellence, judging the value of work based on the journal in which it is published. Recent research by Diego Chavarro, Ismael Ràfols, and colleagues shows how such systems undervalue research relevant to important social, economic, and environmental issues, and prove detrimental to its production. These systems also reflect the biases of journal citation databases, which focus heavily on English-language research from the USA and north and western Europe. Moreover, topics covered by these databases often relate to the interests of industrial stakeholders rather than those of local communities. More inclusive research assessments are needed to overcome the ongoing marginalisation of some peoples, languages, and disciplines and promote engagement rather than elitism.

Many research evaluation systems are built around vague notions of excellence. Excellent research is presented as that which advances the frontiers of science. Often, governments reward researchers who can show that their publications are “excellent”. The ultimate aim of this reward system is to support only the best research and researchers and to discourage “poor quality” research.

A common practice in many research evaluation systems is to judge the value of a work based on the journal in which it is published. These assessments rest on the assumption that research published in prestigious (“top”) journals is excellent and should therefore be rewarded. The DORA declaration and the Leiden Manifesto have warned against this practice, yet journal-based evaluation systems continue to be used in many countries, including Spain, Brazil, Colombia, and South Africa, and in influential global rankings such as the Shanghai Jiao Tong University ranking and the THE World University Rankings. However, our recent research highlights the problems of relying on journal rankings for research assessment, and seriously calls this practice into question.
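
To make the mechanics of this practice concrete, here is a minimal sketch of the ranking logic such systems rely on: journals are sorted by a citation indicator, split into quartiles, and a paper simply inherits its journal’s quartile. The journal names and impact factors below are invented for illustration, and real systems (for example, the Scimago quartiles) differ in detail.

```python
# Minimal sketch of journal-based evaluation (illustrative data only):
# journals are sorted by a citation indicator and split into quartiles
# (Q1 = top 25%); a paper then inherits its journal's quartile.
import pandas as pd

journals = pd.DataFrame({
    "journal": ["A", "B", "C", "D", "E", "F", "G", "H"],
    "impact_factor": [8.2, 5.1, 3.4, 2.2, 1.6, 0.9, 0.5, 0.2],
})

# pd.qcut splits the ranked values into four equal-sized bins; the lowest
# bin gets the first label, so the labels run Q4 (bottom) to Q1 (top).
journals["quartile"] = pd.qcut(journals["impact_factor"], 4,
                               labels=["Q4", "Q3", "Q2", "Q1"])
print(journals)

# Under such a system, a paper in journal "G" is rated Q4 regardless of
# its own quality or local relevance – which is exactly the problem.
```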

One study shows that journal-based research evaluation systems underestimate research that is relevant for important social, economic, and environmental issues, thus marginalising some types of knowledge. For instance, passion fruit horticulture, oil-palm diseases, specific pathogens that attack red carnation, botanical studies of biodiversity, and Latin American business history are subjects published mainly in journals that score low in journal rankings. Since these topics do not fit journal-based ideas of excellence, they receive little governmental recognition and resources, and are not considered desirable from the perspective of public accountability. However, these issues are essential to the economic, environmental, and social development of regions such as Latin America.

Image credit: dirks LEGO world map 7 south america by dirkb86. This work is licensed under a CC BY 2.0 license.

Another study shows that the inclusion of journals in the most prestigious citation database, the Web of Science (WoS), is not based on objective criteria. Specifically, the study shows that a journal’s country, language, and discipline influence its probability of inclusion, regardless of its editorial quality or scientific impact. For instance, Colombian journals have lower chances of being indexed by WoS than Spanish journals of similar scientific impact and editorial quality. These biases are reflected in WoS’s overall coverage of journals, which favours those produced in the USA and north and western Europe, in the natural sciences, and in English. In other words, some journals receive a more positive assessment than others from WoS for particularistic reasons such as country or disciplinary subject, which raises questions about the adequacy of using this data source to assess the quality of journals.
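
As an illustration of the kind of analysis involved – a simplified sketch with simulated data, not the study’s actual model – one can generate journals whose quality and impact are distributed identically across two countries, bias their inclusion odds by country, and then check whether the country effect survives once quality and impact are controlled for:

```python
# Simplified, simulated illustration (not the study's actual model or data):
# does a journal's country predict database inclusion after controlling
# for editorial quality and citation impact?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Quality and impact are drawn from the same distributions for both
# countries, but the simulated inclusion odds carry a penalty for "CO".
country = rng.choice(["ES", "CO"], size=n)
quality = rng.normal(6.0, 1.5, size=n)   # editorial quality rating
impact = rng.normal(1.0, 0.4, size=n)    # citation impact

log_odds = -6 + 0.8 * quality + 1.0 * impact - 1.2 * (country == "CO")
in_db = rng.random(n) < 1 / (1 + np.exp(-log_odds))

df = pd.DataFrame({"in_db": in_db.astype(int), "country": country,
                   "quality": quality, "impact": impact})

# If the country term remains significant with quality and impact held
# constant, inclusion is not explained by quality alone.
model = smf.logit("in_db ~ C(country) + quality + impact", data=df).fit()
print(model.summary())
```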

The inclusion and exclusion of journals in databases affect the coverage of topics, as illustrated in another case study that explores the extent of coverage of rice research in different databases. This study compared the coverage of publications in WoS, Scopus, and CAB Abstracts (CABI). It showed that CABI has a much higher coverage of the topic “rice” (77%) than Scopus (60%) or WoS (43%). In terms of research subjects or topics, WoS and Scopus focus on molecular biology, traditional genetics, and industry-related consumption, whereas CABI focuses more on productivity, plant nutrition, plant characteristics, and plant protection. The foci of WoS and Scopus seem related to the research interests of seed companies and the food industry, while those of CABI are more related to the potential interests of local farmers and communities. In this case, research in journals indexed by WoS and Scopus seems to cover the interests of industrial stakeholders better than those of small, poorer farmers.
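
The coverage figures above come down to a simple calculation: the share of a reference set of publications on a topic that each database actually indexes. Below is a minimal sketch of that calculation; the identifiers and index contents are invented, sized only to reproduce the reported percentages.

```python
# Minimal sketch of topic coverage: the fraction of a reference set of
# publications that a database indexes. All identifiers here are invented.
from typing import Iterable, Set

def coverage(reference_dois: Iterable[str], indexed_dois: Set[str]) -> float:
    """Fraction of the reference set found in the database's index."""
    refs = list(reference_dois)
    return sum(1 for doi in refs if doi in indexed_dois) / len(refs)

# Toy reference set of 100 rice-research publications, and toy indexes
# sized to match the coverage reported in the study.
rice_refs = [f"10.1234/rice.{i}" for i in range(100)]
indexes = {
    "CABI": set(rice_refs[:77]),
    "Scopus": set(rice_refs[:60]),
    "WoS": set(rice_refs[:43]),
}

for name, index in indexes.items():
    print(f"{name}: {coverage(rice_refs, index):.0%} coverage of rice research")
```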

All these studies show the consequences of applying a narrow understanding of excellence to evaluate research regardless of the context, for example in terms of country, discipline, and language. In many countries, evaluation systems are based on one-size-fits-all policy instruments. As a result of this uniformity, these evaluations fail to properly assess research that is socially, environmentally, and economically relevant but not highly cited, or not published in English, or in the more prestigious journals. Evaluations of this type are blind to dimensions of research that are needed to face the challenges posed by sustainable development, such as eradicating poverty, mitigating the effects of climate change, and achieving peace and justice.

Although the evidence presented in this post is focused on journal-based evaluations, the insights also apply to other types of appraisal. For instance, there is increasing interest in indicators derived from social media – such as tweets, Mendeley readers, etc. – as alternatives to citation counts. Although these are well-intentioned attempts to offer alternative indicators, they are not derived from an alternative understanding of academic knowledge production and communication. In the end, these indicators are based on the same “counting” concept of excellence as reputation or popularity (“the more, the better”), which makes them prone to the same pitfalls as citation-based rankings and assessments (hierarchies, concentration of recognition, marginalisation).

A genuinely alternative understanding of academic knowledge production as an endeavour to benefit people and the environment should motivate different ways of conducting research assessment. Alternative research evaluations should be sensitive to context and account for diversity in scientific production (related to geography, language, discipline, and other factors), helping to overcome the deficiencies of conventional evaluation practice.

For the reasons above, we believe that more inclusive research assessments are needed to overcome the ongoing marginalisation of some peoples, languages, and disciplines in evaluation. Some efforts, such as the Dutch “new standard evaluation protocol” for the social sciences and the “Norwegian model” of research assessment, which is sensitive to regional journals, have moved in this direction, trying to account for the value of various disciplines and of local research. However, creative and radical reform of research evaluation by research councils and universities is still needed in the many countries that follow traditional quantitative methods.

Facing the challenges posed by sustainable development requires us to transform the concept of excellence from an elitist view, defined at a distance from society, to a more community-oriented, inclusive view, which encourages engagement. Such an understanding could produce research evaluations that support socially robust knowledge and are sensitive to the relevance of many diverse types of knowledge that are currently underestimated.

The views expressed in this blog post are not necessarily those of Colciencias. This post is based on the authors’ article, “Why researchers publish in non-mainstream journals: Training, knowledge bridging, and gap filling”, published in Research Policy (DOI: 10.1016/j.respol.2017.08.002), and on the preprints “To What Extent is Inclusion in the Web of Science an Indicator of Journal ‘Quality’?”, available on SSRN, and “Under-reporting research relevant to local needs in the global south. Database biases in the representation of knowledge on rice”, available on Semantic Scholar.

Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the authors

Diego Chavarro is an advisor to Colciencias, a Colombian governmental organisation in charge of national science and technology policy. His research focuses on research evaluation, knowledge production, and the contribution of science, technology, and innovation to sustainable development. His ORCID iD is 0000-0001-9116-0891.

Ismael Ràfols is a research fellow at Ingenio (CSIC-UPV), a science policy institute at the Universitat Politècnica de València. He is also Visiting Professor at the Centre for Science and Technology Studies (CWTS) at the University of Leiden. Ismael’s research focuses on plural and inclusive use of S&T indicators for informing evaluation, funding and research strategies. His ORCID iD is 0000-0002-6527-7778.

