There are now more than 13 million users registered on ResearchGate, a platform that doubles as a venue for displaying one’s academic achievements and a social networking site where scientists can interact with one another. Enrique Orduna-Malea, Alberto Martín-Martín, Mike Thelwall, and Emilio Delgado López-Cózar scrutinise one of its key features, the much-maligned RG Score. While the computation of this metric is not transparent, closer analysis suggests it rewards participation in the platform, especially in its Q&A section, above all else. Is the goal of ResearchGate to reward altruism, the selfless cooperation among scientists? Or is it merely to promote interaction on its platform? Whatever the reason, it is important to remember that bibliometric indicators are not neutral and can have real consequences for the credibility of science.
Academic profile platforms like ResearchGate, Academia.edu, Microsoft Academic, and Google Scholar Citations serve as a kind of window display for researchers, used to showcase scientific and academic achievements. These platforms not only highlight researchers’ outputs (journal articles, book chapters, monographs, working papers, conference papers, posters, teaching materials, datasets, etc.) but also their curriculum vitae, skills, and contacts. In some cases, they are also spaces in which scientists can interact with one another. Both their form and content help to attract information-seekers to the shop window. These platforms also compute simple metrics that attempt to quantify the impact of researchers’ outputs. These indicators (h-index, RG Score, etc.) are usually displayed prominently and therefore influence how researchers are perceived by those who don’t already know them. If researchers consider these sites to be projecting their reputation, then they may be tempted to game the metrics to raise their profile.
ResearchGate stands out among its competitors. Its growth seems unstoppable: by July 2017 it had over 13 million registered users and covered over 100 million documents. Additionally, several studies show its growing use by researchers, academics, and practitioners, due to its usefulness as a tool to interact with other users and monitor the impact of research outputs.
Image credit: HIGH SCORE by Kevin Simpson. This work is licensed under a CC BY-SA 2.0 license.
Although ResearchGate started in 2008, the RG Score indicator was only introduced in August 2012. This indicator considers three basic aspects of research activity: productivity (number of studies published in certain venues), interaction with other members of the community (questions, answers, followers, etc.), and evidence of impact of outputs in the community (citations, reads, etc.).
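Because ResearchGate has never disclosed the formula, any reconstruction is necessarily speculative. Purely to illustrate the general shape of a composite indicator built from these three ingredients, here is a minimal Python sketch; the weights, the logarithmic scaling, and the input variables are assumptions made for illustration, not anything published by ResearchGate.

```python
# Hypothetical sketch of a composite indicator with the RG Score's general
# shape: a weighted sum of productivity, interaction, and impact signals.
# The weights and log-scaling are invented for illustration only;
# ResearchGate has never disclosed its actual formula.
import math

def composite_score(publications: int, questions: int, answers: int,
                    followers: int, citations: int, reads: int,
                    w_prod: float = 0.4, w_inter: float = 0.4,
                    w_impact: float = 0.2) -> float:
    productivity = math.log1p(publications)
    interaction = math.log1p(questions + answers + followers)
    impact = math.log1p(citations + reads)
    return w_prod * productivity + w_inter * interaction + w_impact * impact
```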
The RG Score indicator has been extensively criticised since its launch because of its absolute lack of transparency. To this day, the company has not disclosed the specific factors used for its computation, nor the weight each factor has on the final composite indicator. Among the studies to have addressed this issue, one published by Peter Kraker, Katy Jordan, and Elisabeth Lex, and previously discussed in this very forum, described the RG Score as a bad metric. The authors demonstrated the difficulty of reproducing the RG Score, as well as its strong dependence on the Impact Points indicator (which is no longer public on the platform). For this purpose, the authors studied “a small sample of academics (30), who have a RG Score and only a single publication on their profile”. They then expanded the sample “to include examples from two further groups of academics (30 academics who have a RG Score and multiple publications; and a further 30 who have a RG Score, multiple publications, and have posted at least one question and answer)”. The authors concluded that the number of Impact Points (the sum of the impact factors of the journals in which the authors have published) largely determined the RG Scores of the researchers studied.
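To give a sense of how such a dependence can be probed (this is a schematic sketch, not the actual analysis by Kraker, Jordan, and Lex), one could fit a simple linear model of RG Score against Impact Points across single-publication profiles and inspect the correlation. The data points below are invented placeholders.

```python
# Sketch of how the dependence of RG Scores on Impact Points can be tested:
# fit a linear model and inspect the correlation. The data below are invented
# placeholders; the published study used real scraped profiles.
import numpy as np

impact_points = np.array([1.2, 3.5, 7.8, 12.4, 20.1])  # hypothetical values
rg_scores = np.array([0.9, 2.1, 4.4, 6.8, 10.5])       # hypothetical values

slope, intercept = np.polyfit(impact_points, rg_scores, 1)
r = np.corrcoef(impact_points, rg_scores)[0, 1]
print(f"RG Score ~ {slope:.2f} * Impact Points + {intercept:.2f} (r = {r:.2f})")
```

A correlation close to 1 under such a fit would indicate that a profile's RG Score is largely a repackaging of journal impact factors rather than a measure of the individual's own work.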
Aiming to better understand the RG Score and assess whether it is reasonable to use it as an indicator of academic reputation, we set out to analyse ResearchGate users with the highest RG Scores. We wanted to find out whether this score is based on classic academic activities (number of publications, citations received, reads, and downloads).
To this end, we collected two samples of users. First, we gathered a sample of 104 users with either extremely high RG Scores (50+) or high values in other indicators (number of followers, number of questions asked and answered, number of publications and citations, etc.). Second, we collected a sample of 73 Nobel Prize winners from the fields of physiology or medicine, chemistry, physics, and economics (awarded between 1975 and 2015).
The results were illuminating. The indicators displayed on the profiles of Nobel Prize winners were based exclusively on publications and citations. They rarely followed or were followed by other users, nor did they engage in asking or answering questions. The highest RG Score in this sample was that of Paul Greengard (awarded the Nobel Prize in Physiology or Medicine in 2000, with Arvid Carlsson and Eric Kandel), whose RG Score was 54.18 at the time of the study. In the sample of users with the highest RG Scores, the top score was that of Adam B. Shapiro (439.82). In this sample, there were 26 users whose RG Score was over 100, primarily due to asking and answering questions on the platform. Tellingly, in the year since we carried out our study, Greengard’s RG Score has increased by only five hundredths of a point (to 54.23), while Shapiro’s has increased by some 235 points (to 674.94).
Based on these data, it seems that ResearchGate rewards participation in the platform, especially in the questions and answers section, above all else. Apparently, only users who are active on the platform can achieve a three-figure RG Score.
What happens to users who only use the platform to list their research outputs, as most Nobel Prize winners in our sample do? In this case, the RG Score has a hidden ceiling: it is extremely difficult, if not impossible, to reach an RG Score of 60. No matter how many citations, reads, or downloads a user’s documents receive, the RG Score barely changes once the user has reached this level. However, there does not seem to be a limit to the rewards for participating in the platform.
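One way to picture this behaviour is a toy model in which a publication-derived component saturates at a ceiling while a participation component grows without bound. To be clear, this is our own illustrative sketch, consistent with what we observed but in no way ResearchGate’s actual formula; the ceiling value and growth rates are assumed.

```python
# Toy model of the behaviour we observed, not ResearchGate's formula:
# a publication-derived component that saturates near a ceiling of ~60,
# plus a participation component that grows without bound. The ceiling
# and rates are assumptions chosen to mimic our observations.
import math

CEILING = 60.0  # hypothetical ceiling suggested by the passive-user profiles

def toy_rg_score(citation_signal: float, participation_signal: float) -> float:
    # Saturating term: approaches CEILING no matter how large the signal gets.
    publication_part = CEILING * (1 - math.exp(-citation_signal / 100))
    # Unbounded term: keeps growing with platform participation.
    participation_part = 0.5 * participation_signal
    return publication_part + participation_part

# A passive, heavily cited profile vs. a highly active Q&A user:
print(toy_rg_score(citation_signal=10_000, participation_signal=0))   # ~60.0
print(toy_rg_score(citation_signal=500, participation_signal=1_200))  # ~659.6
```

Under a model of this shape, no volume of additional citations lifts a passive profile much past the ceiling, while sustained Q&A activity raises the score indefinitely.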
We can therefore classify users by their level of participation in the platform. Passive users, like our Nobel Prize winners, only add their publications to the platform, without interacting with it further. Active users interact extensively with others through the platform and can therefore achieve the highest RG Scores. In the middle are occasional users.
We do not question the value of activities like generating high-quality discussion through questions and answers, being followed by prestigious authors, commenting on publications, or recommending studies to colleagues. However, affording too much weight to these activities in the indicator seems to have led to scores that are misleading from a scientific point of view, especially as they are not transparent.
Is the goal of ResearchGate to reward altruism, the selfless cooperation among scientists? Or is it merely to promote interaction on its platform, encouraging it with these new metrics?
Whether it is for philanthropic and/or commercial reasons, in this era of postmodern science, bibliometric indicators are not neutral. They have consequences because they help build identities and reputations that bring benefits to their holders. A high RG Score can make a good impression on someone who happens to walk past the shop window and knows only what is shown on an author’s “business card”. Scholars, funders, journalists, and other research users might be impressed by the RG Score but might not dig deeper to find out what is behind it, how it has been built, and what it means.
What deeply concerns us is that this kind of indicator might encourage deviant behaviours among scientists. Although the traditional mechanisms for academic recognition all have limitations (from phantom authors to citation cartels), artificially inflating academic profiles might become one more way in which authors can spend/waste time on activities designed only to achieve recognition in their community.
Have those strategies traditionally used in marketing and advertising to build an apparently impressive – but on closer look, hollow – corporate and personal identity now arrived in academia? We want to warn against these mirages, and suggest we apply healthy scepticism and Cartesian doubt as mechanisms of intellectual defence.
This blog post is based on the authors’ article, “Do ResearchGate Scores create ghost academic reputations?”, published in Scientometrics (DOI: 10.1007/s11192-017-2396-9).
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
About the authors
Enrique Orduna-Malea works as a Juan de la Cierva (JdC) postdoctoral researcher at the Universidad de Granada, Spain.
Alberto Martín-Martín is a PhD Candidate in the field of bibliometrics and scientific communication at the Universidad de Granada, Spain.
Mike Thelwall is a professor of information science at the University of Wolverhampton, UK.
Emilio Delgado López-Cózar is Professor of Research Methods at the Universidad de Granada, Spain.