Academic profiling services are a pervasive feature of scholarly life. Alberto Martín-Martín, Enrique Orduna-Malea and Emilio Delgado López-Cózar discuss the advantages and disadvantages of major profile platforms and look at the role of ego in how these services are built and used. Scholars validate these services by using them and should be aware that the portraits shown in these platforms depend to a great extent on the characteristics of the “mirrors” themselves.
The model of scientific communication has recently undergone a major transformation: the shift from the “Gutenberg galaxy” to the “Web galaxy”. In the wake of this shift, we are now also witnessing a turning point in the way science is evaluated. The “Gutenberg paradigm” limited research products to the printed world (books, journals, conference proceedings…) published by scholarly publishers. This model of scientific dissemination has been challenged since the end of the twentieth century by a plethora of new communication channels that allow scientific information to be (self-)published, indexed, searched, located, read, and discussed entirely on the public Web, one more example of the network society we live in.
In this new scenario, a set of new scientific tools is now providing a variety of metrics that measure all the actions and interactions in which scientists take part in the digital space, making some hitherto overlooked aspects of the scientific enterprise emerge as objects of study. In the words of Jason Priem, the First Revolution promoted the homogeneity of outputs (through academic journals, the main communication channel), while the Second Revolution promotes the diversity of outputs. We can draw a comparison between those revolutions and the changes taking place in the field of scientific evaluation: the First Revolution promoted the homogeneity of performance metrics (through the Impact Factor, the “gold standard” of scientific evaluation), whereas the Second Revolution promotes a diversity of metrics (h-index, altmetrics, usage metrics). The emergence of academic profiling services (most of them created in 2008) was a collateral consequence.
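To make one of these metrics concrete: the h-index is the largest number h such that a researcher has h publications with at least h citations each. The sketch below is a minimal, purely illustrative Python implementation; none of the platforms discussed here publish their actual code, so this is not any platform’s implementation.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # this paper still meets the threshold
        else:
            break  # counts are sorted descending, so no later paper can qualify
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times yield an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note that the result depends entirely on which citations the platform has indexed, which is precisely why the same researcher can show different h-indices in different “mirrors”.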
Because each of these tools focuses on fulfilling a different set of needs, caters to a specific audience (diverse communities), and provides a different variety of metrics, it stands to reason that they should reflect different sides of academic impact. Each platform then becomes a mirror reflecting the likeness of the communities that use it.
Recently, we set out to radiograph the discipline of Bibliometrics, not only to identify the core authors, documents, journals, and most influential publishers in the field, but also to compare the diverse portraits shown by each platform, paying special attention to the one offered by Google Scholar Citations. We collected data for a sample of 814 researchers who work mainly or incidentally in the field of Bibliometrics. These data can be browsed on the Scholar Mirrors website, and an analysis of the results can be found in this working paper.
During this exercise we isolated some of the main features of these academic profiling services (Google Scholar Citations, ResearchGate, Mendeley, and ResearcherID) in terms of their general advantages and disadvantages, which are summarized below in Table 1.
TABLE 1: COMPARISON OF DIFFERENT ACADEMIC PROFILING SERVICES
Each academic profile platform offered distinct and complementary data on the impact of scientific and academic activities as a consequence of their different user bases, document coverage, specific policies, and technical features. Not all platforms have homogeneous coverage of all scientific disciplines, and their user bases aren’t uniform either. Researchers should be aware that the bibliometric portraits shown in these platforms depend to a great extent on the individual characteristics of the “mirrors” themselves.
Google Scholar Citations profiles draw upon the vast coverage of Google Scholar (giving voice to all disciplines, languages, and countries, and to academics and professionals alike) at some cost to accuracy (errors in parsing citations or authorship) and with an austere approach (few indicators and little user interaction). Nonetheless, it offers the most advanced system for managing versions and duplicates.
Regarding ResearchGate, the large number of documents already uploaded by a growing user base (especially from the biomedical community) supports the usefulness of some of its indicators (especially Views and Downloads, now combined into Reads). However, its lack of transparency compromises its reliability, and unannounced changes to some of its key features make the platform unpredictable at the moment.
Mendeley, despite being an excellent social reference manager, offers the most basic author profiling capabilities of all the platforms we analysed, although we should acknowledge the usefulness of its Readers metric. The name of this metric is, however, misleading: a “reader” in Mendeley is any user who has saved the document to their library, not necessarily someone who has read it. Lastly, the fact that profiles aren’t updated automatically makes the system completely dependent on user activity, which strongly limits the use of Mendeley’s profile pages for evaluation purposes.
ResearcherID does not offer automatic profile updates either. As a result, a large percentage of profiles list no public contributions at all (34.4% in our sample of bibliometricians, who of all researchers should be the most aware of these tools). Moreover, we found errors in citation counts inherited from the Web of Science. A word of warning: the Web of Science makes mistakes too.
At any rate, the growth of academic profiling services seems unstoppable, and practices like ResearchGate’s aggressive marketing, a systematic e-mail bombardment aimed at the Narcissus inside every researcher, feed the ego that dwells in all of us. The potential positive effects are clear: new channels of information and new collaboration tools. However, this road might also lead us to look at ourselves in the scholarly mirror every day, and there lies the path to the Dark Side.
Paraphrasing the famous slogan from Bill Clinton’s 1992 US presidential campaign (“it’s the economy, stupid!”), we could now state the following: it’s not the collaboration; it’s the ego, stupid! Ego moves Academia. These new platforms, whether or not they are integrated into other products, will be used massively by universities, research institutions, and national funding agencies to evaluate scholars, because scholars are validating them by using them massively. Hence, we should not conclude without warning of the dangers of blindly using any of these platforms to assess individuals without verifying the accuracy and completeness of the data.
This blog post is based on a working paper which can be found here. All the data obtained for each author profile and the results of the analysis can be found at Scholar Mirrors. Featured image credit: William Murphy, the sculpture “Eco” outside the New Library at Queen’s University Belfast (Flickr, CC BY-SA).
Note: This article gives the views of the authors, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
About the Authors
Alberto Martín-Martín is a PhD Candidate in the field of bibliometrics and scientific communication at the Universidad de Granada (UGR).
Enrique Orduna-Malea works as a postdoctoral researcher at the Polytechnic University of Valencia (UPV).
Emilio Delgado López-Cózar is a Professor of Research Methods at the Universidad de Granada (UGR).
What about academia.edu?
Hello César, thank you for your interest in our work.
Indeed, academia.edu could also have been included. We know academia.edu is used especially by researchers in the Humanities. However, since we wanted to study only researchers in Bibliometrics, we chose not to.
It’s almost dead already 😉
As I mentioned when I saw this article elsewhere, there are a few corrections that should be made to the comparison of Mendeley. The main thing to understand about the Mendeley catalog is that it doesn’t rely on user input at all, but rather on extraction of information from PDFs and databases when documents are added to a researcher’s Mendeley library.
I would also ask people to think about the claim that the main usage statistic reported by Mendeley can easily be influenced by people seeking to inflate their score. To do this, one would have to create a new Mendeley account and add the document to it, increasing the reader count by one. Such automated account creation is difficult for the average researcher to pull off, yet easily detected and blocked by Mendeley.
Hello Mr. Gunn,
Thank you for taking an interest in our work. I think our phrasing may have caused a misunderstanding. By “user input” we didn’t mean users manually entering article metadata. We know Mendeley automatically extracts metadata from PDFs (even if the process isn’t always perfect, as some examples in our working paper show). We meant that Mendeley’s document base depends on users adding documents to their libraries: if no one adds a given document to their Mendeley library, there will be no information about that article on Mendeley.
We think Mendeley is a great reference manager, but its public profile features are not as well developed as those offered by the other platforms… or rather, that was the case when we did this study. We have just noticed that three new metrics have been added: h-index and citations (from Scopus data), and Views (from ScienceDirect).
ResearchGate, by the way, now offers the h-index as well (with and without self-citations).
We haven’t tested the potential exploit you mention (inflating the reader count by creating dummy accounts), but it’s definitely possible some people might try that.
ORCID is missing from this study, as is ScopusID… is it possible to include them?
I think that in the current system, where academic researchers earn arguably little money considering their 4+ years of higher education (in Germany, around 8 years for a PhD), prestige is their main form of “income”.
Prestige, in the form of high impact, awards, and “reach” in mainstream media, potentially enables researchers to obtain more and better-suited research grants and gives them a better chance of becoming a principal investigator / professor.
Thus they might have better chances to pursue their “academic dreams”, meaning that they can research exactly what they want. From my point of view, based on anecdotal evidence rather than hard facts, this is the real form of income for researchers, one which, I would say, is aimed directly at researchers’ egos and thus potentially bloats them to “biasing” / pathological proportions.
Thus I think the online academic sharing and social platforms are just a consequence of current trends and technology, and do not change the nature of ego in academia, which, in my opinion, was prevalent in the age of print media as well.
But of course the speed of sharing, the visibility, and the claimed accurate measurability of personal impact might aggravate that potential threat.
But on a broader scale, what does not impact the ego in our current society?
‘Ego’ sounds pejorative. Reputation is and always has been the driving dynamic of scientific publishing. The ‘new’ factor is the environment of abundance, as opposed to scarcity. My new book, The Collaborative Era in Science, will detail this factor.