
Blog Admin

December 24th, 2017

2017 in review: round-up of our top posts on metrics


Estimated reading time: 5 minutes

Mendeley reader counts offer early evidence of the scholarly impact of academic articles

Although the use of citation counts as indicators of scholarly impact has well-documented limitations, it does offer insight into which articles are read and valued. However, one major disadvantage of citation counts is that they are slow to accumulate. Mike Thelwall has examined reader counts from Mendeley and found them to be a useful source of early impact information. Mendeley reader data can be recorded from the moment an article appears online, thereby avoiding the publication cycle delays that slow the visibility of citations.
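As a rough illustration of how such early evidence is typically tested (the figures below are invented for demonstration and are not drawn from Thelwall's study), one can compute a rank correlation between reader counts recorded soon after publication and the citations those articles accumulate later:

```python
# A minimal sketch: do early Mendeley reader counts anticipate later
# citations? Spearman rank correlation is the usual check because both
# distributions are heavily skewed. All data below is hypothetical.

from scipy.stats import spearmanr

# Hypothetical per-article figures for a small sample.
early_mendeley_readers = [12, 45, 3, 78, 20, 5, 60, 33]
later_citation_counts  = [4, 30, 1, 52, 15, 2, 41, 18]

rho, p_value = spearmanr(early_mendeley_readers, later_citation_counts)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A strong positive rho on real data would support treating reader counts as a leading indicator of eventual citation impact.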

Google Scholar is a serious alternative to Web of Science

Many bibliometricians and university administrators remain wary of Google Scholar citation data, preferring “the gold standard” of Web of Science instead. Anne-Wil Harzing, who developed the Publish or Perish software that uses Google Scholar data, here sets out to challenge some of the misconceptions about this data source and explain why it offers a serious alternative to Web of Science. Not only have its flaws been overstated; Google Scholar’s coverage of high-quality publications is also more comprehensive in many areas, including the social sciences and humanities, books and book chapters, conference proceedings, and non-English language publications.

How to make altmetrics useful in societal impact assessments: shifting from citation to interaction approaches

The suitability of altmetrics for use in assessments of societal impact has been questioned by certain recent studies. Ismael Ràfols, Nicolas Robinson-García, and Thed N. van Leeuwen propose that, rather than mimicking citation-based approaches to scientific impact evaluation, assessments of societal impact should be aimed at learning rather than auditing, and focused on understanding the engagement approaches that lead to impact. When using altmetric data for societal impact assessment, greater value might be derived from adopting “interaction approaches” to analyse engagement networks among researchers and stakeholders. Experimental analyses using data from Twitter are presented here to illustrate such an approach.
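As a hedged sketch of what an interaction approach could look like in practice (the accounts and ties below are hypothetical, not the authors' Twitter data), one might build a directed network of engagements and ask structural questions of it, rather than tallying mentions as citation-like scores:

```python
# A minimal sketch of an "interaction approach": instead of counting
# tweets, build a network of who engages with whom and inspect its
# structure. All accounts and edges below are hypothetical.

import networkx as nx

# (source account, target account) pairs, e.g. retweets or mentions
# linking researchers and non-academic stakeholders.
interactions = [
    ("researcher_1", "ngo_account"),
    ("researcher_1", "journalist_a"),
    ("researcher_2", "ngo_account"),
    ("policy_maker", "researcher_1"),
    ("journalist_a", "policy_maker"),
]

G = nx.DiGraph()
G.add_edges_from(interactions)

# Structural questions replace simple counting: who bridges the
# academic and stakeholder communities, and how connected are they?
print(nx.betweenness_centrality(G))
print(nx.density(G))
```

The point of such an analysis is diagnostic: it reveals which actors broker engagement between communities, which is more informative for learning about impact pathways than a raw mention count.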

Microsoft Academic is on the verge of becoming a bibliometric superpower

Last year, the new Microsoft Academic service was launched. Sven E. Hug and Martin P. Brändle look at how it compares with more established competitors such as Google Scholar, Scopus, and Web of Science. While the limited guidance available to novice users gives some cause for reservation, Microsoft Academic boasts impressive semantic search functionality, broad coverage, structured and rich metadata, and solid citation analysis features. Moreover, accessing raw data is relatively cheap. Given these benefits and its fast pace of development, Microsoft Academic is on the verge of becoming a bibliometric superpower.

The new configuration of metrics, rules, and guidelines creates a disturbing ambiguity in academia

Much of academia has become increasingly influenced by metrics and the practices built around them. However, the massive wave of conflicting rules and guidelines needed to stabilise these metrical practices remains poorly understood. Using examples from his own experiences in Denmark, Peter Dahler-Larsen explains how these multiple, cross-cutting rules have created a disturbing ambiguity in academia. The bibliometric system and the rules which accompany it have created an environment in which many, if not most, researchers can be identified as transgressors.

The methodology used for the Times Higher Education World University Rankings’ citations metric can distort benchmarking

The Times Higher Education World University Rankings can influence an institution’s reputation and even its future revenues. However, Avtar Natt argues that the methodology used to calculate its citations metric can distort benchmarking exercises. The fractional counting approach, applied only to a select number of papers with high author numbers, has led to a situation whereby the methodologists have unintentionally discriminated against certain types of big science paper. This raises questions about the resulting benchmarks and reiterates the importance of such rankings maintaining transparency in their data and methods.
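To make the distinction concrete, here is a minimal sketch contrasting full counting with fractional counting applied selectively to large-collaboration papers. The threshold and figures are invented for illustration and are not THE's actual parameters:

```python
# Sketch contrasting full and selective fractional counting of citations.
# Hypothetical data: each paper lists its contributing institutions
# and the number of citations it has received.

from collections import defaultdict

papers = [
    {"institutions": ["A", "B"], "citations": 100},          # small collaboration
    {"institutions": ["A"] + [f"X{i}" for i in range(999)],  # "big science" paper
     "citations": 1000},
]

def credit(papers, fractional_threshold=None):
    """Credit citations to institutions.

    Full counting gives every institution on a paper all of its
    citations; if fractional_threshold is set, papers with at least
    that many institutions share citations equally instead.
    """
    totals = defaultdict(float)
    for p in papers:
        insts = p["institutions"]
        share = p["citations"]
        if fractional_threshold and len(insts) >= fractional_threshold:
            share = p["citations"] / len(insts)
        for inst in insts:
            totals[inst] += share
    return totals

print(credit(papers)["A"])                            # full counting: 1100.0
print(credit(papers, fractional_threshold=100)["A"])  # selective fractional: 101.0
```

Because the fractional rule applies only above the threshold, an institution's credit for big science papers collapses while its credit for small collaborations is untouched, which is the kind of selective effect the post argues can distort benchmarking.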

Does research evaluation in the sciences have a gender problem? What do altmetrics tell us?

Many measures used for research evaluation, such as citations or research output, are hindered by an implicit gender bias. Stacy Konkiel examines whether altmetrics, which track how research is discussed, shared, reviewed, and reused by other researchers and the public, might be better suited to help understand the influence of research in a more gender-balanced way. Findings suggest that while men’s work may be talked about more in the first year after publication, over time female lead-authored papers eventually get more than their due.

The ResearchGate Score rewards academics’ active participation on the platform above their publications and citations

There are now more than 13 million users registered to the ResearchGate platform, which doubles as a venue to display one’s academic achievements and a social networking site where scientists can interact with one another. Enrique Orduna-Malea, Alberto Martín-Martín, Mike Thelwall, and Emilio Delgado López-Cózar scrutinise one of its key features, the much-maligned RG Score. While the computation of this metric is not transparent, closer analysis suggests it rewards participation in the platform, especially in its Q&A section, above all else. Is the goal of ResearchGate to reward altruism, the selfless cooperation among scientists? Or is it merely to promote interaction on its platform? Whatever the reason, it is important to remember bibliometric indicators are not neutral and potentially have consequences for the credibility of science.

Metrics, recognition, and rewards: it’s time to incentivise the behaviours that are good for research and researchers

Researchers have repeatedly voiced their dissatisfaction with how the journals they publish in are used as a proxy for the evaluation of their work. However, those who wish to break free of this model fear negative consequences for their future funding and careers. Rebecca Lawrence emphasises the importance of addressing researchers’ recognition and reward structures, arguing it is time to move to a system that uses metrics and indicators that incentivise the types of behaviours that are good for research and researchers. The European Commission’s Open Science Policy Platform has published a series of recommendations on how this might be done, and encourages their adoption by all stakeholder communities across the research process.


Posted In: Annual review | Citations | Measuring Research
