April 5th, 2019

Are altmetrics able to measure societal impact in a similar way to peer review?


Altmetrics have become an increasingly ubiquitous part of scholarly communication, although the value they indicate is contested. In this post, Lutz Bornmann and Robin Haunschild present evidence from their recent study examining the relationship of peer review, altmetrics, and bibliometric analyses with societal and academic impact. Drawing on evidence from REF2014 submissions, they argue that altmetrics may provide evidence of wider non-academic debates, but correlate poorly with peer review assessments of societal impact.

Many benefits emerge from academic research, and they affect stakeholders in diverse ways. Impact is therefore a multi-faceted phenomenon, which raises the question: what are the most informative tools for tracking these different outcomes?

When quantitative approaches to research evaluation were first trialed at the end of the 1980s, they mostly drew on publication and citation data (bibliometrics) and they mostly targeted academic impact – the impact of research on other academics. More highly-cited work was taken as an indicator of research ‘excellence’, which was widely pursued as a public policy goal. Academic research excellence remains important, but the policy agenda has shifted, notably since the introduction of societal impact considerations into the UK’s Research Excellence Framework (REF). However, assessing the nature, scale, and beneficiaries of research impact, especially quantitatively, remains a complex undertaking.

One potential way of quantitatively assessing societal impact has been through altmetrics – online indicators of research use. In a recent study, based on data from the UK REF, we therefore decided to examine the extent to which altmetrics are able to measure societal impact in a way similar to the peer review of case studies. We found a relationship, but not one that provides a firm indicator.

Fortunately, data for REF2014 are available for comprehensive studies. One key feature of the review process is that we have two distinct and therefore comparable types of publications being submitted: (i) evidence of academic achievement based on four selected outputs per researcher and (ii) evidence of socio-economic impact based on case studies with six underpinning references.

Our study focused on those items submitted to REF2014 that can be uniquely identified via DOIs (Digital Object Identifiers): essentially, journal papers rather than other output types. For journal papers, we can also acquire impact and attention data: citation counts and media mentions of various kinds. We anticipated that the impact of papers submitted for academic outputs would be significantly different from the impact of references cited in case studies: the former should be strong in academia, but weak in other sectors of society, whereas, the opposite should be true for the latter.
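
As a rough illustration of what this DOI-based linkage involves (a sketch, not the pipeline used in the study), matching submitted items to impact and attention data amounts to a table join on DOI. The file and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical inputs (names are illustrative): REF-submitted items plus
# citation and altmetric counts keyed by DOI.
outputs = pd.read_csv("ref_outputs.csv")          # columns: doi, institution, ...
citations = pd.read_csv("citation_counts.csv")    # columns: doi, citations
altmetrics = pd.read_csv("altmetric_counts.csv")  # columns: doi, tweets, news, policy, wikipedia

# Normalise DOIs before joining, since case and whitespace vary across sources.
for df in (outputs, citations, altmetrics):
    df["doi"] = df["doi"].str.strip().str.lower()

# Keep only those items that can be matched to both citation and altmetric data.
linked = (outputs
          .merge(citations, on="doi", how="inner")
          .merge(altmetrics, on="doi", how="inner"))

print(f"{len(linked)} of {len(outputs)} submitted items matched via DOI")
```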

For our analysis, the test prediction was that papers that were submitted as evidence of academic achievement would be relatively well-cited compared to papers that supported case studies. By contrast, the papers supporting case studies might be relatively less well-cited but would have wider societal recognition, trackable through Altmetric.com data (sourced from Twitter, Wikipedia, Facebook, policy-related documents, news items, and blogs). If we discovered that there was no difference between these publication sets in their bibliometric citations and altmetric mentions, then our ability to quantitatively distinguish between different kinds of impact would be brought into doubt.

In practice, we compared three publication categories, not two, because the pool of submitted outputs and case study references overlap to a substantial degree. There were 120,784 journal papers in the REF2014 database of submitted research outputs (PRO) and 11,822 journal papers among the case study references (PCS) which we were able to match with citation data via their DOI. 5,703 papers were submitted in 2014 as both PROs and PCSs (PRO/PCS). Intriguingly, the overlap was lower in basic research areas than in applied research areas.
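
A minimal sketch of how the three categories can be derived, assuming two hypothetical sets of DOIs (one for submitted outputs, one for case study references), is the simple set partition below.

```python
# Toy DOI sets standing in for the REF2014 data (purely illustrative).
pro_dois = {"10.1000/a1", "10.1000/a2", "10.1000/a3"}   # submitted research outputs
pcs_dois = {"10.1000/a2", "10.1000/b1"}                 # case study references

pro_only = pro_dois - pcs_dois      # papers submitted as outputs only (pure PRO)
pcs_only = pcs_dois - pro_dois      # papers cited in case studies only (pure PCS)
pro_and_pcs = pro_dois & pcs_dois   # papers playing both roles (PRO/PCS overlap)

print(len(pro_only), len(pcs_only), len(pro_and_pcs))
```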

Our study examined convergent (and discriminant) validity: do indicators of societal and academic impact vary in distinct ways between PCS and PRO articles? And do different approaches to societal impact (REF scores and altmetrics) produce comparable measures of a common construct in the case study data? If they measure the same construct and are convergently valid, then REF impact scores should correlate with altmetrics data.

We expected higher correlations (i) between REF output scores and citation impact for PRO and (ii) between REF impact scores (for case studies) and altmetrics for PCS. We expected lower correlations (i) between REF output scores and altmetrics for PRO and (ii) between REF impact scores and citation impact for PCS.

We found:

  • Average bibliometric citation impact is higher for PRO than for PCS.
  • Mentions of papers in policy-related documents (especially) and Wikipedia are significantly higher for PCS than for PRO; the result for news items is similar, though slightly smaller.
  • For Twitter counts, the PCS-PRO difference is close to zero, and the counts do not correlate with citations: tweets do not appear to reflect any serious form of impact.
  • The highest scores, across all indicators, were associated with the PRO/PCS overlap. These publications were cited as frequently as the pure PRO set and scored higher than the pure PCS set on altmetrics from every source.

We then correlated REF scores and metrics at the level of UK research institutions, following the approach of comparing peer review decisions with later citation impact (Bornmann, 2011). We found that REF scores on impact case studies correlated only weakly with altmetrics, which undermines arguments in favor of using altmetrics to measure societal or broader impact. The weak relationship between peer assessments of societal impact and altmetrics data mirrors other studies (Thelwall & Kousha, 2015) and calls into question any application of altmetrics to measuring societal impact in research evaluation. Whereas peers can acknowledge a successful link between research and societal impacts (based on the descriptions in case studies), altmetrics do not seem able to reflect this. Altmetrics may instead reflect distinct public discussions around certain research topics (which can be visualized; see Haunschild, Leydesdorff, Bornmann, Hellsten, and Marx, 2019).
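
For readers curious about the mechanics, an institution-level comparison of this kind can be sketched roughly as follows; this is an illustrative example rather than the authors' code, and the input table and column names are assumptions.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-paper table: the submitting institution, the REF impact
# score of the associated case study, and one altmetric count (policy mentions).
papers = pd.read_csv("pcs_papers.csv")  # columns: institution, ref_impact_score, policy_mentions

# Aggregate to the institutional level, the unit at which peer review
# outcomes are typically compared with metrics.
by_inst = papers.groupby("institution").agg(
    mean_impact_score=("ref_impact_score", "mean"),
    mean_policy_mentions=("policy_mentions", "mean"),
)

# A rank correlation indicates how far institutions that score well on
# peer-reviewed case studies also attract more altmetric attention.
rho, p = spearmanr(by_inst["mean_impact_score"], by_inst["mean_policy_mentions"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```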

Perhaps the most interesting results here are the relatively high scores – across the board – for publications that were submitted by individual researchers and then also used to support case studies. Some outputs have an evident capacity for impact, whether that is amongst other researchers or in their potential for wider application. There is therefore no necessary gap between academic and societal value, a conclusion that has been known at least since Vannevar Bush’s seminal “Science, the Endless Frontier”. Societal value can be expected from research that follows high academic standards.

 

This blog post is based on the authors’ co-written article, “Do altmetrics assess societal impact in a comparable way to case studies? An empirical test of the convergent validity of altmetrics based on data from the UK research excellence framework (REF)”.

Image credit: Prior Health Sciences Library Mural via Wikimedia Commons (CC BY-SA 3.0)

Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the authors

Lutz Bornmann – Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, 80539 Munich, Germany. Email: bornmann@gv.mpg.de

Robin Haunschild – Max Planck Institute for Solid State Research, Heisenbergstr. 1, 70569 Stuttgart, Germany. Email: R.Haunschild@fkf.mpg.de

 

