Peer review and bibliometric indicators just don’t match up according to re-analysis of Italian research evaluation.

The Italian research evaluation agency undertook an extensive analysis to compare the results of peer review and bibliometric indicators for research evaluation. Its findings suggested the two approaches produced similar results. Researchers Alberto Baccini and Giuseppe De Nicolao re-examine these results and find notable disagreements between the two techniques of evaluation in the sample, and outline below the major shortcoming in the Italian Agency’s […]

Getting our hands dirty: why academics should design metrics and address the lack of transparency.

Metrics in academia are often an opaque mess, filled with biases and ill-judged assumptions that are used in overly deterministic ways. By getting involved with their design, academics can productively push metrics in a more transparent direction. Chris Elsden, Sebastian Mellor and Rob Comber introduce an example of designing metrics within their own institution. Using the metric of grant income, their tool ResViz […]

Evaluating research assessment: Metrics-based analysis exposes implicit bias in REF2014 results.

The recent UK research assessment exercise, REF2014, attempted to be as fair and transparent as possible. However, Alan Dix, a member of the computing sub-panel, reports how a post-hoc analysis of public domain REF data reveals substantial implicit and emergent bias in terms of discipline sub-areas (theoretical vs applied), institutions (Russell Group vs post-1992), and gender. While metrics are […]

A call for inclusive indicators that explore research activities in “peripheral” topics and developing countries.

Science and Technology (S&T) systems all over the world are routinely monitored and assessed with indicators that were created to measure the natural sciences in developed countries. Ismael Ràfols and Jordi Molas-Gallart argue these indicators are often inappropriate in other contexts. They urge S&T analysts to create data and indicators that better reflect research activities and contributions in these “peripheral” […]

Ancient Cultures of Conceit Reloaded? A comparative look at the rise of metrics in higher education.

When considering the power of metrics and audit culture in higher education, are we at risk of romanticising the past? Have academics ever really worked in an environment free from ‘measurement’? Roger Burrows draws on his own recollections of the 1986 Research Selectivity Exercise (RSE), scholarly work on academic labour, and fictional portrayals of academic life, all of which demonstrate the substantial expansion of the role […]

The ResearchGate Score: a good example of a bad metric

According to ResearchGate, the academic social networking site, their RG Score is “a new way to measure your scientific reputation”. With such high aims, Peter Kraker, Katy Jordan and Elisabeth Lex take a closer look at the opaque metric. By reverse engineering the score, they find that a significant weight is linked to ‘impact points’ – a similar metric to the […]

Bringing together bibliometrics research from different disciplines – what can we learn from each other?

Currently, there is little exchange between the different communities interested in the domain of bibliometrics. A recent conference aimed to bridge this gap. Peter Kraker, Katrin Weller, Isabella Peters and Elisabeth Lex report on the multitude of topics and viewpoints covered on the quantitative analysis of scientific research. A key theme was the strong need for more openness and transparency: transparency in research […]

When are journal metrics useful? A balanced call for the contextualized and transparent use of all publication metrics

The Declaration on Research Assessment (DORA) has yet to achieve widespread institutional support in the UK. Elizabeth Gadd digs further into the slow uptake. Although there is growing acceptance that the Journal Impact Factor is subject to significant limitations, DORA feels rather negative in tone: an anti-journal metric tirade. There may be times when a journal metric, sensibly used, is […]

We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better.

Rather than expecting people to stop utilizing metrics altogether, we would be better off focusing on making sure the metrics are effective and accurate, argues Brett Buttliere. By looking across a variety of indicators, supporting a centralised, interoperable metrics hub, and utilizing more theory in building metrics, scientists can better understand the diverse facets of research impact and research quality.

In […]

Pursuing a multidimensional path to research assessment – Elsevier’s approach to metrics

The Metric Tide report calls for the responsible use of metrics. As a supplier of data and metrics to the scholarly community, Elsevier supports this approach and agrees that metrics should support human judgment and not replace it, writes Peter Darroch. For metrics to be used effectively, there needs to be a broad range of metrics generated by academia and industry […]

Rather than narrow our definition of impact, we should use metrics to explore richness and diversity of outcomes.

Impact is multi-dimensional, the routes by which impact occurs differ across disciplines and sectors, and impact changes over time. Jane Tinkler argues that if institutions like HEFCE specify a narrow set of impact metrics, more harm than good would come to universities forced to limit their understanding of how research is making a difference. But qualitative and quantitative indicators continue […]

Can metrics be used responsibly? Structural conditions in Higher Ed push against expert-led, reflexive approach.

Do institutions and academics have a free choice in how they use metrics? Meera Sabaratnam argues that structural conditions in the present UK Higher Education system inhibit the responsible use of metrics. Funding volatility, rankings culture, and time constraints are just some of the issues making it highly improbable that the sector is capable of enacting the approach that the Metric […]

Using REF results to make simple comparisons is not necessarily responsible. Careful interpretation needed.

What are the implications of the HEFCE metrics review for the next REF? It is easy to forget that the REF is already all about metrics of research performance. Steven Hill, Head of Research Policy at HEFCE, reiterates that, as with any use of metrics, we need to take great care in how we use and interpret the results of the REF.

This is […]

The Management of Metrics: Globally agreed, unique identifiers for academic staff are a step in the right direction.

The Metric Tide report calls for research managers and administrators to champion the use of responsible metrics within their institutions. Simon Kerridge looks in greater detail at specific institutional actions. Signing up to initiatives such as the San Francisco Declaration on Research Assessment (DORA) is a good start. Furthermore, by mandating unique and disambiguated identifiers for academic staff, like ORCID iDs, […]

The metrics dilemma: University leadership needs to get smart about their strategic choices over what counts.

The review of metrics enjoins universities not to drift with the ‘metric tide’. To do this requires a united front of strategic leadership across the sector, argues HEFCE’s Steven Hill. Rather than the inevitable claims about league table positions on website front pages, universities could offer further explanation of how the rankings relate to the distinct mission of the institution.

This is part […]

The metric tide is rising: HEFCEmetrics report argues metrics should support, not supplant, expert judgement.

James Wilsdon introduces the Independent Review of the Role of Metrics in Research Assessment and Management. The review found that the production and consumption of metrics remains contested and open to misunderstanding. Wider use of quantitative indicators, and the emergence of alternative metrics for societal impact, could support the transition to a more open, accountable and outward-facing research system. But […]

This work by the LSE Impact of Social Sciences blog is licensed under a Creative Commons Attribution 3.0 Unported License.