
February 23rd, 2018

Where are we with responsible metrics? And where might we go next? Reflections from two recent events

Widespread scepticism and concern among researchers, universities, representative bodies and learned societies about the broader use of metrics in research assessment and management has led to concerted efforts to promote the “responsible use” of such metrics. But how effectively are UK higher education institutions engaging with this agenda? Lizzie Gadd reflects on two recent responsible metrics-themed events. While it is clear the sector remains at an early stage, certain progress has been made, with the surge of signatories to DORA being particularly notable. However, there is still some way to go and much to be scrutinised, with people ultimately being key to getting this right.

Responsible metrics events are like buses, it would seem. Nothing for nine months, and then three within the space of ten days. I managed to get to two of them (alas, the Higher Education Strategic Planners Association responsible metrics workshop clashed – fancy, two buses on the same day). The first was on 30 January, when Altmetric.com played host to the Lis-Bibliometrics Responsible Bibliometrics in Practice event. The second, on 8 February, was Universities UK and HEFCE’s Turning the Tide: Changing the Culture of Responsible Metrics. David Price, outgoing Chair of the Forum for Responsible Metrics, provided the keynote at the former, and David Sweeney opened the latter, with the highlight being Paul Ayris delivering the findings of the Forum for Responsible Metrics’ survey of how the UK higher education sector was engaging with this agenda.

It’s not my purpose to provide a blow-by-blow account of who said what, as that would be dull for both of us. However, I did want to offer my reflections as to where I think we’re at and where, in my unsolicited opinion, we might go next.

1. UK HE institutions are at an early stage with this – still

If the Forum’s survey showed us anything, it was that UK HE is largely yet to engage with frameworks for the responsible use of metrics. I don’t want to steal their thunder as they plan to publish the results of their survey on their website. However, 75 of 96 HE respondents (78%) said they did not have a research metrics policy, and in the Forum’s opinion only four respondents had undertaken a “comprehensive set of actions” towards implementing responsible research metrics in their institutions. The Lis-Bibliometrics event showed a similar trend. When asked to cluster into “birds of a feather” groups, those who were just “packing their bags” to embark on a responsible metrics journey formed by far the largest group. Having said that, there is clearly huge interest in this. Both events were fully booked with waiting lists, and I understand the Turning the Tide event sold out in 24 hours.

2. When people talk about responsible metrics, they are mainly talking about bibliometrics

The Lis-Bibliometrics event was, unsurprisingly, focused on responsible bibliometrics. So I was keen to see how conversations would differ when discussing the broader theme of responsible research metrics at the UUK/HEFCE event. Actually, the conversations hardly differed at all – particularly when the early-career researchers took the stage. There was very little discussion about responsible grant income targets or the Teaching Excellence Framework or National Student Survey. The main non-biblio metrics to get an airing were the world university rankings but, of course, publications and citation metrics play a considerable part in those. This was particularly interesting to me as Loughborough University deliberately focused on responsible bibliometrics, building on the Leiden Manifesto, when developing its policy. Other institutions, such as Bath, opted to go broader, in line with The Metric Tide’s approach which covered all forms of research evaluation. However, all three frameworks were somewhat conflated in discussions. It would seem that, to most people, it is the use of bibliometrics for research evaluation which can cause the most damage. However, as Evelyn Welch pointed out, it’s not just bibliometrics or even research metrics that can lead to poor outcomes; teaching and knowledge exchange metrics can be just as bad.

Image credit: 23, 24 by Rob Shenk. This work is licensed under a CC BY-SA 2.0 license.

3. The indefatigable rise of DORA

The day before the Turning the Tide event, Research Councils UK announced it had become the latest in a long line of increasingly high-profile signatories to DORA. On the day itself, David Sweeney confirmed that UK Research and Innovation wouldn’t be far behind. Survey data showed 31 other institutions were considering signing DORA, though 12 had considered it and decided not to proceed. I don’t think the survey asked whether HEIs were considering alternative approaches – such as Leiden or the Metric Tide – though they were asked if they agreed with those frameworks’ principles (answer: broadly yes). As someone who has long expressed concerns about the comprehensiveness of DORA (it aims just two of its principles at institutions, whereas Leiden has ten), I must confess to being somewhat baffled by this. To my mind DORA lacks the depth of the Leiden Manifesto and the breadth of the Metric Tide. Penny Andrews, PhD student at the University of Sheffield, expressed concern about DORA’s science-based roots (it was developed by a group of cell biologists, with an inevitable focus on journal metrics) and challenged its relevance to non-STEM disciplines.

I can only really put the rise of DORA down to high-profile protagonists and investment (DORA has its own marketing person, Leiden doesn’t). It’s not bad. It’s way better than nothing. And I think most signatories are seeing it as a high-level commitment to a direction of travel on responsible metrics rather than a detailed itinerary. (Indeed, in a recent tweet, Stephen Curry suggested HEIs sign first and think later!)

However, to my mind it will never be the best product on the market, and that’s that. But I’m aware that I might be starting to sound like a lonely proponent of the Betamax, in danger of ending my days rocking on an office chair, sobbing “but it was a much better solution…”

4. It is not just HEIs that need to engage with this. Funders, rankings bodies, and suppliers must all come under scrutiny

OK – hands up, this was my point. But it certainly got some airtime in both actual and virtual discussions. Maybe it’s because I work for an institution that I feel sensitive to the finger-wagging the HE sector has been subject to for not engaging with this agenda, not signing DORA, and not publishing promotion criteria. However, universities only measure because they are measured. If funders are using odd metrics or – worse – world ranking performance to allocate funding or studentships, universities can end up doing odd things.

Similarly, bibliometric tools can be very easy to use irresponsibly, and this is outside of our jurisdiction. At the Lis-Bibliometrics event, Altmetric Founder and Director, Euan Adie, talked about two supplier “cop-outs” in this space, one of which was “we’ll just make data available and let the community decide what to do with it”. They won’t, he said. Suppliers need to ensure their products are in line with responsible metrics principles and be transparent about their limitations. A theme at both events was the importance of metrics and service suppliers co-designing with end-users. Lis-Bibliometrics plans to start gathering community feedback on this in the near future.

5. We need to measure what we value

Related to the previous point, both events quickly alighted on the importance of measuring what we value, and the fact that not everything we value can currently be measured. In terms of research impact, citations are not the only fruit. Members of the early-career panel at Turning the Tide were quick to point out that their “best” work was not in the “best” journals, and often academic outputs had an impact on industry that would never result in citations. Penny Andrews gave an example of a piece she had recently written for the publication Prospect with a circulation of thousands which was tweeted by Harriet Harman – but not valued by senior figures in her institution in the same way as an article in a high-impact journal. At the Lis-Bibliometrics event, Katie Evans raised the important question as to how we can encourage openness in early-career colleagues when they face such pressures to publish in usually closed “high-impact” journals. David Price felt senior colleagues had to lead the way. At UCL, Paul Ayris pointed out, promotion criteria now included openness metrics. The challenges of measuring openness, and open measures, were acknowledged. Interestingly enough, Lis-Bibliometrics plans to take a look at this in more detail at a future event.

6. The increasing importance of responsible peer review

All three of the existing frameworks for responsible metrics talk about the need for metrics to support, not supplant, expert peer assessment – and it’s hard to argue with that. However, this gives rise to two questions: 1) what should the balance between metrics and peer review actually be? And 2) just how responsible is peer review anyway? At Loughborough we recently held a publication strategy workshop for probationary academics, many of whom were new to the UK. When we explained the REF’s peer review approach, there were literally peals of laughter at the thought that it could actually deliver expert peer review on the range of disciplines it covers in the time available. “Give us metrics!”, cried one, on the basis that they would at least know what they were dealing with. When asked whether any of them had ever received guidance on acting as journal peer reviewers, very few raised their hands. Not surprisingly, then, we have situations like Daniel Graziotin’s, where a single journal peer review outcome ranged from reject, through major corrections, to minor corrections.

https://twitter.com/dgraziotin/status/960902222744875009

If we’re relying on peer review as the “gold standard” of research evaluation to which metrics must bow the knee, I think it should be subject to the same level of responsible-use scrutiny as metrics.

At the Lis-Bibliometrics event, David Price talked about the challenge REF panels faced in dealing with unconscious bias including the unconscious influence of journal reputation, and suggested that perhaps panels should be supplied with the DOI only, rather than the bibliographic reference. An excellent idea, I think. As a result of the REF open access policy we have thousands of branding-free author-accepted manuscripts sitting in institutional repositories – why don’t we make use of them?

7. It is very important to get this right

I’m a big fan of this point. The Royal Society’s Adam Wright reminded us very clearly of the mental health challenges faced by academics. He cited a University of Leeds study in which 26% of PhD students reported mental health concerns in year one of their PhD, a figure which rose to 48% by year three. Evelyn Welch talked about the prestige economy in which academics sit, and how they are constantly evaluating themselves and each other both explicitly and implicitly. We are all acutely aware of the workload pressures, and the four-fold burden (research, teaching, impact, admin) that academics bear. In this highly charged, highly pressured environment, the consequences of getting metrics wrong can, literally, be fatal.

8. People are the key to getting it right

My final point at both the Lis-Bibliometrics and UUK/HEFCE events was, in a nutshell, that metrics can’t be responsible; only people can. This sentiment clearly resonated, ending up as the most retweeted tweet after Cameron Neylon picked it up.

To use the Metric Tide’s five principles, we need robust, humble, diverse, transparent, and reflexive people doing metrics. And the owner of any responsible metrics policy (and they do need an owner) should be the most responsible of them all. As I said at the Lis-Bibliometrics event, an organisation can only be as responsible as its most senior decision-maker in this space. I have another blog post brewing on this so I won’t linger on the topic here, but suffice to say, responsible metrics is as important as it is hard. This needs to be the responsibility of well-informed, well-connected, wise people – in universities, funders, ranking organisations, and suppliers – who really care about the precious lives that sit behind the numbers they are crunching on their spreadsheets.

This blog post originally appeared under a different title on The Bibliomagician blog and is published under a CC BY 4.0 license.

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the author

Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in libraries and scholarly communication research. She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.
