In less than a decade the impact agenda has evolved from being a controversial idea to an established part of most national research systems. Over the same period the conceptualisation of research impact in the social sciences and the ability to create and measure research impact through digital communication media has also developed significantly. In this post, Ziyad Marar […]
Peer review and bibliometric indicators just don’t match up, according to a re-analysis of Italian research evaluation.
The Italian research evaluation agency undertook an extensive analysis to compare the results of peer review and bibliometric indicators for research evaluation. Their findings suggested both indicators produced similar results. Researchers Alberto Baccini and Giuseppe De Nicolao re-examine these results and find notable disagreements between the two techniques of evaluation in the sample and outline below the major shortcoming in the Italian Agency’s […]
Metrics in academia are often an opaque mess, filled with biases and ill-judged assumptions that are used in overly deterministic ways. By getting involved with their design, academics can productively push metrics in a more transparent direction. Chris Elsden, Sebastian Mellor and Rob Comber introduce an example of designing metrics within their own institution. Using the metric of grant income, their tool ResViz […]
The recent UK research assessment exercise, REF2014, attempted to be as fair and transparent as possible. However, Alan Dix, a member of the computing sub-panel, reports how a post-hoc analysis of public domain REF data reveals substantial implicit and emergent bias in terms of discipline sub-areas (theoretical vs applied), institutions (Russell Group vs post-1992), and gender. While metrics are […]
A call for inclusive indicators that explore research activities in “peripheral” topics and developing countries.
Science and Technology (S&T) systems all over the world are routinely monitored and assessed with indicators that were created to measure the natural sciences in developed countries. Ismael Ràfols and Jordi Molas-Gallart argue these indicators are often inappropriate in other contexts. They urge S&T analysts to create data and indicators that better reflect research activities and contributions in these “peripheral” […]
Ancient Cultures of Conceit Reloaded? A comparative look at the rise of metrics in higher education.
When considering the power of metrics and audit culture in higher education, are we at risk of romanticising the past? Have academics ever really worked in an environment free from ‘measurement’? Roger Burrows draws on his own recollection of the 1986 Research Selectivity Exercise (RSE), scholarly work on academic labour and fictional portrayals of academic life, which all demonstrate the substantial expansion of the role […]
Bringing together bibliometrics research from different disciplines – what can we learn from each other?
Currently, there is little exchange between the different communities interested in the domain of bibliometrics. A recent conference aimed to bridge this gap. Peter Kraker, Katrin Weller, Isabella Peters and Elisabeth Lex report on the multitude of topics and viewpoints covered on the quantitative analysis of scientific research. A key theme was the strong need for more openness and transparency: transparency in research […]
When are journal metrics useful? A balanced call for the contextualized and transparent use of all publication metrics
The Declaration on Research Assessment (DORA) has yet to achieve widespread institutional support in the UK. Elizabeth Gadd digs further into the slow uptake. Although there is growing acceptance that the Journal Impact Factor is subject to significant limitations, DORA feels rather negative in tone: an anti-journal metric tirade. There may be times when a journal metric, sensibly used, is […]
We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better.
Rather than expecting people to stop utilizing metrics altogether, we would be better off focusing on making sure the metrics are effective and accurate, argues Brett Buttliere. By looking across a variety of indicators, supporting a centralised, interoperable metrics hub, and utilizing more theory in building metrics, scientists can better understand the diverse facets of research impact and research quality.
The Metric Tide report calls for the responsible use of metrics. As a supplier of data and metrics to the scholarly community, Elsevier supports this approach and agrees that metrics should support human judgment and not replace it, writes Peter Darroch. To be used effectively, there needs to be a broad range of metrics generated by academia and industry […]
Can metrics be used responsibly? Structural conditions in Higher Ed push against expert-led, reflexive approach.
Do institutions and academics have a free choice in how they use metrics? Meera Sabaratnam argues that structural conditions in the present UK Higher Education system inhibit the responsible use of metrics. Funding volatility, rankings culture, and time constraints are just some of the issues making it highly improbable that the sector is capable of enacting the approach that the Metric […]
Using REF results to make simple comparisons is not necessarily responsible. Careful interpretation needed.
What are the implications of the HEFCE metrics review for the next REF? It is easy to forget that the REF is already all about metrics of research performance. Steven Hill, Head of Research Policy at HEFCE, reiterates that, as with any use of metrics, we need to take great care in how we use and interpret the results of the REF.
This is […]
The Management of Metrics: Globally agreed, unique identifiers for academic staff are a step in the right direction.
The Metric Tide report calls for research managers and administrators to champion the use of responsible metrics within their institutions. Simon Kerridge looks in greater detail at specific institutional actions. Signing up to initiatives such as the San Francisco Declaration on Research Assessment (DORA) is a good start. Furthermore, by mandating unique and disambiguated identifiers for academic staff, like ORCID iDs, […]
The metrics dilemma: University leadership needs to get smart about their strategic choices over what counts.
The review of metrics enjoins universities not to drift with the ‘metric tide’. To do this requires a united front of strategic leadership across the sector, argues HEFCE’s Steven Hill. Rather than the inevitable claims about league table positions on website front pages, universities could offer further explanation of how the rankings relate to the distinct mission of the institution.
This is part […]
The metric tide is rising: HEFCE metrics report argues metrics should support, not supplant, expert judgement.
James Wilsdon introduces the Independent Review of the Role of Metrics in Research Assessment and Management. The review found that the production and consumption of metrics remains contested and open to misunderstanding. Wider use of quantitative indicators, and the emergence of alternative metrics for societal impact, could support the transition to a more open, accountable and outward-facing research system. But […]