As governments increasingly look to national research systems as important inputs into the ‘knowledge economy’, developing ways to assess and understand their performance has become a focus for policy and critique. This post brings together some of the top posts on research metrics and assessment that appeared on the LSE Impact Blog in 2019.
As Goodhart’s law states: ‘when a measure becomes a target, it ceases to be a good measure’. Using bibliometrics to measure and assess researchers has become increasingly common, but does implementing these policies therefore devalue the metrics they are based on? In this post, Alberto Baccini, Giuseppe De Nicolao and Eugenio Petrovich present evidence from a study of Italian researchers revealing how the introduction of bibliometric targets into the Italian academy has changed the way Italian academics cite and use the work of their colleagues.
The UK research system has historically been innovative in its approach to measuring and assessing the impacts of academic research. However, the recent development of the Knowledge Exchange Framework (KEF) has elicited scepticism as to how this framework will significantly differ from the impact element of the Research Excellence Framework (REF). In this post, Hamish McAlpine and Steven Hill outline the aims and objectives of the KEF and argue that it provides an important means of understanding the full range of research impacts taking place in UK universities.
Higher education and research institutions are increasingly coming to terms with the issue of gender inequality. However, efforts to move in this direction are often isolated and difficult to compare and benchmark against each other. In this post, Caroline Wagner presents a new initiative from the Centre for Science and Technology Studies at Leiden (CWTS) to assess gender inequality in research publication across different institutions internationally and drive further change in the sector.
Promoting public engagement with research has become a core mission for research funders. However, the extent to which researchers can assess the impact of this engagement is often under-analysed and limited to success stories. Drawing on the example of development aid, Marco J Haenssgen argues we need to widen the parameters for assessing public engagement and begin to develop a skills base for the external evaluation of public engagement work.
The Journal Impact Factor (JIF) – a measure reflecting the average number of citations to recent articles published in a journal – has been widely critiqued as a measure of individual academic performance. However, it is unclear whether these criticisms and high profile declarations, such as DORA, have led to significant cultural change. In this post, Erin McKiernan, Juan Pablo Alperin and Alice Fleerackers present evidence from a recent study of review, promotion and tenure documents, showing the extent to which the JIF remains embedded as a measure of success in the academic job market.
Altmetrics have become an increasingly ubiquitous part of scholarly communication, although the value they indicate is contested. In this post, Lutz Bornmann and Robin Haunschild present evidence from their recent study examining the relationship of peer review, altmetrics, and bibliometric analyses with societal and academic impact. Drawing on evidence from REF2014 submissions, they argue altmetrics may provide evidence for wider non-academic debates, but correlate poorly with peer review assessments of societal impact.
The dominance of scholars from the global North is widespread, and this extends to the student curriculum. Data on reading lists show large authorial imbalances, which have consequences for the methodological tools available in research and allow dominant paradigms in disciplines to remain unchallenged.
The impact agenda is an international and evolving phenomenon that has undergone numerous iterations. Discussing the development and recent release of the results of the Australian Engagement and Impact Assessment (EIA), Ksenia Sawczak considers the effectiveness of this latest exercise in impact assessment, finding that it provides an inadequate account of the impact of Australian research and, ultimately, a shaky evidence base for the development of future policy.
Quantitative measures of the effect of caring for children on research outputs (published papers and citations) have been used by some universities as a tool to address gender bias in academic grant and job applications. In this post, Adrian Barnett argues that these adjustments fail to capture the real impacts of caring for children and should be replaced with contextual qualitative assessments.
The next round of the EU’s research and innovation funding programme, Horizon Europe, will include a requirement to develop a mission statement outlining how the research will achieve societal impact. In this post, Stefan de Jong and Reetta Muhonen explore how regional variations across Europe in the understanding of research impact are likely to affect the opportunities of researchers from widening participation countries to secure EU funding.
Academic hiring and promotion committees and funding bodies often use publication lists as a shortcut to assessing the quality of applications. In this post, Janet Hering argues that in order to avoid bias towards prestigious titles, plain language statements should become a standard feature of academic assessment.
Sascha Friesike, Benedikt Fecher and Gert G. Wagner outline three systemic shifts in scholarly communication that render traditional bibliometric measures of impact outdated and call for a renewed debate on how we understand and measure research impact.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.