Taster

December 28th, 2019

2019 In Review: Metrics and research assessment

Estimated reading time: 5 minutes


As governments increasingly look to national research systems as important inputs into the ‘knowledge economy’, developing ways to assess and understand their performance has become a focus for policy and critique. This post brings together some of the top posts on research metrics and assessment that appeared on the LSE Impact Blog in 2019.


Working to the rule – How bibliometric targets distorted Italian research

As Goodhart’s law states: ‘when a measure becomes a target, it ceases to be a good measure’. Using bibliometrics to measure and assess researchers has become increasingly common, but does implementing these policies therefore devalue the metrics they are based on? In this post, Alberto Baccini, Giuseppe De Nicolao and Eugenio Petrovich present evidence from a study of Italian researchers revealing how the introduction of bibliometric targets into the Italian academy has changed the way Italian academics cite and use the work of their colleagues.


Knowledge exchange or research impact – what is the difference between REF and KEF?

The UK research system has historically been innovative in its approach to measuring and assessing the impacts of academic research. However, the recent development of the Knowledge Exchange Framework (KEF) has elicited scepticism as to how it will significantly differ from the impact element of the Research Excellence Framework (REF). In this post, Hamish McAlpine and Steven Hill outline the aims and objectives of the KEF and argue that it provides an important means of understanding the full range of research impacts taking place in UK universities.


Measuring Inequality – Creating an indicator to assess gender bias in universities

Higher education and research institutions are increasingly coming to terms with the issue of gender inequality. However, efforts to move in this direction are often isolated and difficult to compare and benchmark against each other. In this post, Caroline Wagner presents a new initiative from the Centre for Science and Technology Studies at Leiden (CWTS) to assess gender inequality in research publication across different institutions internationally and drive further change in the sector.


Developing a finer-grained analysis of research impact: Can we assess the wider effects of public engagement?

Promoting public engagement with research has become a core mission for research funders. However, the extent to which researchers can assess the impact of this engagement is often under-analysed and limited to success stories. Drawing on the example of development aid, Marco J Haenssgen argues we need to widen the parameters for assessing public engagement and begin to develop a skills base for the external evaluation of public engagement work.


The “impact” of the Journal Impact Factor in the review, tenure, and promotion process

The Journal Impact Factor (JIF) – a measure reflecting the average number of citations to recent articles published in a journal – has been widely critiqued as a measure of individual academic performance. However, it is unclear whether these criticisms and high-profile declarations, such as DORA, have led to significant cultural change. In this post, Erin McKiernan, Juan Pablo Alperin and Alice Fleerackers present evidence from a recent study of review, promotion and tenure documents, showing the extent to which the JIF remains embedded as a measure of success in the academic job market.
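For readers unfamiliar with the measure, the JIF is a journal-level average rather than an article-level one; in its standard formulation, a journal’s 2019 JIF would be calculated as:

\[
\mathrm{JIF}_{2019} = \frac{\text{citations received in 2019 to items the journal published in 2017 and 2018}}{\text{number of citable items the journal published in 2017 and 2018}}
\]

Because citation distributions are heavily skewed, a few highly cited papers can inflate this average, which is one reason the JIF is widely seen as a poor proxy for the merit of any individual article or author.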


Are altmetrics able to measure societal impact in a similar way to peer review?

Altmetrics have become an increasingly ubiquitous part of scholarly communication, although the value they indicate is contested. In this post, Lutz Bornmann and Robin Haunschild present evidence from their recent study examining how peer review, altmetrics, and bibliometric analyses relate to societal and academic impact. Drawing on evidence from REF2014 submissions, they argue that altmetrics may provide evidence of wider non-academic debate, but correlate poorly with peer review assessments of societal impact.


How diverse is your reading list? (Probably not very…)

The dominance of scholars from the global North is widespread, and this extends to the student curriculum. Data on reading lists show large authorial imbalances, which have consequences for the methodological tools available in research and allow dominant paradigms in disciplines to remain unchallenged.


Assessing Impact Assessment – What can be learnt from Australia’s Engagement and Impact Assessment?

The impact agenda is an international phenomenon that has evolved through numerous iterations. Discussing the development and the recently released results of the Australian Engagement and Impact Assessment (EIA), Ksenia Sawczak considers the effectiveness of this latest exercise in impact assessment, finding that it provides an inadequate account of the impact of Australian research and, ultimately, a shaky evidence base for the development of future policy.


The careers of carers – A numerical adjustment cannot level the playing field for researchers who take time off to care for children

Quantitative measures of the effect of caring for children on research outputs (published papers and citations) have been used by some universities as a tool to address gender bias in academic grant and job applications. In this post, Adrian Barnett argues that these adjustments fail to capture the real impacts of caring for children and should be replaced with contextual qualitative assessments.


Ahead of the Game – How impact is an additional hurdle for scholars from widening participation countries seeking EU funding

The next round of the EU’s research and innovation funding programme, Horizon Europe, will include a requirement to develop a mission statement outlining how the research will achieve societal impact. In this post, Stefan de Jong and Reetta Muhonen explore how regional variations across Europe in the understanding of research impact are likely to affect the chances of researchers from widening participation countries securing EU funding.


Counting is not enough – How plain language statements could improve research assessment

Academic hiring and promotion committees and funding bodies often use publication lists as a shortcut to assessing the quality of applications. In this post, Janet Hering argues that in order to avoid bias towards prestigious titles, plain language statements should become a standard feature of academic assessment.


Now is the time to update our understanding of scientific impact in light of open scholarship

Sascha Friesike, Benedikt Fecher and Gert G. Wagner outline three systemic shifts in scholarly communication that render traditional bibliometric measures of impact outdated, and call for a renewed debate on how we understand and measure research impact.


Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

Featured image credit: adapted from Tyler Easton via Unsplash (CC0 1.0)


About the author

Taster

Posted In: Annual review | Measuring Research | REF2021 | Research evaluation | Research funding
