The limitations of simple ‘citation count’ figures are well-known. Chris Carroll argues that the impact of an academic research paper might be better measured by counting the number of times it is cited within citing publications, rather than by simply recording whether or not it has been cited. Three or more citations of the key paper arguably represent a […]
Novel breakthroughs in research can have a dramatic impact on scientific discovery but face some distinct disadvantages in getting wider recognition. Jian Wang, Reinhilde Veugelers and Paula Stephan present an overview of their findings, which suggest an inherent bias in bibliometric measures against novel research. The bias is of particular concern given the increased reliance funding agencies place on classic bibliometric indicators in making funding […]
Peer review and bibliometric indicators just don’t match up according to re-analysis of Italian research evaluation.
The Italian research evaluation agency undertook an extensive analysis to compare the results of peer review and bibliometric indicators for research evaluation. Its findings suggested the two methods produced similar results. Researchers Alberto Baccini and Giuseppe De Nicolao re-examine these results, find notable disagreements between the two techniques of evaluation in the sample, and outline below the major shortcoming in the Italian Agency’s […]
Drawing on citation data that spans disciplines and time periods, Elliott Green has identified the most cited publications in the social sciences. Here he shares his findings on the 25 most cited books as well as the top ten journal articles. The sheer number of citations for these top cited publications is worth noting as is the fact that […]
The recent UK research assessment exercise, REF2014, attempted to be as fair and transparent as possible. However, Alan Dix, a member of the computing sub-panel, reports how a post-hoc analysis of public domain REF data reveals substantial implicit and emergent bias in terms of discipline sub-areas (theoretical vs applied), institutions (Russell Group vs post-1992), and gender. While metrics are […]
Accounting for Impact? How the Impact Factor is shaping research and what this means for knowledge production.
Why does the impact factor continue to play such a consequential role in academia? Alex Rushforth and Sarah de Rijcke look at how considerations of the metric come into play from the early stages of research planning through to the later stages of publication. Even with initiatives against the use of impact factors, scientists themselves will likely err on the side of caution and continue […]
Credit where credit is due: Research parasites and tackling misconceptions about academic data sharing
Benedikt Fecher and Gert G. Wagner look at a recent editorial which faced considerable criticism for typecasting researchers who use or build on previous datasets as “research parasites”. They argue that the authors appear to miss the point, not only of data sharing, but of scientific research more broadly. But as problematic as the editorial may be, it points to […]
A call for inclusive indicators that explore research activities in “peripheral” topics and developing countries.
Science and Technology (S&T) systems all over the world are routinely monitored and assessed with indicators that were created to measure the natural sciences in developed countries. Ismael Ràfols and Jordi Molas-Gallart argue these indicators are often inappropriate in other contexts. They urge S&T analysts to create data and indicators that better reflect research activities and contributions in these “peripheral” […]
It’s time to put our impact data to work to get a better understanding of the value, use and re-use of research.
If published articles and research data are subject to open access and sharing mandates, why not also the data on impact-related activity of research outputs? Liz Allen argues that the curation of an open ‘impact genome project’ could go a long way in remedying our limited understanding of impact. Of course there would be lots of variants in the type of impact […]
Access to more and more publication and citation data offers the potential for more powerful impact measures than traditional bibliometrics. Accounting for more of the context in the relationship between the citing and cited publications could provide more subtle and nuanced impact measurement. Ryan Whalen looks at the different ways that scientific content is related, and how these relationships could be […]
Why we need a hub for software in science: Research software developers deserve wider academic recognition.
Jure Triglav looks at how the recognition, crediting and discovery of software is a massive problem facing the scientific community. Though instrumental to scientific innovation, software creation and maintenance is rarely recognised. By building a hub for research software, scientists could shine a spotlight on its developers and show the extent, importance and impact of their work. Enter Depsy – a new research […]
Bringing together bibliometrics research from different disciplines – what can we learn from each other?
Currently, there is little exchange between the different communities interested in the domain of bibliometrics. A recent conference aimed to bridge this gap. Peter Kraker, Katrin Weller, Isabella Peters and Elisabeth Lex report on the multitude of topics and viewpoints covered on the quantitative analysis of scientific research. A key theme was the strong need for more openness and transparency: transparency in research […]
Matthew Woollard discusses the importance of UK data infrastructure and how the systematic management and sharing of research data can lead to many benefits for the research community and the public. Here he introduces #DataImpact2015 where a panel of leading data innovators will explore data re-use in policy and research, sharing their experiences of demonstrating data enhanced impact.
The UK Data Service is trusted to […]
Applied Altmetrics: How university presses, academic publishing services and institutional repositories benefit.
Academic institutions are increasingly looking for ways to demonstrate the value and breadth of their publishing activity. Danielle Padula and Catherine Williams look at how one university, the University of Michigan, has incorporated altmetrics data as an author service to help academic colleagues articulate institution-wide successes.
A key benefit of altmetrics for younger or smaller publishers is that, unlike the Thomson Reuters’ […]
When are journal metrics useful? A balanced call for the contextualized and transparent use of all publication metrics
The Declaration on Research Assessment (DORA) has yet to achieve widespread institutional support in the UK. Elizabeth Gadd digs further into the reasons for this slow uptake. Although there is growing acceptance that the Journal Impact Factor is subject to significant limitations, DORA feels rather negative in tone: an anti-journal-metric tirade. There may be times when a journal metric, sensibly used, is […]
Altmetrics offer a record of the wider attention and engagement that academic work generates, and these broad indicators can provide a helpful starting point for understanding the influence and impact of your research. Danielle Padula and Catherine Williams provide ten simple steps for researchers looking to boost online engagement with, and wider attention to, their academic research.
Authors are facing more competition than ever for funding […]