October 9th, 2012

We can do much better than rely on the self-fulfilling impact factor: Academics must harness ideas of engagement to illustrate their impact

Impact factors are a god-send for overworked and distracted individuals, and while Google Scholar goes some way toward using multiple measures to determine a researcher’s impact, Jonathan Becker argues that we can go one better. He writes that engagement is the next metric that academics must conquer.

Those of you in the professoriate are likely in the same position as I am: tenure and/or promotion are tied to scholarly output, which is judged largely by the impact factors of the journals in which we have published. That is, we judge each other’s scholarship largely on impact factor.

Numerous critiques of this “quantification of quality” have been made. In a recent article in the Journal of Philosophy of Education, Smeyers & Burbules (2011) offer a particularly compelling one, identifying a number of problems with the way impact factors are computed and used. Consider just the following four concerns:

  1. First and foremost, the numerator of the calculation (sketched just after this list) does not include citations or references in books, book chapters, newspapers, or web-based publications, which is particularly problematic in the age of digital information.
  2. The impact factor is a journal-level metric and not necessarily an indicator of article quality. They cite an example offered by Monastersky (2005): “a quarter of the articles in Nature last year drew 89 per cent of the citations to that journal, so a vast majority of the articles received far fewer than the average of 32 citations reflected in the most recent impact factor.”
  3. Furthermore, among the 25 per cent of articles that drew 89 per cent of the citations, one article might have been cited frequently (thus driving up the impact factor of the journal that it is in) when it is actually being “cited critically, as a bad example or as a foil for another argument.” In this way, impact factor may be more a measure of publicity than quality.
  4. Finally, there is the self-fulfilling nature of the impact factor. We might call this the social construction of journal quality. For example, if you were to ask any professor of educational leadership what the “top” journal in the field is, you would more often than not hear “Educational Administration Quarterly (EAQ).” Why? Because we are socialized within the discipline to believe that. Therefore, scholars within the discipline read that journal most frequently and look to it most frequently for warrants for their knowledge claims. Busy scholars of educational leadership will not necessarily find the time to search for the best warrants. So, of course EAQ articles will be cited more frequently, thus boosting its impact factor.
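To make the first concern concrete: the standard two-year impact factor is simply a ratio of the citations a journal receives in a given year to its articles from the previous two years, over the number of “citable items” it published in those years. Here is a minimal sketch of that calculation, with invented numbers; what gets counted in the numerator and denominator is exactly what Smeyers & Burbules take issue with.

```python
# Minimal sketch of the standard two-year journal impact factor.
# The numbers below are invented. Note that the numerator counts only
# citations the indexer tracks -- not books, chapters, newspapers, or
# web-based publications, which is the first concern above.

def two_year_impact_factor(citations_received, citable_items, year):
    """Citations received in `year` to articles published in the two
    preceding years, divided by citable items published in those years."""
    numerator = sum(citations_received.get(y, 0) for y in (year - 1, year - 2))
    denominator = sum(citable_items.get(y, 0) for y in (year - 1, year - 2))
    return numerator / denominator if denominator else 0.0

citations_received = {2010: 150, 2011: 170}  # citations in 2012, by cited article's year
citable_items = {2010: 40, 2011: 45}         # articles the journal published each year
print(two_year_impact_factor(citations_received, citable_items, 2012))  # 320/85 ~= 3.76
```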

Smeyers & Burbules are most convincing when writing about the privileged place of quantification in academia.

There is something intrinsically reifying about measures like impact factors: they are a god-send for overworked and distracted officials, because they serve as a handy shortcut; they obviate the need for more difficult and potentially controversial judgments, such as actually reading and arguing over the value of scholarship; their pseudo-scientific aura seems to protect review processes from certain kinds of challenge, including legal challenge, because everyone is treated ‘the same’ procedurally and held to ‘the same’ criteria.

Smeyers & Burbules mention alternate methods of computing impact factors (see e.g. the Australian government’s Australian Research Council, HERE and HERE). They also address the “future of the impact factor” and throw out possibilities such as “counting downloads, or hyperlinks,” but, ultimately, come up short in this part of the article.

For “overworked and distracted officials,” short of expecting them to actually read and make independent judgments of a body of scholarship, the question, then, is how we might “judge” impact in the future. Or, stated differently, if we take seriously the idea of scholars as public intellectuals embracing the affordances of the open web, it behooves us to consider how to rethink the “impact factor.”

Fortunately, there are pieces of a puzzle that are starting to come together. There are certainly other ideas out there, but consider these four possibilities:

  • Rick Hess’s public presence ranking system is far from perfect, but it draws on a combination of Google Scholar, Amazon.com, and mentions in widely disseminated education publications (namely Education Week and the Chronicle of Higher Education), blogs, newspapers, and the Congressional Record.
  • Hess’s use of the Google Scholar score preceded Google’s announcement of Google Scholar Citations, which uses “a statistical model based on author names, bibliographic data, and article content to group articles likely written by the same author.” Once an author identifies her articles, Google collects citations to them, computes three different dynamic citation metrics (they are automatically updated as new citations to articles are found), and graphs the metrics over time.
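The quoted announcement does not name the three metrics, but Google Scholar profiles report figures such as total citations, the h-index, and the i10-index. As a rough illustration (with invented per-article citation counts), two of those metrics can be computed like so:

```python
# Rough sketch of two author-level citation metrics of the kind Google
# Scholar Citations reports; the per-article counts below are invented.

def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

papers = [48, 33, 30, 12, 10, 8, 5, 2, 1, 0]
print(h_index(papers))    # 6 -> six papers each cited at least six times
print(i10_index(papers))  # 5
```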

Those two approaches allow us to think about two things: the use of multiple measures to get at “impact,” and how one might start to scrape data from the open web.
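Neither approach dictates how the separate measures should be combined. As a toy sketch (the measures, values, and equal weighting here are all assumptions for illustration, not any published ranking’s method), one could standardize each measure and average the results:

```python
# Toy sketch of combining multiple "impact" measures into one composite
# score. The measures, values, and equal weighting are assumptions made
# for illustration, not any published ranking's actual method.

from statistics import mean, stdev

scholars = {
    "A": {"citations": 420, "press_mentions": 3,  "blog_links": 25},
    "B": {"citations": 150, "press_mentions": 12, "blog_links": 90},
    "C": {"citations": 300, "press_mentions": 6,  "blog_links": 40},
}

def z_scores(values):
    """Standardize a list of values so different measures are comparable."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s if s else 0.0 for v in values]

measures = ["citations", "press_mentions", "blog_links"]
names = list(scholars)
normalized = {m: z_scores([scholars[n][m] for n in names]) for m in measures}

for i, name in enumerate(names):
    composite = mean(normalized[m][i] for m in measures)
    print(name, round(composite, 2))
```

Getting a bit more technologically sophisticated, however, we start to see some greater possibilities.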

  • brainSCANr (the Brain Systems, Connections, Associations, and Network Relationships tool) is a growing system that explores the relationships between terms (in their case, for now, neuroscience terms) in peer-reviewed publications. As is written on their homepage, the system “assumes that somewhere in all the chaos and noise of the more than 20 million papers… there must be some order and rationality.” By clicking here, you can see a dynamic sociograph of how the term “happiness” is related to other terms within the articles in the PubMed database. The image below is a static version of the sociograph pulled on August 3, 2011.

[Figure: static sociograph of “happiness” and related terms in the PubMed database, August 3, 2011]

While brainSCANr is a technology used to examine relationships between terms, it does not take a huge stretch to imagine a similar system that examines relationships between articles, authors, and so on. brainSCANr uses social network analysis (SNA), which can be used to generate statistics related to the density of a network or the centrality of a node (which could be an individual, an idea, an article, etc.). The density of a network around an article is not unlike current impact factor metrics, but centrality metrics might add to our thinking about impact.
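To make those SNA terms concrete, here is a minimal sketch using Python’s networkx library on a small invented citation network; the article names and citation links are assumptions for illustration only.

```python
# Minimal sketch of SNA metrics over a hypothetical citation network
# using the networkx library. Articles and citation links are invented.

import networkx as nx

G = nx.DiGraph()
# An edge (a, b) means "article a cites article b."
G.add_edges_from([
    ("art1", "art3"), ("art2", "art3"), ("art4", "art3"),
    ("art2", "art1"), ("art4", "art1"), ("art4", "art2"),
])

print(nx.density(G))                 # how interconnected the network is overall
print(nx.in_degree_centrality(G))    # a crude analogue of "times cited"
print(nx.betweenness_centrality(G))  # which articles bridge otherwise separate clusters
```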

  • In addition to centrality within a network of articles/ideas, we might also consider adding “engagement” metrics to a multi-factored look at impact. PostRank, which has since been acquired by Google, “monitors and collects social engagement events correlated with online content in real-time across the web. PostRank gathers where and when stories generate comments, bookmarks, tweets, and other forms of interaction from a host of social hubs.” Recently, the folks at PostRank ran a “quick experiment” to see if they could determine what the most engaging TED talks have been. The initial result was a spreadsheet of 766 TED talks, which has become a dynamic site where you can view TED talks ranked by PostRank’s engagement score. Engagement is measured by the number of times a given talk has been tweeted, bookmarked (on Delicious), posted (to Facebook), and voted on (Reddit). Engagement is essentially a measure of virality.
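PostRank’s actual scoring formula is proprietary, so the following is only a hedged sketch of the general idea: weight each kind of interaction and sum the weighted counts. The weights and counts here are invented.

```python
# Sketch of a PostRank-style engagement score: a weighted sum of social
# interaction counts for one item. Weights and counts are invented;
# PostRank's real formula is proprietary.

WEIGHTS = {"tweets": 1.0, "bookmarks": 2.0, "facebook_posts": 1.5, "reddit_votes": 0.5}

def engagement_score(events):
    """events: mapping of interaction type -> count for one talk or article."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in events.items())

talk = {"tweets": 1200, "bookmarks": 340, "facebook_posts": 560, "reddit_votes": 2100}
print(engagement_score(talk))  # 1200 + 680 + 840 + 1050 = 3770.0
```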

Public presence, density, centrality, engagement… these are concepts that we need to embrace and harness as we rethink what “impact factor” might look like as scholars disseminate knowledge more widely on the open web. Modern archiving and computing technologies allow us to do SO much better than relying on an outdated, limited, self-fulfilling impact factor.

Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.

This blog was originally published on Jonathan’s own blog, Educational Insanity, and is republished here with permission.

About the author:
Jonathan Becker is an assistant professor in the Educational Leadership Department at Virginia Commonwealth University, where he teaches courses in school law and educational research methods.

Print Friendly, PDF & Email

About the author

Blog Admin

Posted In: Academic publishing | Citations | Impact

7 Comments