Sierra Williams

March 31st, 2014

Four reasons to stop caring so much about the h-index.

The h-index attempts to measure the productivity and impact of a scholar’s published work. But reducing scholarly work to a single number in this way has significant limitations. Stacy Konkiel highlights four specific reasons the h-index fails to capture a complete picture of research impact. Furthermore, a variety of new altmetrics tools now focus on measuring the influence of all of a researcher’s outputs, not just their papers.

You’re surfing the research literature on your lunch break and find an unfamiliar author listed on a great new publication. How do you size them up in a snap? Google Scholar is an obvious first step. You type their name in, find their profile, and–ah, there it is! Their h-index, right at the top. Now you know their quality as a scholar.

Or do you?

The h-index is an attempt to sum up a scholar in a single number that balances productivity and impact. Anna, our example, has an h-index of 25 because she has 25 papers that have each received at least 25 citations. Today, this number is used for both informal evaluation (like sizing up colleagues) and formal evaluation (like tenure and promotion).
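The definition is easy to make concrete. Here is a minimal Python sketch of how an h-index is computed from a list of per-paper citation counts (the counts in the example are made up for illustration):

    # Minimal sketch: compute an h-index from per-paper citation counts.
    # The example counts are made up for illustration.
    def h_index(citations):
        """Return the largest h such that h papers have >= h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 5, 3, 2, 1]))  # -> 3: three papers with 3+ citations each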

[Image: a Google Scholar profile, with the h-index displayed at the top]

We think that’s a problem. The h-index is failing on the job, and here’s how:

1. Comparing h-indices is comparing apples and oranges.

Let’s revisit Anna Llobet, our example. Her h-index is 25. Is that good?

Well, “good” depends on several variables. First, what is her field of study? What’s considered “good” in Clinical Medicine (84) differs from what’s considered “good” in Mathematics (19). Some fields simply publish and cite more than others.

Next, how far along is Anna in her career? Junior researchers have an h-index disadvantage. Their h-index can only be as high as the number of papers they have published, even if each paper is highly cited. If she is only 9 years into her career, Anna will not have published as many papers as someone who has been in the field for 35 years.

Furthermore, citations take years to accumulate. The consequence is that the h-index doesn’t have much discriminatory power for young scholars, and can’t be used to compare researchers at different stages of their careers. To compare Anna to a more senior researcher would be like comparing apples and oranges.
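Plugging hypothetical numbers into the h_index sketch above makes the cap concrete: a few blockbuster papers can never outscore a long shelf of modestly cited ones.

    # Hypothetical citation counts, reusing the h_index() sketch above.
    junior = [300, 250, 200]   # 3 papers, each heavily cited
    senior = [30] * 40         # 40 papers, each cited 30 times

    print(h_index(junior))  # -> 3: capped at her paper count
    print(h_index(senior))  # -> 30: more papers, a higher ceiling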

Did you know that Anna also has more than one h-index? Her h-index (and yours) depends on which database you are looking at, because citation counts differ from database to database. (Which one should she list on her CV? The highest one, of course. :))

2. The h-index ignores science that isn’t shaped like an article.

What if you work in a field that values patents over publications, like chemistry? Sorry, only articles count toward your h-index. Same thing goes for software, blog posts, or other types of “non-traditional” scholarly outputs (and even one you’d consider “traditional”: books).

Similarly, the h-index only counts citations to your work that come from journal articles written by other scholars. It can’t capture whether you’ve had tremendous influence on public policy or on improving global health outcomes. That doesn’t seem smart.

3. A scholar’s impact can’t be summed up with a single number.

We’ve seen from the journal impact factor that single-number impact indicators can encourage lazy evaluation. At the scariest times in your career–when you are going up for tenure or promotion, for instance–do you really want to encourage that? Of course not. You want your evaluators to see all of the ways you’ve made an impact in your field. Your contributions are too many and too varied to be summed up in a single number. Researchers in some fields are rejecting the h-index for this very reason.

So, why judge Anna by her h-index alone?

Questions of completeness aside, the h-index might not measure the right things for your needs. Its particular balance of quantity versus influence can miss the point of what you care about. For some people, that might be a single hit paper, popular with both other scholars and the public. (This article on the “Big Food” industry and its global health effects is a good example.) Others might care more about how often their many, rarely cited papers are used by practitioners (like those by CG Bremner, who studied Barrett Syndrome, a lesser known relative of gastroesophageal reflux disease). When evaluating others, the metrics you use should get at the root of what you’re trying to understand about their impact.

4. The h-index is dumb when it comes to authorship.

Some physicists are one of a thousand authors on a single paper. Should that fractional contribution count the same as your single-author paper? The h-index doesn’t take authorship share into consideration.

What if you are first author on a paper? (Or last author, if that’s how your field indicates lead authorship.) Shouldn’t citations to that paper weigh more for you than they do for your co-authors, since you had a larger influence on the development of that publication?

The h-index doesn’t account for these nuances. So, how should we use the h-index? Many have attempted to fix its weaknesses with various computational models that, for example, reward highly cited papers, correct for career length, rank authors’ papers against other papers published in the same year and source, or count just the average citations of the most high-impact “core” of an author’s work.
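To give a flavor of these corrections, here is a minimal sketch of one well-known example, Hirsch’s m-quotient, which divides the h-index by years since first publication to soften the career-length bias. The values below are hypothetical.

    # Minimal sketch of the m-quotient: h-index divided by academic age,
    # one proposed correction for career length. All values are hypothetical.
    def m_quotient(h, years_since_first_paper):
        return h / years_since_first_paper

    print(round(m_quotient(25, 9), 2))   # Anna, 9 years in: 2.78
    print(round(m_quotient(60, 35), 2))  # a 35-year veteran: 1.71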

None of these have been widely adopted, and all of them boil down a scientist’s career to a single number that only measures one type of impact. What we need is more data. Altmetrics–new measures of how scholarship is recommended, cited, saved, viewed, and discussed online–are just the solution. Altmetrics measure the influence of all of a researcher’s outputs, not just their papers. A variety of new altmetrics tools can help you get a more complete picture of others’ research impact, beyond the h-index. You can also use these tools to communicate your own, more complete impact story to others.

So what should you do when you run into an h-index? Have fun looking if you are curious, but don’t take the h-index too seriously.

This piece originally appeared on the Impactstory blog and is reposted with the author’s permission.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Stacy Konkiel is the Director of Marketing & Research at Impactstory, an open-source, web-based tool that helps researchers explore and share the diverse impacts of all their research products. A former academic librarian, Stacy has written and spoken most often about the potential for altmetrics in academic libraries.

Stacy has been an advocate for Open Scholarship since the beginning of her career, but credits her time at Public Library of Science (PLOS) with sparking her interest in altmetrics and other revolutions in scientific communication. You can connect with Stacy on Twitter at @skonkiel.
