Kate Wing breaks down the criteria used for Science magazine’s list of Top Scientists on Twitter and finds the approach severely lacking. With a range of tools available for measuring social media impact, simplistic measures like follower counts are an outdated proxy for influence. But for all the flaws in Science’s approach, the list has opened up a conversation about what should be measured and how. For example, is a scientist who gets the most retweets more influential than one who cultivates a network of scientists across disciplines?

Last week there was an awkward date between Science and Twitter. A fairly thoughtful piece about how scientists can, should and do use Twitter was wrapped around a list of the “Top 50 Science Stars of Twitter.” The internet was not amused.

Science, you can do better than this. As can the large scientific community that is already actively using social media. Let’s break this down and draw some lessons, starting with the criteria for “Top Scientists”.

Twitter follower numbers are not a great metric for anything but number of Twitter followers. Five percent of Twitter accounts are fake and eight percent are autoretweeting robots. That’s 30 million followers that aren’t actually listening. Generally, the more you tweet, the more robots you’ll get following you. So, without analyzing the individual follower lists of each scientist, you can’t be sure that, say, P.Z. Myers actually reaches more humans than Steven Pinker. Science points out that follower number is a ‘very crude proxy of influence’ but that they chose this as a criterion because it was the ‘most accessible’. And yet somehow not accessible enough to catch many female scientists, including Dr. Sylvia Earle, whose 34,600 followers would have placed her 20th.
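The 30 million figure is back-of-envelope arithmetic: roughly 13 percent (5% fake plus 8% bots) of Twitter's active account base at the time. A quick sketch, assuming a base of about 230 million active accounts (the post doesn't state the base, so that number is an assumption here):

```python
# Rough arithmetic behind the "30 million" figure.
# ASSUMPTION: ~230 million active Twitter accounts (circa 2014);
# the post does not state the base it used.
active_accounts = 230_000_000
fake_share = 0.05   # fake accounts
bot_share = 0.08    # autoretweeting robots

non_listening = active_accounts * (fake_share + bot_share)
print(f"{non_listening / 1e6:.0f} million accounts not actually listening")
```

Whatever the exact base, the point stands: a double-digit slice of any large follower count is noise, not audience.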

Geneticist Neil Hall was perturbed that scientists seemed to be ‘famous for being famous’ while not contributing to science in the true coin of the realm – academic publications. He created a formula comparing scientific citations against Twitter followers and dubbed it the “K-Index.” I give Dr. Hall credit for coming up with a metric to get at what he saw as a problem: the overexposure of scientists who don’t publish enough. Even in its current rough and cheeky form, the K-Index still provides slightly more information than Science’s choice of straight Twitter followers. If Science did its rankings by K-Index, Neil Tyson drops to the bottom, Daniel Gilbert rockets into third place and two women – Karen James & Amy Mainzer – move into the top 20. Dr. Hall suggests that anyone with a K-Index over five should stop tweeting and get back to publishing. Everyone on Science’s list has at least an eight, so they won’t get invited to any of Dr. Hall’s panels.
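For readers who want the mechanics: Hall's paper models an “expected” follower count from a scientist's citation total and defines the K-Index as actual followers divided by that expectation. A minimal sketch, using the constants from Hall's published empirical fit (F = 43.3 × C^0.32; treat the exact numbers as illustrative):

```python
def k_index(followers: float, citations: float) -> float:
    """Kardashian Index per Hall (2014): actual Twitter followers
    divided by the follower count 'expected' from citations.
    Expected followers = 43.3 * citations**0.32 (Hall's empirical fit)."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

# A hypothetical scientist with 20,000 followers and 100 citations has
# an expected follower count of roughly 189, so a K-Index around 106 --
# far above Hall's tongue-in-cheek threshold of 5.
```

The formula makes the metric's bias explicit: it rewards citations and penalizes audience, which is exactly why it reshuffles Science's list so dramatically.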

That gives us two definitions of “Top” — the most followers and the K-Index. Neither tells you much about influence. Is a scientist who gets the most retweets more influential than one who cultivates a network of scientists across disciplines? How would you evaluate scientists who use Twitter for conversations, so their high tweet count is more replies than broadcasts? If only there were data scientists and social media companies thinking about this very issue. Oh wait, it’s 2014.

I took the totally free step of checking the 9/21/2014 Tweetreach of Science’s top scientist, Neil Tyson, along with Sylvia Earle, and popular science blogger Dr. Bethany Brookshire. Both Sylvia & Bethany crushed Neil. You would expect that Science, a nonprofit media organization, has social media protocols and metrics it uses to track its own blogs and excellent writers. Perhaps it could have used those to rank scientists and compared the results to the K-Index.

Image credit: LEGO Collectible Minifigures Series 11: Scientist (Flickr, CC BY-NC)

For all the flaws in Science’s approach, they’ve opened up a conversation about what should be measured and how. Dr. Hall initially thought his K-Index would highlight the contributions of women scientists like Rosalind Franklin and Ada Lovelace, who did not get much public attention in their day. You could tweak the K-Index to include other social media, adjust for time in the field (and Twitter start date) and other factors if your goal was to identify scientists who accomplish great things without notoriety. But is that the greatest goal for scientists? Or for the universities and foundations who support them?

There are scientists who use social media very deliberately. They have a strategy, as any good organization or business does. They’re crowdfunding research or recruiting volunteers. They’re having public conversations about their work with colleagues and potential graduate students. They’re actively building networks to discover new research partners. They use Twitter and other social media platforms as “an informal arena for the pre-review of works in progress.” These scientists should be evaluated by their success in hitting their strategic goals, not by follower numbers. I’m not saying every scientist needs to be on Twitter or Instagram or whatever the latest platform is, but it wouldn’t hurt scientists to understand social media better — including how their peers & funders will evaluate their performance — so they can make an informed decision to opt out.

A few months ago, I was at dinner with a very social-media-savvy biologist who’d just finished reviewing grants for the National Science Foundation. NSF asks scientists to explain how a project will have ‘broader impact.’ He said the vast majority of proposals filled in that section by saying “I’ll put the results on the internet” and then, magically, impact will happen. I know other colleagues who’ve put social media metrics into their tenure packages, which are then reviewed by senior members of their department who have little or no experience with social media, or who view it with some of the same skepticism as Dr. Hall. The latest generation of scientists grew up with social media woven into their lives. If the scientific community adopts an index that penalizes scientists for their online presence, we run the risk of discriminating against an entire cohort of new researchers.

Until scientists across disciplines reach better agreement on social media metrics – what to measure and why – good science won’t get out to the audience it deserves and scientists won’t be applauded for using social media wisely and well. Which is a waste.

This piece originally appeared on the author’s Medium blog and is reposted with permission.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Kate Wing spent twenty years working on ocean science & policy, including spending a season in Antarctica. She’s currently a U.S.-based consultant helping social sector groups think about strategy and impact. And fish.
