October 17th, 2012

There’s something fishy about citations: We need a method of assessing the support of research if we want to change the ‘publish or perish’ culture

Current citation biases give us only the narrowest slice of scientific support. Bradley Voytek writes that while brainSCANr may have flaws, it gives the reader a quick indication of how well-supported an academic argument is and could provide a new way of thinking about citations.

Science has a lot of problems. Or rather, scientometrics has a lot of problems. Scientific careers are built on the publish-or-perish foundation of citation counts. Journals are ranked by impact factors. There are serious problems with this system, and many ideas have been offered on how to change it, but so far little has actually changed. Many journals, including the PLoS and Frontiers series, are making efforts to bring about change, but they are mostly taking a social tactic: ranking and commenting on articles. I believe these methods are treating the symptom, not the problem.

Publish or perish reigns because our work needs to be cited for scientists to gain recognition. Impact factors are based on these citation counts. Professorships are given and tenure awarded to those who publish in high-ranking journals. However, citations are biased, and critical citations are often simply ignored.

Bear with me here for a minute. How do you spell “fish”? “G-h” sounds like “f”, as in “laugh”. “O” sounds like “i”, as in “women”. “T-i” sounds like “sh”, as in “scientific citations”. This little linguistic quirk is often incorrectly attributed to George Bernard Shaw; it’s used to highlight the strange and inconsistent pronunciations found in English. English spelling is selective: you can find many examples that look strange but support your spelling argument. Just like scientific citations.

There are a lot of strange things in the peer-reviewed scientific literature. Currently, PubMed contains more than 18 million peer-reviewed articles with approximately 40,000-50,000 more added monthly. Navigating this literature is a crazy mess. When my wife Jessica and I created brainSCANr and its associated peer-reviewed publication in the Journal of Neuroscience Methods (“Automated cognome construction and semi-automated hypothesis generation”), our goal was to simplify complex neuroscience data. But we think we can do more with this system.

At best, as scientists we have to be highly selective about which studies we cite in our papers, because many journals limit our bibliographies to 30-50 references. At worst, we’re very biased and selectively myopic. On the flip side, across these 18+ million PubMed articles, a scientist can probably find at least one peer-reviewed manuscript that supports any given statement, no matter how ridiculous.

What we need is a way to quickly assess the strength of support for a statement, not an author’s biased account of the literature. By changing the way we cite support for our statements within our manuscripts, we can begin to address problems with impact factors, publish or perish, and other scientometric downfalls.

brainSCANr is but a first step in what we hope will be a larger project to address what we believe is the core issue with scientific publishing: manuscript citation methods.

We argue that, by extending the methods we present in brainSCANr to find relationships between topics, we can adopt an entirely new citation method. Rather than citing only a few articles to support any given statement made in a manuscript, we can create a link to the entire corpus of scientific research that supports that statement. Instead of a superscript number indicating a specific citation within a manuscript, any statement requiring support would be associated with a superscript number that represents the strength of support that statement has based upon the entire literature.

For example, “working memory processes are supported by the prefrontal cortex” gets strong support, and a link to PubMed showing the articles that support that statement. Another statement, “prefrontal cortex supports breathing”, also gets a link, but its superscript number would be far smaller: it has far less scientific support. (The method for extracting these numbers uses a simple co-occurrence algorithm outlined in the brainSCANr paper.)
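To make this concrete, here is a minimal sketch of how such a support number might be computed. This is an illustrative assumption, not the actual brainSCANr algorithm (which is defined in the paper): it scores a pair of terms with a Jaccard-style normalized co-occurrence over hypothetical PubMed hit counts.

```python
# Illustrative sketch only: a Jaccard-style co-occurrence score between two
# search terms, given PubMed hit counts. The real brainSCANr metric is the
# one described in the paper; these counts are made up for demonstration.

def cooccurrence_score(count_a: int, count_b: int, count_both: int) -> float:
    """Fraction of articles mentioning either term that mention both."""
    union = count_a + count_b - count_both
    return count_both / union if union else 0.0

# "prefrontal cortex" and "working memory" co-occur often (hypothetical counts)...
print(cooccurrence_score(count_a=90_000, count_b=60_000, count_both=12_000))  # ~0.087

# ...while "prefrontal cortex" and "breathing" rarely co-occur.
print(cooccurrence_score(count_a=90_000, count_b=150_000, count_both=400))    # ~0.002
```

A statement’s superscript could then display this score (or a raw count of co-occurring articles) and link out to the full PubMed result set behind it.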

This citation method removes citation biases. It gives the reader a quick indication of how well-supported an argument is. If I’m reading a paper and I see a number indicating strong support for a statement, I might not bother to look it up, given that the scientific consensus is relatively strong. But if I see an author make a statement with low support (that is, a weak scientific consensus), then I might want to be a bit more sceptical about the claim.

We live in a world where the entirety of scientific knowledge is easily available to us. Why aren’t we leveraging these data in our effort to uncover truth? Why are we limiting ourselves to a method of citation that has not substantially changed since the invention of the book? The system as it is currently instantiated is incomplete, and we can do more.

This method may have flaws, but it would be much harder to game than the current citation biases that give us only the narrowest slice of scientific support. It shifts the endeavour of science entirely away from the numbers and rankings of journals and authors (a weak system for science, to say the least!) and towards a system wherein research is about making truth statements. Which is what science should be about.

Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.

This post was originally published on Bradley’s blog, Oscillatory Thoughts, and is reproduced here with permission.

About the author:
Bradley Voytek, PhD is an NIH-funded neuroscience researcher working with Dr. Adam Gazzaley at UCSF. Brad makes use of big data, brain-computer interfacing, and machine learning to figure out cognition and is an avid science teacher, outreach advocate, and world zombie brain expert. He runs the blog Oscillatory Thoughts, tweets at @bradleyvoytek, and co-created brainSCANr.com with his wife Jessica Bolger Voytek.
