Paul J. Silvia is creeped out by the correlation between quality and quantity in academic publishing. Why do the people who publish the most also publish the work that has the greatest influence?
This article first appeared on the LSE’s Impact of Social Sciences blog
Gregory Feist—a distinguished creativity researcher at San Jose State University—is not a haunting man, but his research on scientific eminence creeps me out. One of his early papers—“Quantity, Quality, and Depth of Research as Influences on Scientific Eminence: Is Quantity Most Important?”—sends a chill through the hearts of thwarted writers who suspect they aren’t publishing enough. As you’d guess from the title, his research (on university biologists, chemists, and physicists) found that the mere quantity of publications was the largest predictor of eminence, assessed via citation rates, awards and distinctions, professional visibility, and peer evaluations of research contributions.
This is creepy stuff indeed, especially to those of us who are reading or writing blog posts on writing as part of a sophisticated and self-deceiving procrastination strategy. Before we pick at the nits on Feist’s study, it’s worth noting that Feist isn’t the only scientist to find this effect. A huge correlation between quality and quantity is found by nearly anyone who looks. This fact forms the basis for many theories of scientific impact and eminence, such as Dean Keith Simonton’s influential writings.
Why are quality and quantity related? Why do people who publish a lot of work also publish work that has a greater influence? The quality-quantity correlation, like any other correlation, can reflect many causal directions. Here are a few speculations:
- Writing both improves and creates ideas. Most people think of writing as a kind of transcription: we gather our facts, form our ideas, and hit the mental “Print” button to output what we know. Instead of being the endpoint of a knowledge-creation cycle, however, writing is often the beginning. Quality and quantity might be linked because the process of writing improves quality, by forcing us to confront and sharpen our ideas, and quantity, by sparking more ideas. In Writing to Learn, William Zinsser argued that writing was a way to create knowledge, a way to understand what we half-know. Grappling with our ideas makes them more sophisticated and eventually sparks some new ones. As anyone who writes regularly knows, writing about one thing leads to ideas for new projects. Robert Boice, in Professors as Writers, showed that forcing professors to write daily caused not only a sharp increase in text output but also a many-fold increase in the number of new ideas for writing. Writing begets good ideas, which beget more writing.
- Early quality or quantity attracts resources that foster both. Here in the United States—where we are free from the REF madness but afflicted by a few peculiar American maladies—early publishing success opens access to contexts, cultures, and institutions that foster more success. Someone with a hot early career might receive a federal training grant—thus jumpstarting a research program and reducing time spent teaching—and get hired at a resource-rich department with low teaching loads, energetic doctoral students, and a warm intellectual climate. In this case, the quality-quantity correlation is a spurious result of what creativity science calls a “Matthew Effect”—the rich get richer by virtue of access to training, resources, and opportunities.
- Quantity attracts positive attention from peers. Writing is hard, so people who do a lot of it stand out. Because quantity is noticed and valued, it can spark a cycle that leads to markers of quality, such as more citations and a stronger reputation. In a small subfield, one person can generate a notable proportion of publications. High quantity then attracts positive attention from peers, which leads them to read the papers, assign them to students, and cite them in their own work. Beyond attracting attention, quantity attracts citations through mere probability: more papers, however humble, mean more potential things for a domain of scholars to cite. Unlike the prior two, this explanation implies that quantity begets an impoverished kind of quality—the work merely gets more attention and citations, not genuine improvement.
These three paths strike me as reasonable possibilities—all three might have some merit, but the quantity-quality correlation still vexes me. Have some explanations of your own? The comments section—a warm home for creators and procrastinators alike—awaits.
Note: This article gives the views of the author, and not the position of the British Politics and Policy blog, nor of the London School of Economics. Please read our comments policy before posting.
Paul J. Silvia, an Associate Professor of Psychology at the University of North Carolina at Greensboro, is the author of How to Write a Lot: A Practical Guide to Productive Academic Writing.
There may be a correlation, but you are not measuring ‘quality’. Yes, these people who pump out crap get cited, invited to conferences, etc., but anyone who actually reads their work soon figures out how weak it is. This is particularly obvious if you consider their ‘half-life’ – a better indicator of quality/use.
I wonder whether the most productive and influential scholars are also most commonly those who occupy a mainstream/orthodox position in their fields. For example, if you want to publish a lot and be influential in American political science, are you better off doing difficult mathematical studies of political parties and elections that might interest GOP/Democrat politicians, or cogitating on Marx and Foucault with a view to maybe changing the world?
Practice makes perfect?
In addition to the above, it could be a simple matter of: the more you throw out there (quantity), the more likely it is that something sticks / that you get at least some hits for it (which then counts as quality, because all too often quality is, as you note, assessed “via citation rates, awards and distinctions, professional visibility, and peer evaluations of research contributions”).
Especially in the U.S. academy, I am consistently baffled by how much of the published work clearly relies on only a cursory reading of the literature – and continually reinvents the wheel, so to speak. A lot of what I would consider to be high-quality work gets missed that way…
Impact is not a good measure of quality. Short-term impact is an even worse measure of quality. A paper’s real quality often becomes apparent only many years after its publication. And many good papers never get recognized.
It’s a bit cheeky to immediately equate ‘greater visibility’ to ‘greater quality’, I think. At least in the fields I’m familiar with, one doesn’t have to look very far to find prolific and oft-cited fools.
This sounds very interesting, but why is there no recognition of networking and publication policies and their discriminatory practices?