Many scholars are encouraged to focus on the quality, not the quantity, of their publications, the rationale being that becoming too focused on productivity risks reducing the quality of one’s work. But is this actually the case? Peter van den Besselaar and Ulf Sandström have studied a large sample of researchers and found that, while results vary by field, there is a positive and stronger than linear relationship between productivity and quality (in terms of top cited papers). This same pattern appears to apply to institutions as well as individual researchers.
It seems obvious that science is about quality, not quantity. As a consequence, evaluation processes often do not take productivity into account, and there is now a trend to limit the number of publications per researcher under consideration. This way a panel can assess quality without being confronted with huge publication lists that would be impractical to evaluate. So scholars may be encouraged to focus on quality, not quantity. Indeed, from this perspective, quality and quantity are opposing characteristics of research activity, and focusing on quantity is not only a mistake but will even have perverse effects, as it will reduce the quality of the work. However, these ideas might be misguided. Our understanding, developed in a recent PLoS ONE paper, is that there is a positive and stronger than linear relationship between productivity and quality (in terms of top cited papers).
Figure 1 shows the possible relationships between “quality” and “quantity”: (i) there may be an “impact ceiling”, meaning that after a certain number of papers the number of highly cited papers stabilises; there could be (ii) diminishing; (iii) constant; or even (iv) increasing “impact returns” from productivity; and finally (v) “small may be beautiful”, in the sense that the number of high-impact papers decreases when authors become too productive, trading quality for quantity. The good thing is that this can be investigated empirically.
Figure 1: Possible views on the relationship between quantity and quality. Note: x-axis=#papers; y-axis=citation impact.
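To make the five candidate shapes concrete, the sketch below writes each one as a simple stylised function. The functional forms and parameter values are illustrative choices of our own, picked only to display the shapes; they are not the fitted models from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

papers = np.linspace(1, 50, 200)  # productivity: number of papers

# Stylised, illustrative functional forms for the five candidate patterns;
# parameter values are arbitrary and chosen only to show the shapes.
patterns = {
    "(i) impact ceiling":       10 * (1 - np.exp(-papers / 10)),   # flattens
    "(ii) diminishing returns": 2 * np.sqrt(papers),               # concave
    "(iii) constant returns":   0.5 * papers,                      # linear
    "(iv) increasing returns":  0.02 * papers ** 1.5,              # convex
    "(v) small is beautiful":   0.8 * papers - 0.015 * papers**2,  # rises, then falls
}

for label, impact in patterns.items():
    plt.plot(papers, impact, label=label)
plt.xlabel("#papers")
plt.ylabel("citation impact")
plt.legend()
plt.show()
```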
We present our findings by field, in the same form as the possible relationships above. We used a disambiguated dataset of more than 40,000 Swedish researchers (2008-2011). For each researcher we calculated the (field-normalised and fractionally counted) total number of publications, as well as the number of those publications that are among the 2.5% most highly cited papers. Figure 2 shows the results: in the natural sciences & engineering, in the life & medical sciences, and in psychology & education, we find the “increasing returns” pattern. In agriculture, biology, environmental studies & geography, the “constant” pattern dominates. In the social sciences, the (slightly) “diminishing returns” pattern seems to dominate; whereas the “impact ceiling” pattern was found for computer science & mathematics and for the humanities.
Figure 2: Actual relationship between quantity and quality based on Swedish data 2008-2011 per discipline area. Note: x-axis=#papers; y-axis=citation impact (left axis – natural and medical sciences; right axis – human and social sciences). Source: Web of Science Online.
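For readers who want the bookkeeping spelled out, here is a simplified sketch. The file and column names are hypothetical, and a top-2.5% cutoff within field and year stands in for the full field normalisation used in the paper:

```python
import pandas as pd

# Hypothetical input: one row per (paper, author) pair, with columns
# author_id, n_authors, field, year, citations.
pubs = pd.read_csv("publications.csv")

# Fractional counting: each author receives 1/n_authors credit per paper.
pubs["frac"] = 1.0 / pubs["n_authors"]

# Flag papers among the 2.5% most highly cited within their field and year
# (a crude stand-in for proper field normalisation).
cutoff = pubs.groupby(["field", "year"])["citations"].transform(
    lambda c: c.quantile(0.975)
)
pubs["top_frac"] = pubs["frac"] * (pubs["citations"] >= cutoff)

# Per-researcher totals: fractionalised output and fractionalised top output.
per_author = pubs.groupby("author_id").agg(
    output=("frac", "sum"),
    top_output=("top_frac", "sum"),
)
```

Plotting top_output against output per field then yields curves of the kind shown in Figure 2.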
Interestingly, the “small is beautiful” pattern does not occur. The question remains as to why the pattern in computer science and the humanities is so different from other fields. This is probably related to the nature of these fields, which, more than elsewhere, have several audiences other than their scientific peers. We would suggest that prolific authors in computer science and the humanities move towards writing more for stakeholders other than the peer audience once their scholarly output is above a (reasonably high) threshold. However, sorting this out would require further research.
How can we explain the patterns we find? First of all, research shows that scholars are generally highly motivated and committed, so there is no good reason to expect that they would try to maximise output at the expense of quality. Secondly, theories of scientific creativity suggest the opposite: the more creative someone is and the more new ideas they generate, the more potential and (as our results suggest) realised papers they have. Of course, not all ideas are good ideas, but the more ideas one has, the higher the probability that some are good. It is all about trying, experimenting, and learning: the more one does, the better one becomes, on average.
This size effect also holds at the organisational level, something we did not address in the PLoS ONE paper but tested after a reviewer claimed the opposite: “the CWTS Leiden Ranking, for example, does not show such a linear relationship as the authors claim between the number of papers produced by a university and the percentage of highly cited papers produced by that university”. We tested this using the Leiden Ranking 2014, which details the number of papers and the number of papers belonging to the top 10% highly cited papers for all universities included. As Figure 3 shows, the relationship between quantity and quality/impact also follows the “positive returns” pattern at the level of universities and public research organisations.
Figure 3: Relationship between number of papers per university and impact in terms of top 10% papers. Source: The Leiden Ranking 2014.
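The check itself is simple to reproduce in outline. A minimal sketch follows (hypothetical file and column names, with the ranking data obtained separately):

```python
import numpy as np
import pandas as pd

# Hypothetical columns: total papers (P) and top-10% highly cited papers
# (P_top10) per university, as listed in the Leiden Ranking 2014.
lr = pd.read_csv("leiden_ranking_2014.csv")

# Fit P_top10 ~ a * P^b on a log-log scale; an exponent b > 1 means that
# top-cited output grows faster than linearly with total output.
b, log_a = np.polyfit(np.log(lr["P"]), np.log(lr["P_top10"]), 1)
print(f"estimated exponent b = {b:.2f} (b > 1: stronger than linear returns)")
```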
In conclusion, what we have seen is that, overall, there is a positive quality return from increased productivity. Higher numbers of papers do result in even higher numbers of top cited papers. In other words, stimulating output matters, and would seem to be a positive and not a negative incentive for researchers. This being the case, output levels should be taken into account in the evaluation of researchers and organisations. This, of course, does not imply that output is the only relevant evaluation criterion; but that it is relevant seems indisputable.
This blog post is based on the authors’ article, “Quantity and/or Quality? The Importance of Publishing Many Papers”, published in PLoS ONE (DOI: 10.1371/journal.pone.0166149).
Featured image: Kansas Sunflowers by Ben Smith. This work is licensed under a CC BY 2.0 license.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
About the authors
Peter van den Besselaar is Professor of Organization Sciences at the Vrije Universiteit Amsterdam, the Netherlands, Faculty of Social Sciences. His research focuses on the organisation, governance and dynamics of science. In recent years, he has been engaged in an interdisciplinary team developing the SMS platform, an infrastructure for the integration and enrichment of heterogeneous data.
Ulf Sandström is Senior Lecturer at Linköping University and affiliated researcher at KTH Royal Institute of Technology in Stockholm. Currently he is working with the Gedii-team studying gender diversity in research teams. He focuses on questions related to research policy, often with bibliometric methods. Some of his latest reports and articles can be found on the forskningspolitik website.
Interesting article, like the analysis! But actually what this shows is that quality and quantity are related for most of the STEM disciplines, while the relationship is less clear for the social sciences (slightly diminishing returns) and very murky for the humanities and for computer science and mathematics. We need more solid analytical work like this. We also need to remember that what holds for STEM disciplines is rarely the case for HASS, and glossing over the differences doesn’t help people who fall outside of STEM.
I would like to point out the two most obvious problems with this post. The first, which is presumably not the authors’ fault but an editor’s, is the completely misleading first paragraph: it is simply wrong that “while results vary by field, there is a positive and stronger than linear relationship between productivity and quality”. What the data show is that such a strong positive relationship obtains unequivocally only in physics/chemistry/engineering and psychology/education. Admittedly I have not checked the regressions, only the graph, but it is up to the editor to support such a categorical statement. Nevertheless, it would not be possible to provide support for it, because the graph also shows unequivocally that in the case of the humanities and computer science/math there are inflection points in the curves. They thus cannot be farther from linear or stronger than linear. In fact, and now to the authors’ fault, both curves are obviously in what they term the “small is beautiful” class, and not in any way, as they state, in the “impact ceiling” class. The mistake is so glaring that one is left wondering how it is possible that referees and editors did not point it out.
In the paper we discuss why computer science and humanities show a different pattern. It is a ceiling, as the curve hardly goes down. Of course, in a short summary in this blog we can only address the main points.
I remember reading that many of the most cited articles in the medical literature could not be replicated. Is citation rate really a proper index of quality?
Does the paper control for researcher ability? If not, then it’s not surprising that there’s a positive correlation between quantity and quality, since most researchers strive for both.
Interesting study. But to plot impact as a function of the number of papers forces conclusions that may be unwarranted. If instead you swapped the axes to plot the number of papers as a function of impact, you would conclude that the number of papers goes down with increasing impact (for chem, phys, eng).
Now, chronologically this is of course wrong. Papers need to be published before they have impact. But chronological reasoning is already assuming mechanisms behind impact. And about paper writing, for that matter. Try this for size instead: You concentrate on high impact activities and then your productivity goes down. Completely consistent with data.
PS your data points need error bars (wagging finger).
Not sure how you ‘swap’ the axes. If one swaps the axes, then of course the positive relation remains: the higher the impact, the more publications. So the more you focus on quality, the more you publish (if swapping would make sense at all).
You swap axes by exchanging the abscissa and the ordinate. With a convex function y(x) you then get a concave function x(y), and your conclusion would be different, and to some extent reversed.
You should not confuse a positive gradient with a positive curvature; that is what you do in your reply versus what you do in the original post.
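A concrete example of the gradient/curvature point (illustrative, not from the original exchange):

$$y(x) = x^2 \quad (x > 0) \qquad \Longrightarrow \qquad x(y) = \sqrt{y}.$$

Here $y(x)$ is increasing and convex, while the inverse $x(y)$ is increasing but concave: the positive gradient (more papers go with more impact) survives the axis swap, but the positive curvature (the stronger than linear “increasing returns”) does not.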
It seems to me that this analysis does nothing to account for how long a researcher has been producing research. Researchers with few papers are most likely those who have been in the field for a short time. It makes sense that their papers would have fewer citations; they are not as experienced or as knowledgeable. They might also take longer to get papers published. In contrast, those with more papers have been doing research in their area for longer, and so they should understand the issues and theory better. They should also have a better understanding of how to get papers published. This could explain their results without addressing the quantity versus quality argument.
This paper does not seem to address this, and it seems like a major flaw in their analysis. Perhaps they should introduce controls for how long researchers have been practicing, or look at publications per year.
What would you conclude if you swapped the axes and plotted the number of publications vs. impact? Then you could conclude that writing higher-impact papers makes you produce less (at least for phys, chem and eng science).