
Taster

December 11th, 2019

Blind Luck – Could lotteries be a more efficient mechanism for allocating research funds than peer review?


Estimated reading time: 5 minutes


Peer review is integral to the award of funds for academic research. However, as an increasingly large number of researchers compete for limited funding, much of it ends up being awarded on the basis of marginal assessments of the quality of different proposals. In this post, Lambros Roumbanis argues that randomly awarding research funding via lotteries presents a more rational, efficient and, most importantly, unbiased means of distributing research funds.

A survey conducted by Publons, described by James Hardcastle in his recent blogpost, found that 78% of 4,700 researchers with experience as reviewers of scientific proposals think peer review is the best way to decide how to allocate research grants. However, the same study also revealed the researchers’ concerns that the review system is overly time-consuming and lacking in transparency. These concerns come as no surprise. Over the last two decades, several studies have pointed out different types of problems associated with grant peer review. This has led commentators to argue for a new funding system for distributing scarce resources that is more rational, efficient and impartial.

The question is: are there any plausible alternatives that can do the job better? There might be. What would happen, say, if research grants were allocated by a lottery? For the vast majority of researchers, politicians and ordinary citizens, such an idea would probably seem too radical, even absurd. The very idea of implementing a lottery implies taking scientific expertise and academic judgement out of play and relying instead on a random mechanism, which violates the common conception of how modern science should be legitimately organised. The risk, some researchers argue, is that a larger share of weak and unworthy proposals would receive funding if peer review were removed as a quality control. It is true that a lottery not only eliminates unwanted biases and conflicts of interest; it is also blind to obvious differences in scientific quality that could be spotted by an experienced reviewer. But the fundamental crux of the matter is this: what exactly are these scientific qualities, and how are they recognised and evaluated in the process of peer review? How can we identify and agree on which of two equally good projects should be funded before they have been carried out?

Today, competition over funding has intensified into a situation better characterised as one of “hyper-competition”, in which large numbers of high-quality proposals are rejected. In many OECD countries, research councils and private funding agencies have acceptance rates of between 5% and 20%. What is a fair and impartial decision when resources are this scarce? Or, expressed differently: what are the actual premises for making important decisions of this type that shape the future of science? One thing is for sure: uncertainty, bias and randomness are to a certain extent embedded in the final outcome. Experimental studies have shown that applications ranked highly by one assessment panel would have gone unfunded had another group assessed them. Changes in the composition of reviewers can indeed have dramatic effects, because differences in expertise and subjective taste affect how a group reaches consensus during its negotiations. This is a well-established fact. Furthermore, disagreements within a panel very often destroy the chances of genuinely innovative and risky projects getting funding. Unwanted implicit bias about gender, ethnicity or social status can also creep into the review process. A lottery would solve several of the problems that come with the peer review system by making the selection method impartial. It would also increase the heterogeneity of funded projects through the diffusion effects of chance. In other words, a lottery could allocate grants in a more dynamic way, increasing the likelihood of accepting unorthodox and risky projects.

A lottery would also be much cheaper and would save a substantial amount of time. Today, researchers spend more and more of their precious time writing applications; time they could have spent doing research. In a 2010 report, the Royal Swedish Academy of Sciences estimated the time spent writing the approximately 3,500 research proposals submitted to the Swedish Research Council in 2008. Fewer than one in four applications was approved, meaning that about sixty working years were directly wasted, or at least did not lead to any concrete results. This did not even include the time spent by reviewers assessing the proposals. Since then, the situation has hardly improved; quite the opposite. This waste of time is counterproductive for science and society. With a lottery, applicants would need to submit only a very brief sketch describing their basic idea. Given that peer review already seems to have an unavoidable lottery dimension, it might be more rational and efficient to use a real lottery. And this leads us to another argument, namely that using a lottery could break up the disproportionate amount of power and influence that a small group of reviewers can have over the future of a large number of researchers.
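A rough, purely illustrative back-of-envelope check of these figures is sketched below; the per-proposal writing time is an implied average derived from the numbers cited above, and the assumption of roughly 46 working weeks per year is mine, not the report’s.

```python
# Back-of-envelope check of the figures cited above (assumptions hedged in comments).
proposals = 3500                 # approximate number of proposals submitted in 2008
success_rate = 0.25              # "less than one in four" approved
wasted_working_years = 60        # estimate cited from the 2010 report
working_weeks_per_year = 46      # assumed value, not stated in the post

rejected = proposals * (1 - success_rate)   # roughly 2,625 rejected proposals
weeks_per_rejected = wasted_working_years * working_weeks_per_year / rejected

print(f"Rejected proposals: {rejected:.0f}")
print(f"Implied average writing time per rejected proposal: ~{weeks_per_rejected:.1f} working weeks")
```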

By way of conclusion, it is worth noting that there already exist a few rare examples of funding agencies that have begun to allocate research grants with a modified lottery (in which an initial, basic screening is conducted before the draw). These include the Health Research Council of New Zealand and the Volkswagen Foundation in Germany. Although these are relatively small-scale lotteries, these agencies have taken an important first step that other national research councils and private funding agencies might consider taking as well. A joint effort to test the outcomes of lotteries, and to compare them with the outcomes of ordinary peer review, would be of great value to the academic community. If lotteries produce no substantial differences, or even an improvement (as simulation studies have suggested), then more large-scale lotteries should be organised in the future.
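For illustration, the general mechanism of such a modified lottery is simple. The sketch below is a hypothetical, minimal Python implementation assuming a pass/fail screening stage followed by a uniform random draw among eligible proposals; it is not the actual procedure of any of the agencies mentioned above, and all names and parameters are invented for the example.

```python
import random

def modified_lottery(proposals, passes_screening, n_grants, seed=None):
    """Allocate grants by a screened lottery (illustrative sketch only).

    proposals: list of proposal identifiers
    passes_screening: function returning True if a proposal clears the
        initial, basic screening stage
    n_grants: number of grants available
    """
    rng = random.Random(seed)
    eligible = [p for p in proposals if passes_screening(p)]
    # If the eligible pool is smaller than the number of grants, fund them all.
    if len(eligible) <= n_grants:
        return eligible
    # Otherwise, draw the winners uniformly at random from the eligible pool.
    return rng.sample(eligible, n_grants)

# Hypothetical usage: 200 submissions, 30 grants, and a screening rule that
# passes roughly 80% of proposals (a stand-in for a real eligibility check).
submissions = [f"proposal-{i}" for i in range(200)]
screen = lambda p: int(p.split("-")[1]) % 10 < 8
funded = modified_lottery(submissions, screen, n_grants=30, seed=42)
print(len(funded), "proposals funded by lottery")
```

The screening step preserves a minimal quality threshold, while the random draw removes the fine-grained, and contested, ranking of the proposals that cleared it.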

The important thing to remember in this context is that, at the end of the day, funding is simply a matter of giving researchers an opportunity to deepen and test their ideas. Even under-developed proposals, or those that are difficult to assess, might hide great potential. But, maybe more importantly, proposal competitions are an inevitably inefficient method of funding science when the number of grants is smaller than the number of meritorious proposals. I would like to end this blog post with the words of Daniel H. Osmond, who wrote: “the word ’competition’ has less meaning than is commonly supposed. It smacks of the quest for excellence, but may militate against it. Those who conduct ’competitions’ must be more humble and realistic about the validity of what they do. In most cases they are in fact deciding that one shade of blue is competitively superior to another shade of blue, which is, of course, nonsense.”

 

This post draws on the author’s article, Peer Review or Lottery? A Critical Analysis of Two Different Forms of Decision-making Mechanisms for Allocation of Research Grants, published in Science, Technology, and Human Values.

About the author

Lambros Roumbanis is an Associate Professor of Sociology at SCORE, the Stockholm Centre for Organizational Research.

 

Note: This article gives the personal view of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

Featured image credit: Donations_are_appreciated, via Pixabay (licensed under a CC0 licence).

 

 

Print Friendly, PDF & Email


Posted In: Research funding | Research policy
