Gerald Schweiger

Adrian Barnett

Stijn Conix

June 9th, 2025

Funding competition sabotages science 

Competition in the form of research proposals can seem like a reasonable, fair and efficient way to decide which scientists and projects receive funding. In fact, argue Gerald Schweiger, Adrian Barnett and Stijn Conix, the way funding competitions are structured is inefficient, unfair and promotes conformism. Allocating funding through lottery processes could lead to better science than the current model.


Imagine you are part of a group facing a problem that only one of you can solve. What would you want that one person to spend their time on? Most likely, you would act like the manager of an elite athlete: removing distractions and organising their environment so they can focus entirely on that one thing.

On a larger scale, these problems become the grand challenges of our time, and it is scientists who strive to tackle them. Do we treat our scientists like top athletes, and is that what we, the taxpayers, as their primary sponsors, should aim for?

It seems common sense to aim for the best return: tax money in; knowledge, technology and innovation out. Since there are far more scientists with ideas than available resources, we must determine how to allocate them. This is where competition comes in. It promises a process that is efficient, fair and reliable, one that rewards truly deserving scientists. But does it succeed on its own terms? We argue it does not.

Competition leads to time wasting

Scientists compete for funding by submitting their ideas as research proposals. However, this process comes with significant opportunity costs, as participants must invest a lot of time and resources that could otherwise be spent advancing science.

Preparing a proposal takes up to 50 working days. Given the low success rates, up to 50% of the total grant money can be spent on covering the costs of all submitted proposals. Accounting for decision-making, administrative costs, and project management, this figure grows even higher. This creates a situation where senior scientists spend around 45% of their time on administrative activities relating to the acquisition and management of funding.
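To see how quickly these costs add up, here is a rough back-of-the-envelope sketch. The success rate, daily staff cost and grant size below are our own illustrative assumptions, not figures from the studies cited above; only the 50 working days per proposal comes from the article.

# Back-of-the-envelope sketch of proposal-writing costs per funded grant.
# All parameter values are illustrative assumptions, except the 50 working
# days per proposal mentioned in the article.

success_rate = 0.20          # assumed: one in five proposals is funded
days_per_proposal = 50       # upper bound cited in the article
daily_cost = 600             # assumed daily cost of a senior scientist (EUR)
average_grant = 300_000      # assumed average grant size (EUR)

proposals_per_grant = 1 / success_rate
writing_cost = proposals_per_grant * days_per_proposal * daily_cost
share_of_grant = writing_cost / average_grant

print(f"Proposals written per funded grant: {proposals_per_grant:.0f}")
print(f"Writing cost per funded grant: EUR {writing_cost:,.0f}")
print(f"Share of grant value consumed by writing: {share_of_grant:.0%}")

With these assumptions, proposal writing alone consumes roughly half of each grant's value, before reviewing, administration and project management are counted.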

Competition leads to unfairness

Competition is falling well short of its promise to allocate resources efficiently, but is it at least fair and reliable? The decision-making process, peer review by other scientists, is only slightly better than rolling dice, and there is little to no correlation between the rankings in the decision process and subsequent scientific success. That should not surprise us, however, as predicting the potential benefits of research is nigh-on impossible. Who could have foreseen that a funding application titled “Why do jellyfish glow?” would lead to discoveries that revolutionised medicine and won the scientists a Nobel Prize?

Competition leads to conformity

Competitive systems also tend to discourage high-risk research, thereby stifling innovation. Instead of fostering groundbreaking ideas, they create incentives to propose predictable, safe, and often mediocre projects. But the next obvious step is not enough. Nobel Prize-winning physicist Anton Zeilinger puts it this way: “It’s about discovering the unusual and staying open to the unexpected. It’s not about the next step that can be clearly defined—that would be too little”. Zeilinger highlights a fundamental aspect of research: curiosity-driven science.

Curiosity-driven science is the lifeblood of scientific and technological progress. Funding systems should encourage scientists to pursue it, not discourage them through the risk-averse tendencies of peer review.

Competition leads to “winner takes all” outcomes

The high stakes of funding competition, where a career can be ended by a single decision, create enormous stress and drive many talented young scientists to abandon research altogether. Competitive funding processes create distinct groups of winners and losers, with the Matthew effect reinforcing the gap between the two. Low success rates and reliance on funding also create incentives for questionable research practices, such as salami slicing (dividing one substantial study into multiple smaller publications to increase publication counts) in order to artificially inflate CVs.

Alternatives to the current system

Science is a complex system, and the efficient distribution of scarce resources is a challenging question. But we have one thing that has proven to work time and again in such situations: the scientific method. More experiments, fewer platitudes; more data, less dogma.

In recent years, many incremental proposals have been made to improve competitive systems, alongside radically new approaches that address the seemingly inherent problem of an expensive and unreliable selection process. For example, the Volkswagen Foundation and the Swiss National Science Foundation use lottery procedures in the final decision-making stage, after peer review, to allocate funding. These tiebreaker lotteries aim to reduce biases in the selection process and improve fairness among applicants who make it to the final round. However, the problem of high opportunity costs remains.

To address such issues, the concept of an initial lottery (Lottery-First) has been proposed, in which a random draw determines who is eligible to submit a full application for later peer review. Unlike tiebreaker lotteries, Lottery-First has the potential to greatly lower opportunity costs. A pilot investigation in Australia asked a large number of scientists to name up to 10 scientists in the country whom they thought deserved funding. The study showed that this democratic voting process reduced time requirements compared with a traditional grant review system.
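For readers who want the mechanics spelled out, the sketch below contrasts the two lottery models in simplified form. It is our own illustration, not the actual procedure of either funder, and the function names and parameters are hypothetical.

import random

def tiebreaker_lottery(applicants, n_funded, review_score):
    # Full peer review first: rank everyone, then break ties at the funding
    # line with a random draw among equally ranked applicants.
    ranked = sorted(applicants, key=review_score, reverse=True)
    cutoff = review_score(ranked[n_funded - 1])
    clear_winners = [a for a in ranked if review_score(a) > cutoff]
    tied = [a for a in ranked if review_score(a) == cutoff]
    return clear_winners + random.sample(tied, n_funded - len(clear_winners))

def lottery_first(applicants, n_invited):
    # Random draw first: only the selected subset writes a full proposal
    # and goes through peer review at all.
    return random.sample(applicants, n_invited)

The difference in opportunity costs follows directly: in the tiebreaker model every applicant still writes a full proposal and is reviewed, whereas under Lottery-First those costs are incurred only by the randomly invited subset.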

These experiments challenge the idea that elaborate, labour-intensive selection procedures lead to better outcomes. If we want to get the most bang for our buck, we should reduce the burden that the current methods of funding distribution place on scientists.

As rational taxpayers aiming to maximise our return, we should treat scientists more like elite athletes: let them focus on what they do best—science.


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: roibu on Shutterstock.


About the authors

Gerald Schweiger

Gerald Schweiger is a Professor at TU Wien. He works on intelligent systems and "research of research".

Adrian Barnett

Adrian Barnett is a Professor of Statistics at Queensland University of Technology. He is the president of the Association for Interdisciplinary Meta-Research and Open Science, whose mission is to improve research quality.

Stijn Conix

Stijn Conix is a philosopher of science at the Université Catholique de Louvain–La Neuve. His research addresses how social and institutional structures shape and sometimes undermine the effectiveness of science.

Posted In: Research funding
