April 12th, 2016

Fundable, but not funded: How can research funders ensure ‘unlucky’ applications are handled more appropriately?

Estimated reading time: 5 minutes

Having a funding application rejected does not necessarily mean the research is unsupportable by funders; it may simply have been unlucky. There is a significant risk to wider society in the rejection of unlucky but otherwise sound applications: good ideas may slip through the cracks, or be re-worked and dulled down to sound more likely to provide reliable results. Oli Preston looks at how funders could better address the burden of receiving more high-quality applications than they can afford to fund.

Ask most researchers whether they’ve been unfairly rejected in a funding application and the answer would probably be a resounding ‘yes’. Researchers are understandably directing their attention towards the mechanisms of funding and there has been recent criticism of the processes used to allocate research grants.

Not all funders award their grants in the same way, but the basic structure is this:

  1. Applications are discussed by review groups, who select suitable projects to invite to interview. Applications may be scored at this stage to help with administration.
  2. Peer reviews are completed for each application. These consist of in-depth analyses of each proposal and a summary of whether the project is deemed supportable, and are attached to the application as supporting evidence.
  3. Funding committees receive a written summary from review groups, peer review forms, and full applications. Applicants are interviewed and answer questions related to their research.
  4. Committees then give a ‘yes’, ‘no’, or ‘requires further discussion’ rating.
  5. Applications requiring further discussion are examined and numerical scoring may again be used to help rank applications as ‘yes’ or ‘no’.
Image credit: Pixabay Public Domain

A news item in Nature in 2015 highlighted that the peer review scores used by the Medical Research Council (MRC) to select grants did not correlate with final funding decisions, pointing to the ‘black box’ nature of some funders’ processes. The article may have downplayed some of the complexities of awarding funding, such as the additional due diligence that committees may conduct on grant applicants and their research teams; however, it did highlight several limitations in the system. Indeed, much of the literature in this space suggests that peer-reviewed funding and typical funding processes may have shortcomings that need addressing.

The central criticism of current funding models is this: the people deciding between two equally credible applications cannot accurately say which project will have the greater impact.

There is strong evidence that peer reviewers disagree on the best applications, and cannot rank grant applications based on which will have the greatest eventual output (see Danthi et al 2014 and Kaltman et al 2014). In fact, other indicators, such as previous publication record, have served as a better marker of future productivity than reviewer scores.

The consensus seems to be that peer review is extremely capable of differentiating between good and bad applications, as only field experts could be; but where there are more good applications than available funding, peer reviewers are unable to accurately predict which applications will be the most successful. It could therefore be argued that from the peer review stage (stage 2, above) onwards, funders are not able to accurately pick winners.

The Threshold Theory of intelligence describes a similar phenomenon. By this account, once a person’s IQ passes a certain threshold, it no longer correlates with future individual success, as made famous by various critiques of Lewis Terman’s longitudinal study of child geniuses. Instead, as long as IQ is above the threshold, other variables determine future accomplishment. Similarly, in funding decisions, once applications are deemed supportable, the studies suggest that funders’ scoring decisions do not correlate with eventual research impact.

Unfortunately, few funders have the capacity to fund every high-quality application they receive, so the process focuses on whittling applications down to a number the funder can afford. It is this group of applicants, whose proposals are deemed supportable science but cannot be funded within the available budget, who deserve attention.

This group of applications could be more accurately described as ‘unlucky’. Applicants who are not selected may be rejected outright or told to reapply under a different guise; many receive no feedback at all. But if the selection process cannot predict which supportable applications would have been successful, why should these people face rejection?

There is a significant risk in the rejection of unlucky applications: good ideas may slip through the cracks, or be re-worked and dulled down to sound more likely to provide reliable results. This practice contributes to the perceived mediocrity of grant applications reported in the literature: risk-aversion and the fear of funding a dud project could be the very thing standing in the way of the next breakthrough in science.

Unlucky applications should instead be treated as ‘unsuccessful at this time’. They could then be held over for the next round of funding allocation. Furthermore, if this pool of fundable but unfunded applications were made available to other funding bodies, researchers might attain funding for acknowledged good ideas without ever having been rejected, reducing the pressure on researchers to create new, passable applications.

For funders, the evidence poses several key questions:

  1. How can we ensure ‘unlucky’ applications are handled more appropriately?
  2. Should funders be more honest about their financial capacity to fund, and look towards publicising supportable but unfunded grants to other funding bodies?
  3. Does this idea support the notion of publicly available grant applications?
  4. How can we address the limitations of peer review in funding decisions and look towards more objective ways to allocate funding? Or is peer review the best available response to the issues faced by funders with shrinking budgets?

In defence of the status quo, the allocation of funding is an incredibly complex process and it is unlikely that any quick fix will solve these problems. However, it may be that the lack of transparency in the decision-making process is leading researchers to misunderstand it and to focus excessively on certain metrics, such as numerical peer review scores.

Without this transparency surrounding decisions, grant applicants can only assume that there is no sound evidence against their application, or that the decision was made subjectively. If funders truly believe in their processes, there should be no issue with increasing transparency.

Ironically, many funders are concerned about the attrition of research talent, yet researchers cite the tough funding environment as a reason for leaving research. So how could funders be convinced to get on board? Unfortunately, there is little evidence to suggest how alternative approaches might perform better than current practices.

What is needed is a bold funder, one who can see the potential benefits of improving funding allocation and is willing to take a risk and trial an alternative system. One such alternative is the ‘Powerball approach’ to funding suggested by Fang and Casadevall, which advocates allocating funds by lottery among applications deemed ‘meritorious’.

To date, I am not aware of any funder adopting this approach. One way of testing the concept would be through a randomised control trial, whereby a group of supportable applications to a funder is randomly assigned to one of two arms, each with a limited pot of funds. The control-group applications are whittled down through traditional peer review and committee processes, while the test-group applications are selected at random. The outputs of the two groups would then be compared, using a toolkit of research evaluation measures, to assess whether the lottery is a fairer way of attaining similar results.
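To make the design concrete, here is a minimal sketch in Python of the assignment and selection step only. It is an illustration under assumptions, not any funder’s actual process: the application fields, pot sizes, and scoring are hypothetical, the control arm funds the highest peer-review scores within its pot, and the test arm funds a random draw within the same pot.

```python
import random
from dataclasses import dataclass

@dataclass
class Application:
    """A hypothetical supportable application (all fields illustrative)."""
    applicant: str
    peer_review_score: float  # hypothetical committee/peer review score
    requested_funds: float    # amount of funding requested

def fund_within_pot(applications, pot):
    """Fund applications in the given order until the pot is exhausted."""
    funded, remaining = [], pot
    for app in applications:
        if app.requested_funds <= remaining:
            funded.append(app)
            remaining -= app.requested_funds
    return funded

def run_trial(supportable, pot_per_arm, seed=0):
    """Randomly split supportable applications into two arms, then fund the
    control arm by peer review ranking and the test arm by lottery."""
    rng = random.Random(seed)
    apps = supportable[:]
    rng.shuffle(apps)
    half = len(apps) // 2
    control, test = apps[:half], apps[half:]

    # Control arm: traditional ranking by peer review score (highest first).
    control_funded = fund_within_pot(
        sorted(control, key=lambda a: a.peer_review_score, reverse=True),
        pot_per_arm,
    )

    # Test arm: 'Powerball'-style lottery among meritorious applications.
    lottery_order = test[:]
    rng.shuffle(lottery_order)
    test_funded = fund_within_pot(lottery_order, pot_per_arm)

    return control_funded, test_funded
```

The sketch stops where the trial’s real work begins: the two funded portfolios would then be tracked over subsequent years with whatever research evaluation toolkit the funder already uses, and only that comparison could show whether the lottery attains similar results more fairly.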

This proposed trial has its limitations: comparing one research programme to another is by no means simple, and the outcomes and impact of research can take many years to emerge, meaning that the evidence base to support a change could be slow to materialise.

But regardless of the barriers to running a trial like this, the benefits for researchers could be great. Instead of a generation of researchers who are sceptical of funding and potentially deterred from continuing their careers, we could have researchers who accept funding decisions willingly, and who spend less time re-writing their grant applications and more time conducting research.

Note: This article gives the views of the author, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Oli Preston is an Impact and Evaluation Officer at the British Heart Foundation and formerly worked in the Evaluation team at the Wellcome Trust. He has worked across various areas of evaluation in healthcare and biomedical research, including understanding scientific careers, reporting on the impact of research, and assessing healthcare interventions. You can follow Oli on Twitter (@OJPreston).
