The Conversation
Tony Blakely, Professor of Epidemiology, Population Interventions Unit, Centre for Epidemiology and Biostatistics, Melbourne School of Population and Global Health, The University of Melbourne

More money and smarter choices: how to fix Australia's broken NHMRC medical research funding system


Most health research in Australia is funded by the National Health and Medical Research Council (NHMRC), which distributes around $800 million each year through competitive grant schemes. A further $650 million a year is distributed through the Medical Research Future Fund, but that fund focuses more on big-picture “missions” than on researcher-initiated projects.

Ten years ago, around 20% of applications for NHMRC funding were successful. Now, only about 10–15% are approved.

Over the same ten-year period, NHMRC funding has stayed flat while prices and population have increased. In inflation-adjusted and per capita terms, the NHMRC funding available has fallen by 30%.
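
To see how a flat budget turns into a roughly 30% real-terms cut, a back-of-the-envelope calculation helps. The inflation and population growth figures below are illustrative assumptions chosen to show the mechanism, not official statistics:

```python
# Illustrative arithmetic only: the inflation and population growth figures
# are assumptions chosen to show the mechanism, not official statistics.
nominal_funding = 800e6        # NHMRC budget held flat in nominal dollars
cumulative_inflation = 1.25    # assume ~25% price growth over the decade
population_growth = 1.15       # assume ~15% population growth over the decade

real_per_capita_change = 1 / (cumulative_inflation * population_growth) - 1
print(f"Change in real per-capita funding: {real_per_capita_change:.0%}")  # roughly -30%
```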

As growing numbers of researchers compete for dwindling real NHMRC funding, research risks becoming “a high-status gig economy”. To fix it, we need to spend more on research – and we need to spend it smarter.

More funding

To keep pace with other countries, and to keep health research a viable career, Australia first needs to increase the total amount of research funding.

Between 2008 and 2010, Australia matched the average among OECD countries of investing 2.2% of GDP in research and development. More recently, Australia’s spending has fallen to 1.8%, while the OECD average has risen to 2.7%.


Read more: COVID has left Australia's biomedical research sector gasping for air


When as few as one in ten applications is funded, there is a big element of chance in who succeeds.

Think of it like this: applications are ranked from best to worst, then funded from the top down. If a successful application’s ranking is within, say, five percentage points of the funding cut-off, it might well have missed out had the assessment process been run again – the process is always somewhat subjective and will never produce exactly the same results twice.

So 5% of all applications are “lucky” to get funding. When only 10% of applications are funded, that means half of the successful ones are lucky. But if there is more money to go around and 20% of applicants are funded, the lucky 5% make up only a quarter of the successful applicants.

This is a simplistic explanation, but you can see that the lower the percentage of grants funded, the more of a lottery it becomes.
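
The arithmetic can be sketched in a few lines of Python. The five-percentage-point “lucky” band is the assumption made above, not an NHMRC figure:

```python
# Sketch of the luck argument: assume applications within 5 percentage points
# of the funding cut-off could have landed on either side of it on a re-run.
lucky_band = 0.05  # share of all applications sitting just above the cut-off (assumed)

for funded_rate in (0.10, 0.20):
    lucky_share = lucky_band / funded_rate
    print(f"Funded rate {funded_rate:.0%}: {lucky_share:.0%} of funded applications are 'lucky'")
# Funded rate 10%: 50% of funded applications are 'lucky'
# Funded rate 20%: 25% of funded applications are 'lucky'
```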

This growing element of “luck” is demoralising for Australia’s research workforce, contributing to attrition from academia and a brain drain.

The ‘application-centric’ model

As well as increasing total funding, we need to look at how the NHMRC allocates these precious funds.

In the past five years, the NHMRC has moved to a system called “application-centric” funding. Around five reviewers are selected for each application and asked to score it independently.

There are usually no panels to discuss and score applications, as there used to be.

The advantages of application-centric assessment include (hopefully) getting the best experts to assess a particular grant, and a lighter logistical load for the NHMRC (convening panels is hard, time-consuming work).

However, application-centric assessment has disadvantages.

First, assessor reviews are not subject to any scrutiny. In a panel system, differences of opinion and errors can be managed through discussion.

Second, many assessors will be working in a “grey zone”. If you are expert in the area of a proposal, and not already working with the applicants, you are likely to be competing with them for funding. This may result in unconscious bias or even deliberate manipulation of scores.

And third, there is simple “noise”. Imagine each score an assessor gives is made up of two components: the “true score” an application would receive on some unobservable gold standard assessment, plus or minus some “noise” or random error. That noise is probably half or more of the current variation between assessor scores.
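
A toy simulation makes this concrete. The numbers below are assumptions chosen so that noise makes up about half of the variation in observed scores, in line with the claim above; it is a sketch, not a model fitted to actual NHMRC data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_apps, n_assessors = 200, 5

true_score = rng.normal(0, 1, size=n_apps)            # unobservable "true score"
noise = rng.normal(0, 1, size=(n_apps, n_assessors))  # random error, same spread (assumed)
observed = true_score[:, None] + noise                 # score each assessor actually gives

noise_share = noise.var() / observed.var()
print(f"Share of score variance due to noise: {noise_share:.0%}")  # about 50%
```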

Smarter scoring

So how do we reduce the influence of both assessor bias and simple “noise”?

First, assessor scores need to be “standardised” or “normalised”. This means rescaling all assessors’ scores to have the same mean (standardisation) or same mean and standard deviation (normalisation).
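
In code, the two rescalings look something like this. It is a minimal sketch, assuming each assessor’s scores sit in one column of a matrix of made-up numbers:

```python
import numpy as np

# rows = applications, columns = assessors (toy scores for illustration)
scores = np.array([
    [6.0, 4.5, 5.0],
    [5.5, 3.5, 4.0],
    [7.0, 5.0, 6.0],
])

col_mean = scores.mean(axis=0)
col_std = scores.std(axis=0)

standardised = scores - col_mean              # every assessor now has the same mean (zero)
normalised = (scores - col_mean) / col_std    # same mean and same standard deviation
```

Hard markers and easy markers are pulled onto the same scale, so an application is no longer penalised simply for drawing a tough assessor.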


Read more: The NHMRC program grant overhaul: will it change the medical research landscape in Australia?


This is a no-brainer. You can use a pretty simple Excel model (I have done it) to show this would substantially reduce the noise.

Second, the NHMRC could use other statistical tools to reduce both bias and noise.

One method would be to take the average ranking of applications across five methods:

  • with the raw scores (i.e. as done now)
  • with standardised scores
  • with normalised scores
  • dropping the lowest score for each application
  • dropping the highest score for each application.

The last two “drop one score” methods aim to remove the influence of potentially biased assessors.

Applications that rank above the funding cut-off on all five methods are funded. Those that fall below the cut-off on all five are not.

Applications that make the cut-off on some methods but not others could be sent out for further scrutiny – or the NHMRC could judge them by their average rank across the five methods.
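
The procedure could be prototyped in a few dozen lines. The sketch below is my own illustration of the five-method idea described above, using made-up scores and an arbitrary cut-off; it is not the NHMRC’s implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_apps, n_assessors = 100, 5
scores = rng.normal(5, 1, size=(n_apps, n_assessors))   # toy raw scores

def rank(summary):
    """Rank applications 1 (best) to n, where a higher summary score is better."""
    order = np.argsort(-summary)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(summary) + 1)
    return ranks

centred = scores - scores.mean(axis=0)                   # standardised scores
z = centred / scores.std(axis=0)                         # normalised scores

methods = {
    "raw":          rank(scores.mean(axis=1)),
    "standardised": rank(centred.mean(axis=1)),
    "normalised":   rank(z.mean(axis=1)),
    "drop_lowest":  rank((scores.sum(axis=1) - scores.min(axis=1)) / (n_assessors - 1)),
    "drop_highest": rank((scores.sum(axis=1) - scores.max(axis=1)) / (n_assessors - 1)),
}

cutoff = 15                                              # fund the top 15 (illustrative)
all_ranks = np.vstack(list(methods.values()))
fund = (all_ranks <= cutoff).all(axis=0)                 # above the cut-off on every method
reject = (all_ranks > cutoff).all(axis=0)                # below the cut-off on every method
borderline = ~(fund | reject)                            # send for further scrutiny...
average_rank = all_ranks.mean(axis=0)                    # ...or judge by average rank
```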

This proposal won’t fix the problem with the total amount of funding available, but it would make the system fairer and less open to game-playing.

A less noisy and fairer system

Researchers know any funding system contains an element of chance. One study of Australian researchers found they would be happy with a funding system that, if run twice in parallel, would see at least 75% of the funded grants funded in both runs.

I strongly suspect (and have modelled) that the current NHMRC system is achieving well below this 75% repeatability target.
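
The repeatability target itself is easy to estimate by simulation. The sketch below reuses the toy noise model from earlier with an assumed 10% funded rate; the numbers are illustrative, not the author’s own modelling:

```python
import numpy as np

rng = np.random.default_rng(2)
n_apps, n_assessors, n_funded = 1000, 5, 100   # 10% funded rate (assumed)
true_score = rng.normal(0, 1, size=n_apps)

def funded_set(true_score):
    """Run one round of assessment with fresh random noise and return the funded set."""
    observed = true_score[:, None] + rng.normal(0, 1, size=(n_apps, n_assessors))
    mean_score = observed.mean(axis=1)
    return set(np.argsort(-mean_score)[:n_funded])

run_a, run_b = funded_set(true_score), funded_set(true_score)
overlap = len(run_a & run_b) / n_funded
print(f"Grants funded in both parallel runs: {overlap:.0%}")
```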

Further improvements to the NHMRC system are possible and needed. Assessors could provide comments, as well as scores, to applicants. Better training for assessors would also help. And the biggest interdisciplinary grants should really be assessed by panels.

No funding system will be perfect. And when funding rates are low, those imperfections stand out more. But, at the moment, we are neither making the system as robust as we can nor sufficiently guarding against wayward scoring that goes under the radar.


Read more: 7 things the Australian Research Council review should tackle, from a researcher's point of view



Tony Blakely was a member of the Peer Review Advisory Committee of the NHMRC, convened in 2021–22 to advise the NHMRC on improving the peer review process. However, the analysis and recommendations here are his own and do not reflect the final report of that committee.

This article was originally published on The Conversation. Read the original article.
