Research into merging human brain cells with artificial intelligence has received a $600,000 grant from the Department of Defence and the Office of National Intelligence (ONI).
The research team, led by Monash University and Cortical Labs, previously created DishBrain – lab-grown brain cells capable of playing the vintage video game Pong.
Associate Professor Adeel Razi, from the university’s Turner Institute for Brain and Mental Health, said their work “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms”.
Hundreds of thousands of live, lab-grown brain cells learn to perform tasks such as playing Pong. A multi-electrode array reads the cells’ electrical activity to move the “paddle” and delivers electrical feedback telling them when it has hit the “ball”.
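In rough outline, that closed loop works something like the sketch below – an illustrative Python toy with made-up names and numbers, not the researchers’ code: simulated activity is decoded into a paddle move, and the stimulation sent back depends on whether the paddle meets the ball.

```python
# Hypothetical sketch of the closed loop described above. All names, numbers
# and dynamics are illustrative; this is not the DishBrain implementation.
import random

GRID_HEIGHT = 8   # discrete vertical positions for the ball and the paddle

def decode_activity() -> int:
    """Stand-in for reading the electrode array and decoding a paddle move."""
    return random.choice([-1, 0, 1])   # move down, stay, move up

def stimulate(hit: bool) -> str:
    """Feedback delivered through the same electrodes: predictable stimulation
    for a hit, unpredictable noise for a miss."""
    return "predictable_pulse" if hit else "random_noise"

def rally(paddle: int) -> tuple[int, bool, str]:
    """One ball arrival: move the paddle, check for a hit, choose the feedback."""
    ball = random.randrange(GRID_HEIGHT)
    paddle = max(0, min(GRID_HEIGHT - 1, paddle + decode_activity()))
    hit = abs(paddle - ball) <= 1
    return paddle, hit, stimulate(hit)

if __name__ == "__main__":
    paddle, hits = GRID_HEIGHT // 2, 0
    for _ in range(1000):
        paddle, hit, _feedback = rally(paddle)
        hits += hit
    print(f"hit rate with random moves: {hits / 1000:.2f}")
```

In the real experiments it is the culture itself, not a random number generator, that produces the activity, and the feedback is what nudges its behaviour over time.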
In an article published in the journal Neuron, the researchers wrote that a synthetic biological intelligence “previously confined to the realm of science fiction” could be within reach.
Razi said the team won the ONI and Department of Defence National Security Science and Technology Centre grant because a new type of machine intelligence that could “learn throughout its lifetime” was needed.
Such intelligence would improve machine learning for technology including self-driving cars, autonomous drones and delivery robots, he said.
“This new technology capability in the future may eventually surpass the performance of existing, purely silicon-based hardware.
“The outcomes of such research would have significant implications across multiple fields such as – but not limited to – planning, robotics, advanced automation, brain-machine interfaces, and drug discovery, giving Australia a significant strategic advantage.”
Brains are good at lifelong learning, which is needed to gain new skills, adapt to change, and apply existing knowledge to new tasks. Artificial intelligence, by contrast, suffers from what researchers call “catastrophic forgetting”: when it is trained on a new task, it loses the information it learned from previous ones.
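The effect is easy to reproduce in a toy setting. The sketch below – an assumed example in plain Python and numpy, unrelated to the study itself – trains a single linear classifier on one task, then on a second task whose labels conflict with the first, and its accuracy on the original task collapses.

```python
# Toy, numpy-only illustration of "catastrophic forgetting" (assumed example).
import numpy as np

rng = np.random.default_rng(0)

def make_task(flip: bool, n: int = 500) -> tuple[np.ndarray, np.ndarray]:
    """2-D points labelled by the sign of the first coordinate; task B flips the labels."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w: np.ndarray, X: np.ndarray, y: np.ndarray,
          epochs: int = 200, lr: float = 0.1) -> np.ndarray:
    """Plain logistic-regression gradient descent, continuing from weights w."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    return float((((X @ w) > 0) == y).mean())

X_a, y_a = make_task(flip=False)   # task A
X_b, y_b = make_task(flip=True)    # task B contradicts task A

w = train(np.zeros(2), X_a, y_a)
print(f"accuracy on task A after learning A: {accuracy(w, X_a, y_a):.2f}")

w = train(w, X_b, y_b)             # keep training, but only on task B
print(f"accuracy on task A after learning B: {accuracy(w, X_a, y_a):.2f}")
```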
The DishBrain research aims to understand the biological mechanisms behind ongoing learning.
“We will be using this [national intelligence and security discovery research] grant to develop better AI machines that replicate the learning capacity of these biological neural networks,” Razi said.
“This will help us scale up the hardware and methods capacity to the point where they become a viable replacement for in-silico computing [using simulations].”
The news comes as AI leaders call on the government to recognise “the potential for catastrophic or existential risks from AI”.
The organisation Australians For AI Safety has written a letter, signed by academics and industry heads, to the industry, science and technology minister, Ed Husic.
Husic has announced a government review of AI, saying “what we want is modern laws for modern technology”.
The letter calls on him to “recognise that catastrophic and existential consequences are possible”, to work with the global community to manage the risks, to support research into AI safety, and to “urgently train the AI safety auditors that industry will soon need”.
The group’s spokesman, Greg Sadler, said Australia was “falling behind” when it came to addressing the risks posed by AI.
“What’s alarming is that even deliberate and methodical bodies like the United Nations have recognised the potential for catastrophic or existential risks from AI, but the Australian government won’t,” he said.
Husic said when launching the review that using AI safely and responsibly was “a balancing act the whole world is grappling with”.
“The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud,” he said.
“But as I have been saying for many years, there needs to be appropriate safeguards to ensure the safe and responsible use of AI.”