Artificial intelligence stole the limelight from Nobel laureates in both physics and chemistry this year, with winners noting its immense power but warning of uncontrolled development.
This Wednesday, the Nobel Prize in Chemistry was awarded to three pioneering scientists whose work has revolutionised our understanding of proteins – the fundamental building blocks of life – through the use of artificial intelligence.
The honour was shared by David Baker of the University of Washington, and by Demis Hassabis and John Jumper, both of Google DeepMind, the London-based AI research lab.
Baker has been instrumental in advancing the understanding of protein structures.
His breakthrough came in 2003 when he successfully designed a new protein, opening the door to numerous other innovative creations.
Since then, Baker's research group has developed a wide variety of novel proteins with practical applications, such as pharmaceuticals, vaccines, nanomaterials, and highly sensitive molecular sensors.
AI 'cracks the code'
Meanwhile, Hassabis and Jumper achieved a revolutionary feat by developing AlphaFold, an AI model capable of predicting the structure of nearly all known proteins.
This includes the roughly 200 million proteins identified by researchers worldwide.
In 2020, their model effectively solved the long-standing problem of protein structure prediction, a significant leap forward for both biology and computational science.
According to Heiner Linke, chair of the Nobel Committee for Chemistry, this achievement is akin to “cracking the code” of protein structure prediction.
He went on to explain that understanding protein structures is essential because proteins are the molecules that make life possible, forming bones, tissues, skin, and other biological materials.
Knowing the shape of these molecules is crucial to understanding their functions, and thus how life operates at a molecular level.
The significance of their work goes beyond just biology or chemistry – it showcases the immense potential that AI holds in transforming scientific fields.
'Potential dangers'
While AI has revolutionised fields like chemistry and biology, it has also sparked concerns about its rapid and uncontrolled development.
These concerns were highlighted earlier this week by John Hopfield, one of the recipients of the 2024 Nobel Prize in Physics for his foundational work on artificial neural networks.
Hopfield, a professor emeritus at Princeton, expressed deep concern over the unpredictable nature of modern AI technologies, which he warned could lead to potentially catastrophic dangers if not properly understood and regulated.
Speaking from the United Kingdom via video link, the 91-year-old Hopfield drew parallels between AI and two other powerful technologies: biological engineering and nuclear physics.
He argued that both these technologies have brought about both positive and negative consequences, and cautioned that AI might follow a similar trajectory.
“As a physicist, I’m unnerved by something that has no control, something which I don’t understand well enough,” Hopfield said, warning of the unforeseen risks that AI could pose if allowed to develop unchecked.
Hopfield is famous for creating the "Hopfield network," an artificial neural network model that simulates how biological brains store and retrieve memories.
His work laid the foundation for the later advances of Geoffrey Hinton, who is widely regarded as the “Godfather of AI”.
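The associative-memory idea behind the Hopfield network can be illustrated in a few lines of code. The sketch below is not from the laureates' work; it is a minimal toy version, assuming binary units, Hebbian learning, and synchronous updates, showing how the network restores a stored pattern from a corrupted copy.

```python
# Toy Hopfield network: store one binary pattern, then recall it
# from a noisy version. Illustrative only; real models are far larger.

def train(patterns):
    """Hebbian learning: weights are averaged outer products of the patterns."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=10):
    """Synchronously update all units until the state settles on a stored memory."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

memory = [1, 1, -1, -1, 1, -1, 1, -1]
W = train([memory])

noisy = list(memory)
noisy[0] = -noisy[0]  # flip one unit to "corrupt" the stored memory
print(recall(W, noisy) == memory)  # True: the network restores the pattern
```

Stored patterns act as attractors: a partial or corrupted input is pulled back toward the nearest memory, which is the sense in which the model mimics how brains retrieve memories from incomplete cues.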
'Downfall of civilisation'
Hinton’s contributions included the "Boltzmann machine," which introduced randomness into AI models and helped pave the way for today's deep learning systems.
However, Hinton has also become a vocal critic of AI’s rapid evolution, expressing grave concerns that AI could surpass human intelligence and, potentially, gain control over human affairs.
Both Hopfield and Hinton have called for a deeper understanding of AI systems, with Hopfield stressing that AI’s collective properties may not be fully predictable. He referenced the fictional "ice-nine" from Kurt Vonnegut's Cat's Cradle, a man-made form of ice that accidentally freezes the world’s oceans, leading to the downfall of civilisation.
While this scenario is fictional, Hopfield believes it serves as a warning about the unintended consequences of powerful technologies.
OpenAI comes to Paris
These warnings come amid a wave of rapid advancements in AI, including the global expansion of OpenAI, the company behind ChatGPT.
OpenAI, based in California, announced this week that it will open new offices in Paris, Brussels, Singapore, and New York, reflecting its growing international influence.
The company recently raised $6.6 billion in a funding round valuing it at an estimated $157 billion.
According to Clara Chappaz, the new French Secretary of State for Artificial Intelligence and the Digital Economy, OpenAI’s expansion into France highlights the growing dynamism of the French AI ecosystem, a sector that the government aims to position as a world leader.
Paris will also host the next major international AI Action Summit in February 2025, where global leaders will discuss the future of AI and its impact on society.
Amid the growing influence of artificial intelligence, unresolved tensions persist between OpenAI and French press organisations, particularly concerning the use of content produced by the media.
Nevertheless, OpenAI’s expansion further highlights the central role AI is set to play in shaping the future of technology, science, and everyday life – for better or for worse.
(with newswires)