Evening Standard
Lewis Liu

Forget the Terminator — the real AI threat is coming from the same people who are warning us

Humans have finally overreached themselves. Through the creation of artificial intelligence, we have sown the seeds of our own eventual destruction. Job losses will only be the beginning: in a matter of years, AI will be stalking the streets and the web and killing us, Terminator-style...

...At least, that’s what world leaders will suggest at the AI Safety Summit being hosted in the UK next month, according to a draft communique warning of AI’s potential for “catastrophic harm”. The draft warns that artificial intelligence systems could be used to launch cyber-attacks and create bioweapons, necessitating “especially urgent” measures to prevent this.

This echoes what Sam Altman - both the founder of OpenAI and, paradoxically, one of the loudest warning voices out there - would have us believe.

It’s a morbidly appealing vision: one which has powered Hollywood and sci-fi for decades. But it's not plausible. So now that we’ve got that out of our systems, we can start asking the sensible questions, like: how much societal disruption can we actually expect from the rise of artificial intelligence? And why are today’s AI leaders so determined to scare us?

The societal issues around AI are already emerging: job displacement is a big one. But the more insidious threat - one which could challenge our notions of what it means to be human - is coming from a very different and rather mundane source: property. Property rights are a fundamental tenet of modern democracy and capitalism, and AI companies are threatening them.

Let me explain. Many AI companies have a cavalier approach to creativity and intellectual property. No measures have been put in place to ensure that creators receive any payment, compensation or even recognition for the images, words and music that have helped train generative AI. ChatGPT will write you a script for Frozen in the style of Quentin Tarantino; Midjourney will create an image of a sunbathing penguin in the style of Picasso.

The question is: who owns the IP? The writers’ strike in LA is a great example of this: the status quo sees writers essentially incentivized against creating new work, because there is a risk it will be sucked up by a large language model and used to automate away their future work. Ensuring that creators are compensated for their creations is the only way to secure a fair and equitable future for generative AI.

I think we all know that a big part of the answer is regulation. So: how can we get this right? If only there were a way to know how this would play out…

Thankfully, there is: it’s called social media. Fifteen years ago, regulators didn’t know (let’s give them the benefit of the doubt) what sort of threat mass digital communication could pose: they couldn’t have foreseen foreign powers attempting to influence elections, as happened in 2016, or the myriad harms perpetrated against children and young people. But even today, Western regulators still struggle to enact legislation which makes social media safe for democracy and vulnerable groups. AI leaders and policymakers have some serious lessons to learn.

If it has been dangerous to have regulators asleep at the wheel on social media over the past decade, it could be catastrophic for governments to do the same with AI. Or worse: to take the current generation of tech leaders - Sam Altman among them - at their word, as this draft communique suggests.

This is the iPhone moment for AI, and the decisions we make now will have huge ramifications for the societies of the future.

The trouble is that the leaders pushing generative AI forward are also - counterintuitively - the ones ringing the alarm bell and seeking to embed themselves in the policymaking process. OpenAI and its peers have been among the most influential lobbyists in the run-up to the Summit.

They certainly care about policy, but - following the amoral playbook laid down by Big Social Media - the goal is regulatory capture, and a seat at the table that decides who can compete. If it sounds disingenuous, that’s because it is.

And it’s a playbook we have seen before: 15 years ago, a young entrepreneur, the founder of a digital tracking app, went straight to the media to warn them and the general public about the risks of his own creation. His name was Sam Altman: warning of the damage he could do to humanity, while hoping to make some money in the process.

It’s not too late for the AI Safety Summit to change course. There are two ways to avoid falling victim to the disastrous AI future which tech leaders like Sam Altman are simultaneously warning against and perpetuating. The first is to find a way to regulate that addresses safety, privacy and property concerns without killing any chance of competition from smaller operators. To do otherwise is to blindly follow the path that Sam Altman has set out for us.

The second is for enterprises to be part of this discussion from the get-go: not just the likes of OpenAI and DeepMind, but the smaller operators like us and our many peers who are doing exciting, valuable work with AI yet have been excluded from the Summit. I say to world leaders and regulators: look beyond the headlines, and look at who stands to gain the most from an AI-powered world.
