The Guardian - UK
Comment
James Wise

Imagine your child calling for money. Except it’s not them – it’s an AI scam

Illustration: Deena So'Oteh

This year, I was sent a link to a video of myself, passionately explaining why I had invested in a new technology company. In the video I spoke enthusiastically about the great faith I had in the company’s leadership and encouraged others to try the service out. The problem was that I had never dealt with the company, nor used its product.

It looked and sounded like me, right down to the fading Mancunian accent. But it wasn’t. It was an AI-generated fake, created for a business pitch and designed to wow me into investing. Far from impressing me, it left me concerned about the myriad ways these new tools could be used for fraudulent purposes.

From data breaches to phishing attacks, where fraudsters trick people into sharing passwords or sending money to an unknown account, cybercrime is already one of the most commonly experienced and pernicious forms of crime in the UK. In 2022, the UK had the highest number of cybercrime victims per million internet users in the world. In part we are victims of our own digital success. Britons have been fast to adopt new technologies such as online shopping and mobile banking, activities that cybercriminals are keen to exploit. As AI becomes more sophisticated, these criminals are being given even more ways to trick us into believing they are someone they are not.

Many of the impressive advancements in human imitation are being developed on our doorstep. The company ElevenLabs has built and released a tool that can almost perfectly replicate any accent, in any language. You can go on its website and have its pre-trained models read out statements using the fast-talking New Yorker “Sam” or the more mellow, midwestern tones of “Bella”.

The London-based company Synthesia goes further. Its technology allows customers to create entirely new salespeople: you can generate a photorealistic video of a synthetic person speaking in any language, pitching your product or providing customer support. These videos are incredibly lifelike, but the person doesn’t exist.

ElevenLabs makes the rules about the use, and misuse, of its technology very clear. It explicitly states that “you cannot clone a voice for abusive purposes such as fraud, discrimination, hate speech or for any form of online abuse”. But less ethical companies are launching similar products at pace.

It is rather ironic that imitating humans, for good or ill, is one of the first major uses of AI. Alan Turing, the godfather of modern computing, created the Turing test, which he originally called the “imitation game”, to assess a machine’s ability to fool a human into thinking it, too, was human. Passing this test quickly became a benchmark for an AI developer’s success. Now that anyone can create synthetic people at the click of a button, we need an anti-Turing test to establish who is real and what is generated.

How will you now know, when you get a video call from your teenage child asking for emergency gap-year funds, that it is really them? How should you respond to an agitated voicemail that sounds like it’s from your boss demanding you wire the company funds, when you can no longer be sure it is really them? These questions are no longer hypotheticals.

Fortunately, some services exist already to tackle this challenge. Just as quickly as ChatGPT was adopted by canny students to complete their homework, AI-detection tools such as Originality.ai were released to tell teachers the likelihood that an essay was in fact written by AI. Similar solutions are in development to assess whether a video is real, relying on pixel-level mistakes that still give away even the most sophisticated AI tools.

And new initiatives are being launched. Synthesia is among many members of the Content Authenticity Initiative, which was started in 2019 to provide users with more insight into where the content they receive comes from, and how it was created. More controversially, but perhaps inevitably, a national form of digital identity – a way of verifying whether you are talking to a real person or a bot – will almost certainly be required if you want to separate your mate from a fake.

In the interim, much greater efforts need to be made to raise public awareness of the growing sophistication of cybercriminals, and of just what is now possible. While we wait for governments to act and regulation to be drawn up, there is the much more immediate risk of a thousand AI tricksters exacerbating Britain’s existing cyber-fraud problem.

  • James Wise is a partner at the venture capital firm Balderton, and a trustee of the thinktank Demos
