The Guardian - UK
Comment
Ellen Judson

2024 will be a litmus test for AI’s effect on elections – and voters’ faith in them

Illustration: Deena So Oteh for the Guardian

Next year will be a bumper year for democracy across the world, with general elections in India and Mexico, elections to the European parliament, and presidential elections from the US to Venezuela to Taiwan. With a UK general election also due no later than 28 January 2025, a significant proportion of the global population will go to the polls.

But this tidal wave of political activity will also mark the first major electoral cycle in the era of widespread generative AI. The “fake news” rows of recent years will be intensified and accelerated in ways we are only just beginning to imagine.

This is my prediction: the major disruption brought by this generation of AI will be to further damage how citizens think about the information they see. It feels almost inevitable that we will see scandals in which political candidates are accused of using AI to generate their content, even when they haven’t. People will question everything, trust less and turn off. This pattern, often exploited in disinformation campaigns, not only deceives but creates chaos, so that people no longer know what to trust. When so much is questionable, they simply give up on even trying to sort fact from fiction.

Efforts are already under way to tackle content veracity: the Content Authenticity Initiative, for example, seeks to establish standards so that the provenance of information can be verified. Newsrooms, meanwhile, are developing codes and policies governing how they will or will not use generative AI in their own work.

Political parties should take the lead and commit to transparency about how they are using generative AI by publishing open and accessible policies ahead of election periods. There’s already been a furore in the US over an AI-generated Republican political ad – and criticism, including from the Centre for Countering Digital Hate, of a lack of commitment from political parties to address the issue head-on.

Nothing comparable has happened in the UK yet, but the odds are that it will. It could take the form of AI-generated images of candidates doing heroic things that never happened, or attack ads that are simply made up. At some point, a political candidate will give a speech that was written with ChatGPT (an AI chatbot) and contains “hallucinated” statistics.

Beyond political parties, the risks continue: female politicians who already face online hate on a daily basis will find manipulated sexualised images of themselves circulating, such as pornographic deepfakes. Or outside actors looking to subvert democracy could create AI “content farms”, flooding social media with false or misleading information about elections.

Parties in the UK should take note, mark themselves out from nefarious online actors and get ahead of the inevitable scandals by telling us how they are going to deploy AI. At the very least, developing such policies will make them consider the risks now, before it’s too late.

To find lasting solutions, we need to understand that our problem is much deeper than citizens not being able to reliably identify what information is correct. The mechanisms through which information is produced, distributed and consumed are increasingly opaque to us.

Previously, the hidden mechanism in question was the way in which social media platforms incentivise, curate and distribute information: the mysterious workings of “the algorithm”. All those problems remain, and more, with the potential mass adoption of generative AI technologies whose owners are not open about how they have been developed.

Without transparency about how they have been designed, what data they have been trained on and how they have been fine-tuned, we have no way of knowing what drives the information outputs these tools offer and therefore how much we can rely on them. Instead of citizens being empowered to engage critically with these tools, we’re reliant on the tech companies’ word as to what the benefits and risks are. The tech companies hold all the power, and are unaccountable.

The Competition and Markets Authority in the UK is now reviewing the AI market with an eye on the dangers of people consuming false or misleading information, and the Federal Trade Commission in the US is carrying out a similar exercise. The risks are clearly on regulators’ and policymakers’ minds, which is a step in the right direction. But we haven’t yet cracked how to do digital regulation effectively – and the question is becoming ever more urgent.

The 2024 elections offer a critical litmus test to decide who is in control: citizens and their democratically elected governments, or big tech. We urgently need to address the impact of AI on politics before the damage is done, rather than after things go wrong. The UK’s political parties could play their part by making sure they are transparent about their use of these tools.

  • Ellen Judson is head of CASM (the Centre for the Analysis of Social Media) at the thinktank Demos
