The Conversation
Yu Xiong, Chair Professor of Business Analytics, University of Surrey; Northumbria University, Newcastle

Why the world needs the UN to keep an eye on AI


AI doesn’t have a boss. It doesn’t really care about rules. And most of us don’t have any say over what it will do next.

Yet the technology is all around us, firmly established in workplaces, financial systems, healthcare and defence. So maybe it needs someone to keep an eye on its progress and set some boundaries.

The UN certainly thinks so, and recently decided to set up an independent panel to monitor AI’s future development. It seems like a sensible move, but this attempt to create a successful forum for “rigorous, independent scientific insight” also highlights the inherent difficulties of governing technology on a global scale.

For a start, the US, which dominates AI development, doesn’t want anything to do with the panel. It voted against the UN’s idea (so did Paraguay), calling it “significant overreach”.

But the UN argues that AI affects everyone, and requires some global coordination. UN secretary-general António Guterres has described the new panel as the first “fully independent scientific body dedicated to helping close the AI knowledge gap and assess the real impacts of AI”.

As with some of the UN’s other forums, like the Intergovernmental Panel on Climate Change or the International Atomic Energy Agency, the AI panel would not write the laws, but would help establish common ground rules and standards that everyone can agree on.

AI is a different beast though. Unlike climate policy or nuclear materials, which are the responsibility of national governments, AI’s progress is largely driven by private – and very wealthy – firms.

International coordination is much more difficult, and already the US, the EU and China are taking different approaches to governance.

The EU takes a fairly cautious line, with strict rules on high-risk applications in areas like recruitment or law enforcement. The US favours voluntary standards within the industry. Meanwhile, China treats AI development and control as a matter of state.

When different parts of the world approach things so differently, there is a risk that any attempt at global cooperation will simply not work. Big firms could move their headquarters to whichever part of the world they consider to be the least restrictive. Technical rules can then become geopolitical tools rather than shared protections.

But the biggest challenge goes beyond technical coordination, because AI is fundamentally a technology of power which involves control over information, opportunity and surveillance.


Read more: Could revisiting Asimov’s laws help us avoid AI’s ‘Chernobyl moment’?


There have already been cases of AI being used in predictive policing models that disproportionately target certain communities. It has been part of automated welfare systems that exclude the vulnerable and decide on access to credit or housing.

Digital accountability

This is not the first time that a powerful digital force has surged ahead while oversight lags behind.

I witnessed this firsthand with research I carried out with colleagues about Bitcoin.

When we published our findings about Bitcoin’s massive energy footprint in 2021, the reaction was immediate and global. It triggered a debate that shook the industry, and demonstrated the real-world harm that digital systems can cause.

AI is now on the same path, but the stakes are exponentially higher, affecting not just energy grids, but society itself.

AI-generated political statements, religious sermons and news footage circulate on screens everywhere. And when people cannot reliably distinguish authentic authority from artificial output, social trust is eroded.

AI could make online incitement easier, cheaper, more personalised and more widespread. Some civil society leaders have claimed that digital radicalisation, the process by which people adopt extremist views through online content, could be intensified by these tools.

Societies everywhere are already grappling with AI’s wider social consequences.

The head of the Muslim World League, an international non-governmental Islamic organisation, Mohammad bin Abdulkarim Al-Issa, has warned that AI may “manipulate the ideologies and beliefs that connect and influence billions” with extremist messaging.

Having seen how groups like Islamic State exploited social platforms for recruitment and division, he also argues that the danger lies not only in what is said, but in the loss of identifiable authority behind it. Elsewhere, the Pope has warned that AI must never diminish human dignity or reduce people to data points.


Read more: AI laws overlook environmental damage – here’s what needs to change


These kinds of worries reflect legitimate concerns about how technological platforms can fracture societies when ethical guardrails fall behind. And this is precisely where the UN may have an important role to play.

UN v AI? The UN building in New York. Aditya E.S. Wicaksono/Shutterstock

Historically, its strength has never depended on enforcement power so much as on symbolic authority and its ability to articulate widely shared goals designed to improve people’s lives.

The UN’s 1948 Universal Declaration of Human Rights became the foundation of modern human rights law, by reshaping what governments could plausibly justify. Likewise, the global eradication of smallpox showed how a shared UN-backed objective could enable cooperation even across geopolitical divides.

Perhaps the real question, then, is not whether the UN should try to regulate AI directly. It is whether the world can afford a fragmented AI order defined solely by markets, geopolitics and billionaires, with no common ground.

Because while the promise of AI is staggering, serious and dangerous failings could yet emerge from the unfilled gaps in governance. The UN could help to avoid them.


Yu Xiong does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
