Al Jazeera

What to know about the UK’s AI Safety Summit

Attendees will gather at Bletchley Park, where British code breakers deciphered Nazi Germany's codes more than 80 years ago [File: Justin Tallis/AFP]

Britain is set to open its first artificial intelligence safety summit, bringing together heads of state and tech giants at a technological landmark near London.

The two-day summit begins on Wednesday as concerns grow that the emerging technology may pose a danger to humanity. The meeting will focus on strategising a global, coordinated effort to address the risks and misuse of AI tools.

The summit is led by UK Prime Minister Rishi Sunak, who has called AI “the defining technology of our time”.

Here is what to know about the summit:

Where and when is the summit?

The summit will take place on Wednesday and Thursday.

It is being held at Bletchley Park in Buckinghamshire, where top British code breakers cracked Nazi Germany’s Enigma code during World War II. The group, which included computer science pioneer Alan Turing, used the world’s first digital programmable computer.

Bletchley Park is also home to the National Museum of Computing, which houses the world’s largest collection of working historic computers.

Today, the UK is home to twice as many AI companies as any other European country. The AI sector employs more than 50,000 people and pours 3.7 billion pounds ($4.5bn) into the economy each year. In June, London also became home to the first office outside the United States for OpenAI, ChatGPT’s developer.

What is the summit about?

The summit is centred on ‘frontier AI’, which is defined as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety”, according to OpenAI.

Although Sunak has repeatedly highlighted the potential benefits of AI in recent weeks, he said the technology’s yet-unknown dangers also call for planning and regulation to ensure its safer development.

The summit aims for attendees to “work towards a shared understanding of risks” and coordinate a global effort to minimise them, according to the UK government’s website.

What’s on the agenda?

The agenda includes the possible misuse of AI systems by terrorists to build bioweapons and the technology’s potential to outsmart humans and wreak havoc on the world.

According to a programme released by the UK government, the first day’s agenda includes discussions on risks of frontier AI to global safety and society as well as the threat of losing control over the technology.

On the second day, delegates are to address questions on how those risks can be mitigated and how AI can be scaled up more responsibly. The discussions will look at the roles various groups, from the scientific community to national policymakers, can play in the combined effort.

Who is participating?

About 100 people will attend, but the full guest list has not been made public, according to the Reuters news agency. The attendees will include heads of state, executives from top AI companies, civil society groups and research experts.

Some of the notable figures known to be attending include:

  • US Vice President Kamala Harris
  • China’s vice technology minister, Wu Zhaohui
  • CEO of X, formerly Twitter, Elon Musk
  • European Commission President Ursula von der Leyen
  • United Nations Secretary-General Antonio Guterres
  • Italian Prime Minister Giorgia Meloni, who is the only G7 leader attending
  • OpenAI CEO Sam Altman and executives from other AI companies, including Meta, Anthropic and Google’s UK-based DeepMind

Musk and Sunak will close the summit with a discussion that will be livestreamed on X.

What are the main concerns about AI?

Some have criticised Sunak’s summit for being preoccupied with “far-off dangers” and setting a narrow focus. Experts said at a Chatham House panel last week that broader issues, such as algorithmic bias and its disproportionate impact on marginalised communities, also need to be explored.

A Pew Research Center study found “growing concern” among Americans over the role of AI in daily life, including doubts about whether AI would really improve it and hesitation over its use in healthcare.

Other concerns include AI’s potential to spread misinformation and to lead to job losses and political instability.

Who else is acting on AI risks?

On October 10, Britain’s data watchdog said it had issued a notice to Snapchat for possibly failing to properly assess the privacy risks of its generative AI chatbot to users, particularly children.

In the US, President Joe Biden issued an executive order on Monday to regulate the development of AI.

The European Union is preparing to pass an AI act while G7 countries have agreed to introduce a code of conduct for companies using the technology.

As countries work towards their own rules, the UN is also pushing for a global, collaborative effort.

On October 26, Guterres announced the creation of an advisory body that would address the international governance of AI with technology executives, government officials and academics.
