Tom’s Guide
Technology
Ryan Morrison

Governments unveil new AI security rules to prevent superintelligence from taking over the world

AI robot hand touching human hand.

New rules controlling how artificial intelligence can be developed have been unveiled by governments around the world. The new guidelines were released in the hope of preventing the technology from being used in ways that could harm humanity.

Produced by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC), they build on previous voluntary commitments secured by the Biden Administration earlier this year.

The 20-page agreement has been signed by 18 countries that have companies building AI systems. It urges those companies to develop and deploy the technology in such a way that it keeps customers and the public safe from misuse.

Broad focus on risk

The new rules amount to a non-binding framework for guarding AI systems against abuse. They include recommendations for protecting the data used to train models and for keeping that information secure.

While the guidelines are largely voluntary, the implication is that companies that decline to sign on, or that allow their models to fall into the wrong hands or be misused, could face tougher and more restrictive regulation in the future.

Speaking to Reuters, CISA Director Jen Easterly said that securing a global agreement was vital to the success of the guidelines.

"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," she said.

Secure by design

(Image credit: Shutterstock)

While the global AI safety discussion has focused on the risks of superintelligence and high-risk foundation models, the new guidelines also cover current-generation and narrower AI systems. The emphasis is on protecting data and preventing misuse, rather than on functionality.

The guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance.

They urge companies building AI models to take ownership of security outcomes for their customers, to embrace radical transparency and accountability, and to build organizational structures and leadership that make "secure by design" a top priority.

Secure systems benefit users

Toby Lewis, Global Head of Threat Analysis at Darktrace, said that securing data and AI models against attack should be a prerequisite for any developer.

“Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we’ll realize the benefits of AI faster and for more people.”
