Regulating AI Power: Calculating The Security Thresholds

President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. (AP Photo/Ev

Artificial intelligence (AI) systems are becoming increasingly powerful, raising concerns about potential security risks if not properly regulated. Regulators are focusing on the computing power of AI models as a key indicator of their potential danger.

Currently, AI models trained using more than 10 to the 26th power floating-point operations, a measure of the total computing work that goes into training rather than of speed, must be reported to the U.S. government. California is considering even stricter regulations that could affect AI development in the state.
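To give a sense of scale, here is a rough back-of-envelope sketch in Python of how long a large GPU cluster might run before a training job crosses the 10-to-the-26th-operations line. The per-chip throughput and cluster size are illustrative assumptions, not figures from the article or the executive order.

# Rough illustration: time for an assumed cluster to accumulate 1e26 operations.
THRESHOLD_OPS = 1e26            # reporting threshold: total training operations
PER_CHIP_OPS_PER_SEC = 4e14     # assumed sustained throughput per accelerator
NUM_CHIPS = 10_000              # hypothetical cluster size

cluster_rate = PER_CHIP_OPS_PER_SEC * NUM_CHIPS   # operations per second
days = THRESHOLD_OPS / cluster_rate / 86_400      # 86,400 seconds in a day

print(f"Roughly {days:.0f} days of continuous training to reach the threshold")

On those assumptions the answer comes out to roughly 290 days of nonstop training, which is why the threshold is generally read as targeting only the very largest frontier training runs.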

The concern is that AI systems with such high computing power could be used to develop weapons of mass destruction or carry out catastrophic cyberattacks. Lawmakers and AI safety advocates are working to differentiate between existing high-performing AI systems and the next generation that could be even more potent.

While some criticize these thresholds as arbitrary, others see them as a necessary step to prevent potential harm. President Joe Biden's executive order and California's proposed AI safety legislation both rely on specific computing power thresholds to determine regulatory requirements.

The European Union and China are also weighing similar measures to regulate AI development. The focus on counting floating-point operations is seen as a practical way to assess AI capabilities and risks.

Despite ongoing debate among AI researchers, the floating-point operations count, often shortened to flops, is currently considered the most workable yardstick for evaluating AI technology. It provides a straightforward way to gauge the scale of an AI model and, by extension, its capabilities and potential risks.
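For illustration only, the sketch below applies a common rule of thumb from the research literature, that total training compute is roughly six times the model's parameter count times the number of training tokens, to show how a given training run might be compared against the threshold. The model size and token count are hypothetical, not drawn from the article.

# Illustrative sketch: estimate training compute with the ~6 * params * tokens rule of thumb.
def estimated_training_ops(num_parameters: float, num_tokens: float) -> float:
    """Rough total floating-point operations for one training run."""
    return 6.0 * num_parameters * num_tokens

THRESHOLD_OPS = 1e26  # reporting threshold in the executive order

# Hypothetical model: 500 billion parameters trained on 15 trillion tokens.
ops = estimated_training_ops(5e11, 1.5e13)
print(f"Estimated training compute: {ops:.1e} operations")
print("Above reporting threshold" if ops > THRESHOLD_OPS else "Below reporting threshold")

With those hypothetical numbers the estimate lands at about 4.5 x 10^25 operations, below the threshold, illustrating how the cutoff is meant to separate today's largest models from the still-larger runs regulators are watching for.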

While some tech leaders argue that these metrics are too simplistic and may not effectively mitigate risks, others defend them as a necessary safeguard. The regulatory thresholds are seen as a starting point that can be adjusted as AI technology evolves.

Overall, the debate around regulating AI systems highlights the need for ongoing monitoring and adaptation of regulatory frameworks to ensure the safe development and deployment of AI technology.
