Tom’s Guide
Technology
Amanda Caswell

Why some AI tools are being banned by the US government — and what it means for you

President Trump signing an executive order.

The era of "move fast and break things" in AI may be coming to an abrupt halt. According to a recent New York Times report, the Trump administration is preparing a landmark executive order that would require Big Tech companies to submit their most powerful models for government vetting before they can go public.

This move underscores how the rules are changing: AI is no longer treated as just another tech tool, but as a national security asset. Here's what's behind the conversation.

Why the sudden change?


The catalyst for this shift appears to be the recent limited release of Anthropic’s Claude Mythos. While the model has been touted as a breakthrough in cybersecurity, federal officials have raised alarms about its "frightening" ability to autonomously discover and exploit unpatchable software vulnerabilities in critical infrastructure.

According to the report, the administration’s new stance is driven by three key factors:

  • The 'Mythos' effect: Claims that frontier models are now skilled enough to bypass traditional cyber defenses.
  • Domestic compute sovereignty: A push to ensure the U.S. government has priority access to the world's most powerful processing power.
  • The Anthropic rift: A reported fallout between the White House and Anthropic over military usage rights, leading the administration to lean more heavily on partnerships with OpenAI and Google.

Inside the discussion


Last week, high-ranking White House officials reportedly met with CEOs Sundar Pichai (Google), Sam Altman (OpenAI), and Dario Amodei (Anthropic) to discuss the logistics of a government-led "working group."

The goal of the discussion was reportedly to create a standardized "red-teaming" process in which federal experts audit a model’s capabilities before it is ever launched.

The takeaway

If signed, this order could slow the breakneck pace of AI innovation in ways you’ll actually notice. New “Pro” and “Ultra” model updates may take longer to arrive as they move through a rigorous vetting process, trading speed for added safety.

Supporters say that’s a win for reliability, but critics warn it could give international rivals like DeepSeek an edge if they face fewer restrictions.

This potential shift suggests we may be heading toward a two-tier AI world: government-certified “safe” models for businesses and institutions, and a separate, less regulated lane for hobbyists and power users. Time will tell. For now, it’s a tradeoff: slower progress in exchange for tighter control.

