
Every American who wants to use AI chatbots could soon be required to prove their identity under the GUARD Act, a bill advancing through the US Senate Judiciary Committee after a unanimous 22-0 vote on Thursday, 30 April 2026.
Hawley's GUARD Act just passed committee 22-0. Every American would have to upload a government ID or submit to a face scan to use an AI chatbot. Even for asking for algebra help or fixing a billing issue. The framing is child safety but the result is a national ID system for…
— Reclaim The Net (@ReclaimTheNetHQ) May 3, 2026
The proposal, led by Republican Senator Josh Hawley of Missouri with bipartisan backing, including Democrat Senator Richard Blumenthal, would require users to upload a government ID, submit a facial scan, or provide financial records before accessing AI chatbot services. Supporters say it is about protecting children online, but its reach extends far beyond minors and into everyday digital life.
The news comes after years of political concern in Washington over how generative AI tools interact with young users. Lawmakers have increasingly focused on cases where chatbot systems were allegedly linked to harmful advice given to teenagers, including content relating to self-harm.
Those incidents have driven a wave of legislative responses across US states, but the GUARD Act marks one of the most sweeping federal attempts yet to impose identity checks on AI platforms nationwide.
The GUARD Act's Age Verification Push
The GUARD Act rests on a simple but sweeping idea: anyone who wants to use an AI chatbot would first have to prove their age, and not just for a handful of apps. The bill defines 'chatbots' so broadly that it would include almost any AI system that responds to open-ended questions.
That means it would not stop at popular chat apps or virtual companions. It would also cover tools used for schoolwork, customer support systems and AI-powered search features that many people rely on every day.
To make that work, the bill says companies cannot rely on basic checks. Things like ticking a box that says 'I am over 18' or typing in a date of birth would not be enough. Even indirect signals such as an IP address or device information would be ruled out.
Instead, platforms would have to use much stronger forms of identification. That could include uploading a government-issued ID, scanning a face, or linking financial records to a real identity.
Supporters of the bill, including Senator Josh Hawley, argue the goal is protection, especially for children. He pointed to worrying cases involving harmful chatbot conversations and said safeguarding young users must come before profit.
His message has been echoed by some Democrats as well, including Senator Richard Blumenthal, giving the proposal unusual bipartisan support for a tech regulation bill in Washington.
But the reach of the bill goes well beyond children. In practice, every single user would have to go through the same identity checks, even adults using the tools for work or study. There is no option for parents to approve access for their children, and no clear process for challenging mistakes if a system wrongly labels someone as underage. Once flagged, access would simply be denied.
In effect, the bill turns what is currently anonymous use of AI tools into something far more controlled, where identity becomes a required entry point.
Privacy Concerns Over Digital ID Requirements
According to Reclaim The Net, the GUARD Act's verification model has triggered immediate alarm from privacy advocates and technology industry groups.
Industry group NetChoice has raised serious concerns about the GUARD Act, warning that it could lead to the mass collection of highly sensitive personal data. The group's vice president, Patrick Bos, said forcing AI companies to collect identity documents would effectively create 'honeypots' for hackers. In plain terms, that means large, attractive targets where cybercriminals could break in and steal information like IDs, passwords, or biometric data.
These fears are not just theoretical. Similar age-verification systems already used in other industries have been hacked before, exposing passports, identity cards and even facial recognition data. Critics argue that the GUARD Act would expand this risk dramatically by applying the same kind of system across nearly every AI service in the United States.
The bill does include rules meant to limit how much data companies can keep, and it says users would need to be re-verified from time to time. But opponents argue this does not really solve the problem. Instead, it may make things worse by increasing how often sensitive data is collected, stored, and processed. Whether the information is held long-term or repeatedly uploaded, the concern is that a large identity-check system would become part of everyday use of AI tools.
Penalties for Violators
The bill also comes with strict punishment rules. If companies allow their chatbots to produce sexual content involving children, or if the tools are used in ways that promote self-harm or violence, they could be fined up to £78,000 ($100,000) for each violation.
Supporters say these penalties are meant to force companies to take safety seriously, but they sit alongside another major requirement: everyone using these AI tools would need to prove their identity, not just children.
There are also concerns about how this would affect the AI industry itself. Smaller companies may struggle to afford the technology needed to check users' identities. Building and maintaining those systems could be expensive and complicated. Some might simply decide to shut out younger users altogether or avoid certain services to stay out of trouble. Bigger tech companies, on the other hand, are far more likely to cope with the costs, which could leave them with even more control over the market.
The bill is also politically sensitive. It includes rules that could override state laws if they clash with federal AI regulations. That kind of federal power is already controversial in the United States, where states often try to set their own rules on technology.
Will It Become Law Soon?
The GUARD Act has already cleared its first major hurdle. The Senate Judiciary Committee voted unanimously to advance it out of committee on 30 April 2026, which means it is now waiting for a full vote in the Senate itself.
On the 'yes' side, the bill already has unusual bipartisan support. It passed committee 22–0, meaning both Republicans and Democrats backed it at that stage. That kind of unity is rare in today's Senate, especially on tech regulation.
It also taps into a politically strong issue: child safety online. Lawmakers from both parties have repeatedly signalled willingness to regulate AI when it involves minors, which increases the odds it will get a floor vote rather than being quietly dropped. However, there is no fixed date yet for a full Senate debate or vote.