Hello and welcome to Eye on AI.
Leading AI companies OpenAI and Google took turns making headlines on Monday and Tuesday, announcing a slew of new generative AI products and model updates. On Wednesday, the U.S. government had its moment.
A bipartisan group of senators, led by Chuck Schumer (D-N.Y.), released a long-awaited “roadmap” for regulating AI. The 20-page report is the culmination of an almost year-long listening tour during which the Senate AI Working Group held closed-door meetings with more than 170 tech industry leaders, academic researchers, civil rights leaders, and more to better understand how to regulate the technology. The lawmakers called for the U.S. government to start spending at least $32 billion annually on AI “as soon as possible”—and that’s not including what it intends to spend on AI for defense. But while they acknowledged the variety of harmful applications and currently unfolding consequences of AI, they didn’t propose any specific regulation. Instead, they punted it to the Senate subcommittees, laying out areas where they should focus their efforts, including AI training for the private workforce, mitigating AI’s relentless energy demands, and the problem of AI being used to create election disinformation and child sexual abuse material (CSAM).
Many groups, watchdogs, and experts who provided insights during the listening sessions are calling the result a disappointment: lacking in vision, a delay of much-needed regulation, and a missed opportunity to act when much of the world already has.
“The nine ‘insight forums’ functioned as a stalling tactic,” said Amba Kak and Sarah Myers West, co-executive directors of AI Now Institute, an AI policy research group, arguing in a statement that momentum to regulate AI was instead diverted into a closed-door, industry-dominated—and now industry-benefiting—process.
Alondra Nelson, a former acting director of the White House Office of Science and Technology Policy who participated in the listening forum on supporting U.S. innovation in AI, described the roadmap to Fast Company as “too flimsy to protect our values” and lacking “urgency and seriousness.” Suresh Venkatasubramanian, another former White House official who coauthored the Biden administration’s Blueprint for an AI Bill of Rights, a set of principles intended to guide the development and use of AI systems, said he feels “betrayed” after he and other AI ethicists participated in the discussions “in good faith,” despite concerns about industry regulatory capture.
If there’s one thing Schumer made clear, it’s that U.S. domination of AI is the goal, referring during the press conference to the $32 billion as “surge emergency funding to cement America’s dominance in AI,” including “outcompeting China.” The money would go to R&D, infrastructure, funding the outstanding CHIPS and Science Act, a series of “AI Grand Challenge” programs, and dozens of other government agencies and AI-related initiatives.
Delaying regulation certainly helps U.S. tech companies, especially the dominant players that argue regulation would “stifle innovation,” cement that dominance. The AI Working Group, according to its own objective as stated in the report, was created because AI is too “broad” and “does not neatly fall into the jurisdiction of any single committee.” It seems all this special group decided, however, is that the subcommittees should be the ones to propose legislation after all, and that while billions in taxpayer funding can’t wait, regulation can. Government funding for science and technology is important, and it’s how we got the Internet in the first place. But prioritizing dominance over safety is a dangerous path.
Plenty of AI bills have been introduced in subcommittees only to stall. The Senate Rules Committee did pass three bills yesterday intended to safeguard elections from deceptive AI, but they still need to pass the full Senate and advance in the House—and the clock until the election is ticking. If the AI Working Group hadn’t taken a year only to suggest zero tangible proposals for regulations, progress on mitigating this clear and present danger of AI could be much further along. In the report, the lawmakers also expressed explicit support for a national data privacy law, which, again, is the type of thing they are in charge of introducing and passing.
Schumer told the New York Times, “It’s very hard to do regulations because AI is changing too quickly. We didn’t want to rush this.” Pouring $32 billion a year—plus however much more in defense funding—into AI “as soon as possible” certainly seems like decisive, swift action. At the same time, it’s been almost two years since the launch of a new era of generative AI made the need for regulations ever more pressing. We’ve also had extensive evidence of AI wielding real-world harm long before that. The EU enacted comprehensive AI policy earlier this year, and we’re just a few months out from a heated election that has already seen AI being used to deceive and misinform voters. Considering all this, proposing AI regulation after a year of supposedly working to do so would hardly feel like rushing.
Just last week, I reported how the most comprehensive state AI bill yet blew up because of concerns that the federal government should be taking the lead (not to mention, opposition from the tech industry). At this rate, the states shouldn't hold their breath.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com