By Andy Jung, Reason

California's AI Bill Threatens To Derail Open-Source Innovation

This month, the California State Assembly will vote on whether to pass Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. While proponents tout amendments made "in direct response to" concerns voiced by "the open source community," critics of the bill argue that it would crush the development of open-source AI models.

As written, S.B. 1047 would disincentivize developers from open-sourcing their models by mandating complex safety protocols and holding developers liable for harmful modifications and misuse by bad actors. The bill offers no concrete guidance on the types of features or guardrails developers could build to avoid liability. As a result, developers would likely keep their models closed rather than risk getting sued, handicapping startups and slowing domestic AI development.

S.B. 1047 defines open-source AI tools as "artificial intelligence model[s] that [are] made freely available and that may be freely modified and redistributed." The bill directs developers who make models available for public use—in other words, open-sourced—to implement safeguards to manage the risk of "causing or enabling the creation of covered model derivatives."

California's bill would hold developers liable for harm caused by "derivatives" of their models, including unmodified copies, copies "subjected to post-training modifications," and copies "combined with other software." In other words, the bill would demand superhuman foresight from developers, requiring them to predict how bad actors might alter or deploy their models and to prevent the wide range of harms that could follow.

The bill gets more specific in its demands. It would require developers of open-source models to implement reasonable safeguards to prevent the "creation or use of a chemical, biological, radiological, or nuclear weapon," "mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks," and comparable "harms to public safety and security." 

The bill further mandates that developers take steps to prevent "critical harms"—a vague catch-all that courts could interpret broadly to hold developers liable unless they build innumerable, undefined guardrails into their models.

Additionally, S.B. 1047 would impose extensive reporting and auditing requirements on open-source developers. Developers would have to identify the "specific tests and test results" that are used to prevent critical harm. The bill would also require developers to submit an annual "certification under penalty of perjury of compliance," and self-report "each artificial intelligence safety incident" within 72 hours. Starting in 2028, developers of open-source models would need to "annually retain a third-party auditor" to confirm compliance. Developers would then have to reevaluate the "procedures, policies, protections, capabilities, and safeguards" implemented under the bill on an annual basis.

In recent weeks, politicians and technologists have publicly denounced S.B. 1047 for threatening open-source models. Rep. Zoe Lofgren (D–Calif.), ranking member of the House Committee on Science, Space, and Technology, explained: "SB 1047 would have unintended consequences from its treatment of open-source models…. This bill would reduce this practice by holding the original developer of a model liable for a party misusing their technology downstream. The natural response from developers will be to stop releasing open-source models."

Fei-Fei Li, a computer scientist widely credited as the "godmother of AI," said, "SB-1047 will shackle open-source development" by making AI developers "much more hesitant to write code and collaborate" and "crippl[ing] public sector and academic AI research." A group of University of California students and faculty, along with researchers from more than 20 other institutions, expanded on this point, warning that S.B. 1047 would chill "open-source model releases, to the detriment of our research" because the bill's "onerous restrictions" would discourage developers from releasing their models openly.

Meanwhile, the federal government continues to tout the value of open-source artificial intelligence. On July 30, the Department of Commerce's National Telecommunications and Information Administration released a report advising the government to "refrain from immediately restricting the wide availability of open model weights in the largest AI systems" and emphasizing that "openness of the largest and most powerful AI systems will affect competition, innovation and risks in these revolutionary tools." In a recent blog post, the Federal Trade Commission concluded that "open-weights models have the potential to drive innovation, reduce costs, increase consumer choice, and generally benefit the public — as has been seen with open-source software."

S.B. 1047's provisions are lengthy and ambiguous, creating a labyrinth of regulations that require developers to anticipate and mitigate future harms they neither intended nor caused. Developers would then have to submit detailed reports to the California government explaining and certifying their efforts. Even developers who try to comply in good faith may face liability because of the bill's vague language and broad sweep.

The bill presents itself as a panacea for the harms caused by advanced AI models. Instead, it would sound the death knell for open-source AI development.
