California is getting a series of new laws that crack down on AI deepfakes in the contexts of elections and entertainment. But the fate of the state’s most momentous AI bill to date is yet to be determined.
Governor Gavin Newsom signed five AI-related bills on Tuesday, placing new responsibilities on big online platforms like Facebook and X, and limiting how studios can exploit the likenesses and voices of performers.
The three bills that deal with elections build on a separate law that Newsom signed five years ago, making it illegal to maliciously distribute deceptive audio or visual media that try to discredit a candidate in the immediate run-up to an election. One of the new bills expands the timeframe specified in that law from 60 days to 120 days before an election. (Also in 2019, Newsom signed a bill giving people the ability to sue those who make or share sexual deepfakes depicting them without their consent.)
Another of the new bills, known as the Defending Democracy from Deepfake Deception Act, requires large online platforms to block users from posting “materially deceptive” election-related content as Californians prepare to cast their votes. That means content that depicts a candidate, elected official, or election official saying or doing something they didn’t really say or do.
“Advances in AI over the last few years make it easy to generate hyper-realistic yet completely fake election-related deepfakes, but [the new law] will ensure that online platforms minimize their impact,” said Assemblymember Marc Berman (D-Menlo Park), who proposed the bill.
The third of the election-related bills covers electoral ads, ensuring that any AI-generated or “substantially altered” content comes with a disclosure.
Misleading AI content this year
This year’s presidential election has already featured some misleading AI content, most notably deepfakes distributed by presidential candidate Donald Trump that falsely depicted megastar Taylor Swift and her fans as supporting him. That incident prompted Swift to publicly endorse Trump’s rival, Vice President Kamala Harris.
Trump has also shared AI-generated images that purported to demonstrate his support among Black voters, and that depicted someone resembling Harris addressing a gathering of communists. The latter example is likely the sort of thing California’s new laws would cover, as is the faked audio posted by X owner Elon Musk in which Harris appeared to say she was the “ultimate diversity hire.”
There are as yet no federal laws covering election deepfakes, but there are already state-level laws covering the subject—with varying degrees of strength—in 20 other states, from Washington and New York to Texas and Florida.
California’s efforts are particularly notable because of the state’s large population and the fact that big online companies such as Meta are headquartered there.
Newsom has clashed with Musk over California’s efforts, and the tycoon responded to the signing of the new laws by claiming that the governor had made parody illegal.
Actors' rights
California is, of course, also the traditional home of the U.S. movie industry, and the entertainment-related laws that Newsom just signed are a big win for SAG-AFTRA, the media professionals’ union.
One ensures that performers’ and actors’ voices and likenesses can’t be replicated by AI without their permission: contracts will have to include explicit terms on any such use, with the performer getting a say during negotiations.
The other deals with digital replicas of deceased performers, ensuring that these can’t be commercially used without the consent of their estates.
“It is a momentous day for SAG-AFTRA members and everyone else because the AI protections we fought so hard for last year are now expanded upon by California law thanks to the legislature and Governor Gavin Newsom,” said union president Fran Drescher, who is best known for her roles in The Nanny and This Is Spinal Tap.
Newsom said the new law would allow California’s iconic entertainment industry to “continue thriving while strengthening protections for workers.”
The big one
The governor said yesterday that there were three dozen AI-related bills awaiting his signature. But the most consequential is SB 1047, a pivotal AI safety bill that would force AI companies to ensure their models can’t be used to cause “critical harms” such as biological attacks or massive cyberattacks.
The bill has caused furious debate in the AI community, with some, such as OpenAI and “Godmother of AI” Fei-Fei Li, saying it would harm the U.S. AI sector, and others, such as Musk and Anthropic, supporting its passage.
Also on Tuesday, Newsom said at a Salesforce conference that SB 1047 could have an “outsized impact” and perhaps even a “chilling effect” on the open-source AI community.
“I can’t solve for everything,” he said, indicating that he is still weighing how the bill balances demonstrable risks against more speculative ones.