Fortune
Jenn Brice

California legislation could protect Hollywood actors from having their work fed to AI

(Credit: LOIC VENANCE—AFP/Getty Images)

Actors hope to stop their work from being manipulated with AI through California legislation that would require explicit approval to use a performer’s likeness, even after they’re dead.

Tom Hanks on Friday warned fans about faked endorsements that push “miracle cures and wonder drugs” by using his name and voice. “These ads have been created without my consent, fraudulently and through AI,” the actor said in an Instagram post.

In addition to State Sen. Scott Wiener’s SB 1047, which has drawn criticism from big firms like OpenAI, California lawmakers sent a slew of other AI-oriented bills to Governor Gavin Newsom’s desk this year, including proposals to curb the dangers of deepfakes. Newsom said in July he plans to sign a bill banning election-related deepfakes, but he hasn’t indicated where he stands on other AI proposals.

Two of those measures, Assembly Bills 1836 and 2602, are co-sponsored by SAG-AFTRA, the actors union fighting the threat to Hollywood jobs posed by generative AI, or “digital replicas,” as the bills call them. AB 2602, which cleared the legislature last week, builds on the union’s contract negotiations, while AB 1836, which passed Saturday, would ensure any performer can stop their work from being fed to AI—even after they’re dead.

As AI technology advances and grabs Hollywood’s attention, SAG-AFTRA and other creatives hope expanding publicity rights will help secure job opportunities and fair compensation for actors, while protecting the public from disinformation.

Dangers of deepfakes

From the entertainment industry to the U.S. presidential elections, fake, hyper-realistic images of celebrities have stirred confusion and controversy about the rules around AI use. Donald Trump has raised eyebrows by re-sharing AI-generated images of Taylor Swift fans endorsing him and depicting Kamala Harris giving a speech flanked by Communist flags.

When OpenAI’s voice assistant sounded eerily similar to Scarlett Johansson earlier this year, the actress was “shocked, angered, and in disbelief” at how similar ChatGPT’s voice sounded to her own.

As generative AI grows more sophisticated, regulators and lawmakers are sounding the alarm on the risk of deepfakes. Laws banning deepfakes in political ads or in nonconsensual sexual images are already in place in some states.

The Federal Election Commission is eyeing misuse of AI in political campaigns, and the Federal Trade Commission is cracking down on business and government impersonations. 

Of course, performers and politicians aren’t the only ones concerned about deepfaking—the public is on edge as it becomes easier and cheaper to generate misleading images, and even videos, of just about anyone whose face appears on the internet. 

There’s been growing concern over increasingly realistic robo-calls, in which scammers can manipulate the voice of anyone who is recorded anywhere on the internet to con and defraud that person’s loved ones. And teens are struggling as their classmates weaponize easy-to-use face-swapping apps to create violating images.

The actors guild says its legislative priorities also include the No Fakes Act pending in the U.S. Senate, which would grant broader publicity protections against generative AI. These sorts of laws are important for “anyone who has recorded audio,” said Tim Friedlander, director of the National Association of Voice Actors.

Protection for workers 

These latest California bills try to protect an artist’s ability to control how their face or voice is used—and to stop the technology from being used against them. 

The legislation defines a digital replica as a “computer-generated, highly realistic electronic representation” that is easily recognizable as a real person’s voice or likeness. These replicas can either alter a true event or depict the person doing something they never did while they were alive.

The proposal clarifies existing law that already bans unauthorized use of a personality’s name, image or likeness for 70 years after they die. Under current law, digital replicas could still be allowed in “expressive works,” like news broadcasts, works of art or satire. With the new legislation, digital replicas would not only be banned in advertising campaigns, commercial products or to otherwise spread deceptive information—but also in creative works without the performer’s consent.

SAG-AFTRA and WGA members and supporters walk the picket line in support of the SAG-AFTRA and WGA strike on Day 2 at the Paramount Pictures Studio on July 14, 2023 in Los Angeles, California. (Photo by Gilbert Flores/Variety via Getty Images)

Both Silicon Valley and Hollywood challenged the bill. The Computer & Communications Industry Association, Technet and Electronic Frontier Foundation voiced opposition for the tech industry, arguing that the bill is overbroad and would stifle expression protected under the First Amendment.

For actors and other entertainment industry workers, however, the bills address growing concerns about the threat that generative AI could pose to their careers. AI was one of several key issues, along with wage increases and streaming-based bonuses, during the high-profile strikes by actors and screenwriters last summer.

Creative workers beyond the film industry are also concerned about whether generative AI will kill their job prospects—video game performers picketed last month after disputes over whether gaming studios could and would use AI to replace human artists stalled contract negotiations.

The SAG-AFTRA strikes in the summer of 2023 ultimately secured a contract that would require clear consent if a studio wants to repurpose a performer’s materials using AI. If Newsom approves AB 2602, legislation with similar language, non-union actors will get the same protection.

That’s important for voice actors, given that more than 80 percent aren’t union members, Friedlander said. The new legislation would give them the same protections that a good contract would. Otherwise, their work could be used and retooled by AI beyond its original purpose. 

“We don’t want to be training our competition,” Friedlander said.

Creatives are also ensuring that these protections apply after they’re long gone. Without a law that prohibits digital replicas of deceased performers, they fear that studios could use AI trained on their recordings decades after their performance and after their death—without having to pay an actor or their estate.

“It is imperative and urgent that we establish these protections for individuals and families in a world with generative artificial intelligence,” said SAG-AFTRA Executive Director Duncan Crabtree-Ireland in a statement.
