The artificial intelligence text generator ChatGPT has been the subject of debate recently, but let’s take the AI conversation a step further, to Hollywood. More and more, I’m seeing examples of manipulated performances pop up on social media — altering an actor’s gaze or the words they’re saying — and they’re being presented with unmitigated enthusiasm.
Maybe the folks behind some of those posts have a vested interest in seeing AI technology become the norm. Expected. But where’s the skepticism? Where’s the wariness?
Let’s step into the breach.
Consider the Twitter user who grabbed a monologue from Hulu’s “The Bear” featuring Jeremy Allen White and used AI technology to change White’s line of sight. In the new video, instead of looking off camera, he’s making direct eye contact with the viewer. Both versions were posted side-by-side for comparison.
“Why are you doing this exactly?” someone asked. The original poster responded with a gif: “Some men just want to watch the world burn.”
The technology can be used on a live broadcast. Maybe that’s useful for video meetings or certain interview scenarios. But I also wonder if there are studio executives who’ve always thought actors were too precious about their performances and here’s a way to tweak some of that after the fact.
There’s another tweet floating around touting a different AI program that can seamlessly change an actor’s dialogue — from the use of a profanity to something more family friendly: “This is mind blowing technology. Generative AI will completely change how films are made.”
That’s probably a good bet.
But does that also sound like advertising copy to you? Are influencers being corralled to shape and steer the conversation? Because it is conspicuous that nobody is talking about ethical boundaries and how this technology should or shouldn’t be used.
The question of AI and ethics recently hit the world of Twitch streamers (gamers who stream their games live) when someone was caught, and subsequently apologized for, “buying and watching deepfakes from an account that makes non-consensual, sexually explicit AI-generated videos of his colleagues in the streaming world,” according to a report from Vice. To put it in simpler terms: AI was used to make porn of his friends. And he watched it.
Even if actor contracts are ironclad to prevent this sort of thing on the studio level, what’s to stop anyone else from using AI on actors for this very purpose? Here’s what Vice had to say: “Most states in the U.S. have laws against non-consensual sexual material, but only California, Virginia, and Texas specifically name deepfakes. Even with laws in place that penalize people who make and spread malicious deep fakes, getting recourse as a victim can be extremely difficult.”
None of this fully acknowledges the wider danger of deepfake videos, and their potential to spread misinformation, upend public opinion and politics and cause all kinds of mayhem.
Here’s a different example I came across recently: Audio of Mariah Carey “singing” Janet Jackson’s “Together Again,” a feat accomplished through AI rather than Carey’s own vocal cords. “Made for entertainment purposes only!” the caption reads. “I get no profit off of it other than entertainment.” Someone posted a response that sent me into a depressive spiral: “I can’t wait till AI gets better OMG the amount of ideas I have.”
Auto-Tune is already a thing in music. Is this just the next logical step? Why are we so blasé about deepfaking or cloning someone’s voice?
Are there better uses for AI than … this? Or as someone asked: “What if we created AI that can streamline finding a cure for cancer or AIDS instead?”
Also this week, filmmaker Robert Zemeckis announced that Tom Hanks and Robin Wright, the stars of his next film, will be de-aged using AI that will create “photorealistic face swaps and de-aging effects on top of actors’ performances live and in real time without the need for further compositing or VFX work.”
Four years ago, Martin Scorsese played around with de-aging in “The Irishman.”
But this detail is new: The AI company Zemeckis is working with has entered into a “strategic partnership with CAA to develop generative AI tools and services for talent” — with an “ethics-first approach.”
No word on what those ethical considerations may be, but I’d like to think one of the biggest agencies in Hollywood is taking these questions seriously as they seek to “unlock an incredible opportunity for the entertainment industry and beyond,” as Joanna Popper told The Hollywood Reporter. She’s the chief metaverse officer at CAA, which is apparently now a job description at a major Hollywood agency.
A few years ago, entertainment analyst Richard Janes told me that, in the not so distant future, he thinks actors will have their three-dimensional likeness scanned, just as a matter of course, on each project for which they are hired. The possibilities are endless; instead of reshoots, a director could simply use AI and an actor’s scans to redo the scenes in question. “You can envision it as an insurance requirement,” Janes said. But there are also risks: What happens to the actor’s scan once they’re done filming? Who owns that?
Ripple effects are inevitable. Will awards bodies set requirements dictating what percentage of a performance needs to come from a live human being, unadulterated by AI, in order to be nominated? Maybe we’ll look back on simpler days and think: Remember when performances were created entirely by the actor?
The Netflix animated series “BoJack Horseman” was nothing if not a thorough satire of Hollywood and, as the culture writer Haaniyah Angus recently pointed out, the show anticipated this very thing. Sitting in his trailer, BoJack gets 3D scanned and is casually informed by a producer that “one day that’s going to be the actor’s whole job, just sittin’ in a room for five seconds while a machine scans his face — and then six months later plugging a movie on Kimmel.”
BoJack’s priceless response: “I gotta go on Kimmel now, too?”
Even AI, it seems, can’t replace the annoyances of the old-fashioned press push.