Qualtrics chief security officer Assaf Keren wanted to make his colleagues more aware of the increasing quality and dangers of deepfakes. So he created a video that showed the company’s president as an evil AI persona who was sent from the future to warn workers about the looming threat of video and voice impersonations.
He spent just $20 on the experiment. “Because I was lazy, not because I couldn’t do it for free,” Keren said at Fortune’s Brainstorm AI in San Francisco this week.
He acknowledged that Qualtrics, a cloud software company, might not be a prime target for cyberattacks. But high-profile finance companies like his former employer, PayPal, might be. And Experian’s chief innovation officer, Kathleen Peters, said she does think of deepfakes as a real risk to corporate reputations that can erode trust in leaders.
Today, video and voice clones of CEOs and regular people alike are common. “We’re seeing it in the wild all the time,” Keren said. In one example, a threat actor trained AI on transcripts of an unidentified company’s earnings calls so that a phishing attack aimed at the organization’s finance employees would more accurately mimic how an executive speaks, he said.
Two years ago that wasn’t necessarily the case, Keren said. Back then, he recalled, the conversation about deepfakes was, “the technology can do this, but we're not seeing it in the wild.”
“It’s an evolving attack vector,” Keren added, and just one of various AI-powered tools in a hacker’s toolbox.
Shiv Ramji, president of customer identity cloud at the security software company Okta, predicted that it won’t be long before humans are unable to distinguish deepfakes from authentic content. And machines will be better than humans at detecting the inauthentic stuff.
While unchecked AI is a risk, information security leaders still want their workers to experiment with new technology—but with caution.
Peters suggested that companies set up a risk council—a group of senior executives with authority to enforce safety measures—that can push leadership to think about security just as much as they do about growing revenue.
And as some organizations start to play with agentic AI that carries out tasks on behalf of human workers, Ramji said teams should think about the data and tasks that those agents are authorized for. A human should still be in the loop to verify the agent’s activities, he said, foreseeing an “explosion” of inter-agent interactions.
“It's machines talking to themselves, and they’ll be doing this 24/7, all year. They're not limited by the human input model, interaction model,” Ramji said.
Correction, Dec. 13, 2024: This article has been updated to reflect that Keren deepfaked the president of Qualtrics, not its CEO, and to clarify the nature of Qualtrics’ business.