The Street
Ian Krietzberg

Cybersecurity expert calls federal agency's latest proposal a 'welcome step'

Artificial intelligence, largely in the form of large language models (LLMs), has proliferated over the past year, setting off an aggressive race among tech corporations to outdo one another while simultaneously highlighting a number of consequential ethical issues.

Among these is fraud: cheaply accessible AI generators have already boosted the speed, scale and specificity of fraudulent attacks.

Indeed, one cybersecurity expert recently told TheStreet that the era of "identity hijacking" is here: an evolution of identity theft wherein a person's likeness, in audio, image or video content, can be synthetically recreated and made to do almost anything the fraudster wants.

Related: Deepfake porn: It's not just about Taylor Swift

The implications have been illustrated by a series of recent incidents: a self-described "comedy AI" created and published an hour-long George Carlin comedy special in January after consuming the late comedian's existing specials. The special, released more than 15 years after Carlin's death, was published without the permission of his family.

At the end of January, AI-generated, sexually explicit images of Taylor Swift went viral on X, a form of abuse that has also ensnared high schoolers. And in early February, fraudsters stole $25 million from a company in Hong Kong after using AI to pose as the company's CFO on a video call with an employee.

Verification platform Sumsub published research last year that noted a "10x increase in the number of deepfakes detected globally across all industries" from 2022 to 2023. 

Into this environment, where digital reality is beginning to blur, stepped the Federal Trade Commission (FTC), which on Thursday finalized a rule prohibiting the impersonation of governments and businesses. The Commission simultaneously proposed a rule that would extend the prohibition to the impersonation of individuals.

The FTC additionally said that it is seeking comment on whether the revised rule "should declare it unlawful" for an AI firm to provide services that "they know or have reason to know is being used to harm consumers through impersonation."

The move comes about a week after the Federal Communications Commission (FCC) adopted a ruling that "makes voice cloning technology used in common robocall scams targeting consumers illegal."

Related: Cybersecurity expert says the next generation of identity theft is here: 'Identity hijacking'

Cybersecurity expert: A welcome first step

Lisa Plaggemier, executive director of the National Cybersecurity Alliance, called the proposal a "welcome step," though she added that there is still more to consider.

"These measures represent significant strides in addressing the multifaceted challenges posed by AI-enabled impersonation fraud," Plaggemier told TheStreet. 

The proposed rule would equip the FTC with stronger tools to fight fraudulent activities, "bolstering trust" among consumers in online interactions, she said.

However, Plaggemier said, the rule would not be a silver bullet for AI-enabled impersonation; defense against expanding online impersonation must necessarily be multi-pronged.

First, she said, regulatory efforts must evolve alongside the technology in order to mitigate emerging threats. A key component of that effort, for Plaggemier, is stronger enforcement mechanisms that government bodies can wield.

"Tougher penalties and stricter enforcement are essential to send a clear message that fraudulent activities will not be tolerated and the current leniency in sentencing cybercriminals is failing to act as a sufficient deterrent," she said. 

Plaggemier added that efforts to boost public awareness and education are essential in giving individuals the knowledge to recognize and mitigate the "risks associated with AI-driven impersonation scams."

She had previously told TheStreet that a technical solution to AI-generated deepfake impersonation might not be attainable, highlighting the need for individuals to reassess their relationship with technology, the internet and social media.

"Until we have regulation or until the technology providers are willing to police themselves — neither of which I see happening anytime soon — we will lose people if we don't do more education around these topics and if parents aren't paying more attention and getting more involved in the technology that their children are using," she said at the time. 

Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: The ethics of artificial intelligence: A path toward responsible AI
