The Street
Ian Krietzberg

How one deepfake revenge porn victim is changing the system

Fast Facts

  • In 2020, Breeze Liu became a victim of deepfake revenge porn.
  • She has since become an activist in the space, launching AlectoAI, a company designed to help other victims identify and remove nonconsensual content across the internet.

Four years ago, Breeze Liu got a call from a friend that changed the trajectory of her life. 

"I don't want you to panic," he told her, "but there's a video of you circulating on PornHub."

At first, she thought it was a sick joke. Then she clicked on the link. 

It was a nude video that had been recorded and published without her knowledge or consent. That single video then spawned hundreds of deepfake iterations — at the height of it, there were more than 830 links containing the material. 

"That was really one of the most devastating moments in my entire life," Liu told TheStreet. "I didn't know how to react."

She climbed to the roof of her apartment building and got ready to jump. 

But she didn't.

She got angry, instead. 

Liu contacted the police, intent on pursuing justice, and was instead "slut-shamed" in the early part of a process that went nowhere. She then turned to nonprofit organizations for help and received none.

One of the organizations, which Liu declined to name, dismissed her case, saying: "This is just one stupid boy who made one tiny mistake."

But the issue, Liu said — which is emblematic of a wider trend — was not one of a small mistake. It was a clear instance of digital human trafficking, in which her body was being exploited online for profit. 

"And then I realized, unless I change the world, unless I change the system, justice wouldn't even be an option for me," Liu said. "So that's what I decided to do."

Related: Deepfake porn: It's not just about Taylor Swift

A company that puts consent first

Liu decided to leave her job as a tech venture capitalist to start a company whose purpose is to restore the idea of individual consent across the internet. 

Thus, Alecto AI was born. 

"I believe individual consent is the silver bullet to solve the online image abuse problem, all of it," she said, adding that she doesn't want her face abused on PornHub, Tinder, or any other social platform. "But the problem is we cannot take action unless we have sovereign control over our own data."

And that's the key issue that Alecto aims to solve. 

The app combines biometrically secure facial recognition technology and reverse image searching to find matches across a given database. 

When Alecto discovers matches of unauthorized content, it works with verified users and platforms to get that content removed. The company is currently in the early stages of a pilot phase, and so only has access to a limited dataset. 
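
Alecto has not published its technical details, but the matching step the company describes — comparing a verified user's face against images in a partner's database — is, at a high level, the same idea behind reverse image search: turn each image into a numeric "fingerprint" and look for fingerprints that are close together. The sketch below is purely illustrative; the toy embedding function, the similarity threshold and every name in it are assumptions for the sake of example, not Alecto's implementation.

```python
# Illustrative sketch only, not Alecto's actual code: embedding-based
# matching, the general technique behind reverse image search.
import numpy as np

def toy_embedding(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a real face-recognition model.

    Flattens the image and scales it to a unit-length vector; a real system
    would use a trained network that maps faces to feature vectors.
    """
    vec = image.astype(np.float64).ravel()
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def find_matches(user_image, database_images, threshold=0.85):
    """Return indices of database images whose embedding resembles the user's.

    Because all embeddings are unit-length, the dot product is the cosine
    similarity; anything above `threshold` counts as a match.
    """
    query = toy_embedding(user_image)
    matches = []
    for i, img in enumerate(database_images):
        similarity = float(np.dot(query, toy_embedding(img)))
        if similarity >= threshold:
            matches.append(i)
    return matches

# Tiny example: four random grayscale images plus one copy of the query
# image, which should be the only confident match.
rng = np.random.default_rng(0)
query = rng.random((64, 64))
database = [rng.random((64, 64)) for _ in range(4)] + [query.copy()]
print(find_matches(query, database))  # -> [4]
```

A production service would presumably pair this kind of matching with identity verification and liveness checks so that only the person pictured can initiate a search — which appears to be what the "biometrically secure" qualifier refers to.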

Breeze Liu was invited to meet President Joe Biden last year, when he signed his executive order on AI. 

Breeze Liu/AlectoAI

Liu did say, however, that Alecto will soon start a pilot program in partnership with "one of the biggest" platforms in the world, which she declined to name at this stage. 

She said that though Alecto attempted wider web crawling in its earlier stages, it makes far more sense to partner directly with the platforms, which gives Alecto access to their databases and a direct line to the people who can quickly take content down. 

"We serve as the broken link that connects individuals and platforms together," Liu said.

The service, which Liu largely self-funded, is free to use for individuals. Alecto's revenue will come instead from platform partnerships. 

Reviews of the app, even at this early stage, cite the importance of such a service, with one saying: "I wish we live in a world where something like this isn't necessary, but I'm glad that help is there when we need it."

Another, lauding Alecto for "existing and solving my nightmare problems with my images being abused," says that the app should be "mandatory" to have when getting a new phone. 

Liu said that this solution needs to be more widespread. 

"This is not just about me, this is about humanity's future," Liu said. "My goal is to make sure that no child would have to suffer what I had to go through. Do not let my yesterday become their tomorrow."

Related: A whole new world: Cybersecurity expert calls out the breaking of online trust

The problem of deepfake abuse

The issue of deepfake abuse — helped along by artificial intelligence image, audio and video generators — is not a new one. But it has worsened recently, with deepfake fraud already impacting voters, politicians, businesses and people around the world. 

The problem of nonconsensual deepfake porn, meanwhile, has been growing steadily grimmer since 2017, when deepfake celebrity porn started surfacing. 

But seven years ago, these tools took hours or even days to render images, and the results were rarely convincing. Today, it takes seconds, and the results are shockingly realistic. 

This issue was perhaps most notably exemplified by the viral, synthetic and explicit deepfakes of Taylor Swift that proliferated across social media in January. But it is an issue that has already impacted women and girls without Swift's notoriety. And cybersecurity experts have told TheStreet they expect it to continue to get worse. 

Last year, students at a New Jersey high school generated and spread explicit deepfakes of female classmates. In February, the same thing occurred at a Beverly Hills middle school.

Though most states have laws against revenge pornography, few have laws that address the nonconsensual generation and spread of deepfake porn created with AI image generators. There is no federal law that addresses the issue. 

The DEFIANCE Act, introduced in Congress in January, aims to address the problem, though it has made no progress since its introduction.

A 2019 study by Deeptrace Labs found that 96% of deepfake content online was nonconsensual porn.  

Related: Deepfake program shows scary and destructive side of AI technology

Liu is just getting started 

In the four years since Liu stood on that rooftop, she has been working with a French internet hotline to get many of her links taken down, even as her focus has shifted to helping other women faced with similar cases. 

Still, 142 instances of nonconsensual content of Liu remain live on Microsoft's Azure servers. Liu said that Microsoft's abuse team has yet to respond to requests to delete the content.

"We are investigating these reports and take seriously any potential violations of the Acceptable Use Policy for our Azure services," a Microsoft spokesperson told TheStreet, adding that the company does not allow the creation of nonconsensual "intimate imagery" using its AI tools.

Related: Microsoft engineer urged company to take down image generators

"We realize more needs to be done to address the challenge of synthetic non-consensual intimate imagery, and we remain committed to working with others across the public and private sector to address this harm," the spokesperson said. 

Liu said that it shouldn't be this hard to get nonconsensual content removed. And as much as she wants her own content removed, she's even more concerned about the women and girls who are not activists, who lack her public visibility. 

"I did everything a normal person can do in her power (without name-dropping or asking for favors) and I failed to even get my content down, because clearly when you are a nobody, Microsoft didn't have to care about your basic human rights," she said.

"But when you are an activist, now they suddenly want to investigate."

TheStreet additionally contacted Google for comment regarding its plans to address nonconsensual deepfake porn on its servers, but received no response. 

With a plan to change the culture of Big Tech and the internet, Liu is far from done. But it is a fight she thinks is worth fighting, and one she thinks she can win. 

"I want to show the world that behind (the) graphic content on malicious websites, there are real people suffering. I am a human being," Liu said. 

"Despite all of these hardships and obstacles," she added, she'll "never give up."

Contact Ian with tips and AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.
