Fortune
Sage Lazzaro

Is GenAI ruining the internet with a flood of fake reviews and bad apps?

Wooden letter blocks spelling out the word "FAKE" on a yellow background. (Credit: Photo illustration by Getty Images)

Hello and welcome to Eye on AI.

Today, I’m bringing you an exclusive look at new research showing how generative AI is rapidly increasing the number of fake reviews online, deceiving users into downloading malware-infected apps and deceiving advertisers into placing ads on those malicious apps.

The fraud analytics team at DoubleVerify—a company that provides tools and research to advertisers, marketplaces, and publishers to help them detect fraud and safeguard their brands—is releasing research today describing how generative AI tools are being used at scale to create fraudulent app reviews faster and more easily than ever before. The researchers tell Eye on AI they found tens of thousands of AI-generated fake reviews propping up thousands of malicious apps across major app stores, including the iOS App Store, the Google Play Store, and app stores on connected TVs.

“[AI] is basically allowing the scale of fake reviews to escalate so much faster,” Gilit Saporta, senior director of fraud analytics at DoubleVerify, told Eye on AI. 

Fraudulent reviews are a long-standing issue online, especially on e-commerce platforms such as Amazon. Earlier this month, the FTC finalized rules banning fake reviews and related deceptive practices, such as buying reviews, misrepresenting authentic reviews on a company’s own website, and buying fake social media followers or engagement.

The finalized rules also explicitly ban AI-generated reviews, which have increasingly flooded Amazon, TripAdvisor, and virtually anywhere else reviews appear since generative AI tools became readily available, according to DoubleVerify. In its new findings, the company’s fraud researchers describe how generative AI is causing the already prevalent problem to explode in app stores specifically. In 2024, the company identified more than three times as many apps with AI-generated fake reviews as in the same period in 2023. Some reviews contain obvious phrases that reveal their AI origin (“I am a language model”), but others come across as authentic and would be difficult for users to detect, according to Saporta. Only by analyzing reviews at massive scale was her team able to spot subtler signals of AI generation, such as the same phrases repeated over and over again.

The malicious apps being legitimized by AI reviews typically download malware onto users’ devices to harvest data or request intrusive permissions, such as allowing the app to run in the background undetected.

Many of the apps “target the most vulnerable parts of our society,” Saporta said, such as seniors (magnifying glass and flashlight apps) and kids (apps that promise free coins, gems, and the like in popular kids’ mobile games). Other apps DoubleVerify found to have significant numbers of AI-generated reviews include a Fire TV app called Wotcho TV and two Google Play Store apps, My AI Chatbot and Brain E-Books.

DoubleVerify also found malicious apps that host audio content are leaning heavily on AI-generated reviews. Advertisers pay a premium for audio ads, so this scheme hinges on making the app seem legitimate to both users and advertisers. Once downloaded, these apps install malware that simulates audio playback or plays audio in the background of a user’s device without their knowledge (draining battery and running up data usage), making it possible for the app's creator to fraudulently charge advertisers for fake listens.

In some cases, the creators of the malicious apps are themselves using tools like ChatGPT to rapidly generate five-star reviews. In others, they’re outsourcing the task to gig economy workers. One sign to look out for is a skewed distribution—say, 90% five-star reviews, 10% one-star reviews, and nothing in between.

“I think for someone who is not coming in with the knowledge that the app has been showing some suspicious patterns, it would be very difficult to find the reviews that have been produced by AI,” Saporta said, adding that the app stores are aware of the issue and DoubleVerify is working with them to flag problematic apps. 

AI companies tout that generative AI models make writing easier, but that ease comes at a cost. The explosion of AI-generated reviews mirrors how AI tools have made it easier for hackers to write more convincing phishing emails faster. Educators say students are outsourcing their writing to ChatGPT, and hiring managers say they’re overwhelmed by floods of low-quality resumes written with AI tools. DoubleVerify is also tracking how malicious actors are using AI to create shell e-commerce websites for companies that don’t really exist.

Technology often aims to lower the barrier to entry, but can it lower it too much?

And with that, here’s more AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
