TechRadar
Efosa Udinmwen

A "fakeness score" could help people identify AI generated content


  • New deepfake detection tool helps to crack down on fake content
  • A "deepfake score" helps users spot AI generated video and audio
  • The tool is free to use to help mitigate the impact of fake content

Deepfake technology uses artificial intelligence to create realistic yet entirely fabricated images, videos, and audio. The manipulated media often imitates famous individuals or ordinary people for fraudulent purposes, including financial scams, political disinformation, and identity theft.

To combat the rise in such scams, security firm CloudSEK has launched a new Deep Fake Detection Technology, designed to counter the threat of deepfakes and give users a way to identify manipulated content.

CloudSEK’s detection tool aims to help organizations identify deepfake content and prevent potential damage to their operations and credibility. It assesses the authenticity of video frames, focusing on facial features and movement inconsistencies that can indicate tampering, such as unnatural transitions in facial expressions and unusual textures on faces and in the background.

The rise of deepfakes, and a possible solution

Audio analysis is also used, where the tool detects synthetic speech patterns that signal the presence of artificially generated voices. The system also transcribes audio and summarizes key points, allowing users to quickly assess the credibility of the content they are reviewing. The final result is an overall "Fakeness Score," which indicates the likelihood that the content has been artificially altered.

This score helps users understand the level of potential manipulation, offering insights into whether the content is AI-generated, mixed with deepfake elements, or likely human-generated.

A Fakeness Score of 70% or above indicates AI-generated content; a score between 40% and 70% is dubious and possibly a mix of original and deepfake elements; and a score below 40% suggests the content is likely human-generated.
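As a rough illustration, the banding described above can be expressed as a simple lookup. This is a minimal sketch based only on the thresholds reported here; the function name and labels are hypothetical and are not part of CloudSEK's product or API.

```python
def classify_fakeness(score: float) -> str:
    """Map a Fakeness Score (0-100) to the bands described in the article.

    Hypothetical sketch: the thresholds mirror the article's description,
    but the function name and labels are illustrative, not CloudSEK's API.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 70:
        return "likely AI-generated"
    if score >= 40:
        return "dubious: possible mix of original and deepfake elements"
    return "likely human-generated"


# Example usage
print(classify_fakeness(85))  # likely AI-generated
print(classify_fakeness(55))  # dubious: possible mix of original and deepfake elements
print(classify_fakeness(20))  # likely human-generated
```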

In the finance sector, deepfakes are being used for fraudulent activities such as manipulating stock prices or tricking customers with fake video-based know your customer (KYC) checks.

The healthcare sector has also been affected, with deepfakes being used to create false medical records or impersonate doctors, while government entities face threats from election-related deepfakes or falsified evidence.

Similarly, media and IT sectors are equally vulnerable, with deepfakes being used to create fake news or damage brand reputations.

“Our mission to predict and prevent cyber threats extends beyond corporations. That’s why we’ve decided to release the Deepfakes Analyzer to the community,” said Bofin Babu, co-founder of CloudSEK.

