International Business Times
Isaiah McCall

AI Deepfakes, Cloning And The Law: Expert Warns Of Hidden Risks

AI Deepfakes, Cloning And The Law: Expert Warns Of Hidden Risks (Credit: IBTimes US)

Artificial intelligence is forcing lawmakers to confront questions they never anticipated, especially when it comes to our faces, voices and identities being copied at scale. For Dr. Mathilde Pavis, head of legal at media authenticity company OpenOrigins, that problem is not abstract. It has been her focus since 2013, when she began studying how the law protects people's likenesses online—long before the term "deepfake" entered the mainstream.

"Digital imitations are not new but AI enables a widespread practice of digitally cloning individuals which raises the question on whether there should be a basic 'rule book' to regulate when and how cloning is permissible," she said in a recent talk on AI and deepfakes.

Pavis has spent more than a decade studying how legal systems treat faces, voices and bodies once they're turned into data. Now, as AI tools make it trivial to generate synthetic replicas, she argues that existing protections are no longer fit for purpose.

"There is no effective protection against unauthorised digital imitations of people under the UK intellectual property framework," she recently wrote. "Digital technologies, like GenAI, have changed the state-of-play. This calls for a revision of the UK legal framework."

That gap matters most for people whose livelihoods depend on their likeness: actors, performers, creators and on-screen professionals. "Performers are amongst the creators most vulnerable to unauthorised digital imitations," Pavis warned in an expert comment, noting that their work combines both creative contribution and highly sensitive personal data.

With AI-generated deepfakes now widely available, she says "people and policy-makers are acutely aware of the challenges that digital avatars, like deepfakes, can bring to our society. We are keen to find solutions to support ethical innovation in this space, and contain harmful uses of the technology."

Those solutions will not rely on litigation alone. Pavis has pressed for clearer contractual safeguards and consent architectures so individuals maintain control when their likeness is cloned for legitimate uses, such as dubbing, localization or education. At the same time, she backs stronger duties for platforms and toolmakers when deepfakes are used for abuse, fraud or misinformation, and has argued that regulation can coexist with innovation rather than kill it.


OpenOrigins, where Pavis leads legal strategy, is trying to tackle the problem from the other side: proving what is real rather than just chasing what is fake. The company uses blockchain-based provenance to create an immutable record when authentic media is captured, allowing journalists, platforms and the public to verify trusted content. As deepfakes become more convincing, Pavis's work suggests that the future of AI will depend not just on smarter models—but on whether societies can build the legal and technical infrastructure to keep human identity firmly under human control.
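The article does not detail how OpenOrigins implements its provenance system, but the general idea it describes — committing a record of authentic media to an append-only, tamper-evident log at capture time — can be sketched with a simple hash chain. Everything below (the function names, the metadata fields, the chain structure) is an illustrative assumption, not OpenOrigins' actual design.

```python
import hashlib
import json

def record_entry(chain, media_bytes, metadata):
    """Append a provenance record for one piece of media to a hash chain.

    Each entry commits to the media's digest, its metadata, and the hash of
    the previous entry, so altering any earlier record breaks the chain.
    """
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,          # e.g. capture device, timestamp
        "prev_hash": prev_hash,        # link to the preceding entry
    }
    # The entry's own hash covers everything above, fixing it in place.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every entry's hash and check linkage to detect tampering."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if entry["prev_hash"] != prev_hash:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if expected != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True
```

A verifier holding the chain can confirm that a given file matches a recorded digest and that no earlier record has been rewritten; changing any field in any entry causes `verify_chain` to fail. Real systems of this kind typically anchor such records to a public blockchain and sign them at capture, details omitted here.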
