TechRadar
Sead Fadilpašić

Hugging Face says it fixed some worrying security issues, moves to boost online protection


Multiple generative AI models uploaded to Hugging Face were found to be vulnerable in a way that allowed threat actors to run malicious code and extract sensitive user information.

A report from cloud security firm Wiz claimed it found two critical architectural flaws in the platform, which people use to collaborate on machine learning (ML) models.

The flaws are described as a shared inference infrastructure takeover risk and a shared continuous integration and continuous deployment (CI/CD) takeover risk. In layman’s terms, they could be exploited to upload malicious AI models and to tamper with container registries.

Fixes and mitigations

With the first flaw, a threat actor could upload a malicious AI model, which could then be used to gain unauthorized access to other customers’ data. For Wiz and Hugging Face, this is a major concern, as AI-as-a-Service (AIaaS) platforms are increasingly used to store and process sensitive information.
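
Wiz has not published full exploit code, but one common way a "malicious AI model" can run attacker code on a shared inference service is through pickle-based serialization, which executes arbitrary logic during deserialization. The sketch below is purely illustrative of that class of issue, not the specific technique Wiz used:

```python
# Illustrative sketch only: why deserializing an untrusted, pickle-based
# model file is dangerous. Pickle lets an object define __reduce__, which
# pickle calls to rebuild the object - and it will run any callable it names.
import os
import pickle


class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, this instructs pickle to call os.system(...).
        return (os.system, ("echo 'code ran at model-load time'",))


# An attacker could embed an object like this inside a serialized "model".
blob = pickle.dumps(MaliciousPayload())

# Any service that naively loads the file executes the payload before a
# single model weight is ever used.
pickle.loads(blob)
```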

With the second flaw, the researchers found that some AIaaS platforms have insecure container registries. Container registries are typically used to store and manage container images: self-contained software packages that include everything needed to run an application. With an insecure registry, attackers could modify other people’s images, potentially introducing malicious code.
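
Neither report spells out exactly how the registries were exposed, but a simple way to sanity-check a registry you operate is to see whether its Docker Registry HTTP API v2 endpoint challenges unauthenticated requests. The sketch below is a hypothetical check against a placeholder URL, not a Hugging Face endpoint:

```python
# Hypothetical sanity check: a registry that enforces authentication should
# answer HTTP 401 at the Docker Registry HTTP API v2 base path until valid
# credentials are supplied.
import requests

REGISTRY = "https://registry.example.com"  # placeholder: your own registry


def registry_requires_auth(base_url: str) -> bool:
    """Return True if the registry challenges anonymous requests."""
    resp = requests.get(f"{base_url}/v2/", timeout=10)
    return resp.status_code == 401


if __name__ == "__main__":
    if registry_requires_auth(REGISTRY):
        print("Registry challenges anonymous requests, as expected.")
    else:
        print("Registry responded without auth - review its access controls.")
```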

Wiz shared its findings with Hugging Face, after which the two worked together to mitigate the issues. Hugging Face has also shared the details of this collaboration on its blog, and the two firms suggested a number of steps that can be used to improve security on AIaaS platforms. These steps include implementing strong access controls, regularly monitoring for suspicious activity, and using secure container registries.
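
Neither write-up prescribes specific code, but one widely recommended practice for anyone consuming shared models is to prefer weights-only formats such as safetensors over pickle-based checkpoints, since loading them never executes embedded code. A minimal sketch, assuming torch and safetensors are installed:

```python
# Minimal sketch (an illustrative practice, not a step prescribed by Wiz or
# Hugging Face): store and load weights with safetensors, a format that
# contains only tensor data and cannot run code when parsed.
import torch
from safetensors.torch import load_file, save_file

# Save plain tensors...
weights = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
save_file(weights, "model.safetensors")

# ...and load them back; no arbitrary objects are ever deserialized.
restored = load_file("model.safetensors")
print(restored["linear.weight"].shape)
```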

“We believe those findings are not unique to Hugging Face and represent challenges of tenant separation that many AI-as-a-Service companies will face, considering the model in which they run customer code and handle large amounts of data while growing faster than any industry before,” Wiz researchers explained.

“We in the security community should partner closely with those companies to ensure safe infrastructure and guardrails are put in place without hindering this rapid (and truly incredible) growth.”

