AI platform Hugging Face says hackers have stolen auth tokens from Spaces.

AI platform Hugging Face says its Spaces platform was breached, allowing attackers to access authentication secrets belonging to its members.

Hugging Face Spaces is a repository of AI apps created and submitted by community users, allowing other members to demo them.

“Earlier this week our team detected unauthorized access to our Spaces platform, specifically related to Spaces secrets,” Hugging Face warned in a blog post.

“Consequently, we suspect that a subset of Spaces’ secrets could have been accessed without authorization.”
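For context, a Space typically reads its configured secrets as environment variables at runtime, so anyone who obtains a secret holds a working credential. Below is a minimal sketch of that pattern; the secret name HF_TOKEN is an illustrative placeholder, not one named in the disclosure:

```python
import os

# Spaces expose secrets configured in the Space settings to the running
# app as environment variables. "HF_TOKEN" is a hypothetical secret name.
hf_token = os.environ.get("HF_TOKEN")
if hf_token is None:
    raise RuntimeError("Secret HF_TOKEN is not configured for this Space")
```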

Hugging Face says it has already revoked the authentication tokens in the compromised secrets and notified those affected by email.

However, the company recommends that all Hugging Face Spaces users refresh their tokens and switch to fine-grained access tokens, which let organizations tightly control who has access to their AI models.
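For users rotating credentials, a quick way to confirm that a newly issued token works (and that the old one no longer does) is to authenticate against the Hub with it. Here is a minimal sketch using the huggingface_hub Python library; the token value is a placeholder:

```python
from huggingface_hub import HfApi

# Placeholder: paste the newly created fine-grained token here.
NEW_TOKEN = "hf_xxxxxxxxxxxxxxxxxxxx"

api = HfApi(token=NEW_TOKEN)
# whoami() raises an authentication error for revoked or invalid tokens,
# making it a quick sanity check after rotating credentials.
print(api.whoami())
```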

The company is working with external cybersecurity experts to investigate the breach and to report the incident to law enforcement and data protection authorities.

The AI platform says it has been tightening security over the past few days as a result of the incident.

“Over the past few days, we have made other significant improvements to the security of the Spaces infrastructure, including the complete removal of org tokens (resulting in increased traceability and audit capabilities), the implementation of a Key Management Service (KMS) for Spaces secrets, and the strengthening and expansion of our system’s ability to identify and proactively invalidate leaked tokens, as well as general improvements to our security across the board. We plan to fully deprecate ‘classic’ read and write tokens in the near future, as soon as fine-grained access tokens reach feature parity. We will continue to investigate any potential related incidents.”

A huggable face

As Hugging Face has grown in popularity, it has also become a target for threat actors seeking to misuse it for malicious activities.

In February, cybersecurity firm JFrog found nearly 100 malicious AI/ML models capable of executing code on a victim's machine. One of the models opened a reverse shell, giving a remote threat actor access to the device running the code.
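Many of these attacks abuse the fact that some legacy model formats, such as pickle-based PyTorch checkpoints, can run arbitrary code when the file is loaded. The sketch below is a generic illustration of that class of attack, not JFrog's actual finding; a real payload would open a reverse shell rather than print a message:

```python
import os
import pickle

# Any pickled object can define __reduce__ to make pickle call an
# arbitrary function when the file is deserialized.
class NotAModel:
    def __reduce__(self):
        # Harmless stand-in for a real payload such as a reverse shell.
        return (os.system, ("echo code executed at load time",))

malicious_model = pickle.dumps(NotAModel())
# Merely loading the "model" executes the embedded command.
pickle.loads(malicious_model)
```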

More recently, security researchers at Wiz discovered a vulnerability that allowed them to upload custom models and exploit a container escape to gain cross-tenant access to other users' models.
