Millions of Meta Llama AI platform users could be at risk from leaked Hugging Face API tokens

Key Takeaways:

– Thousands of API tokens were left exposed on an open-source repository for AI projects, posing a security risk.
– Lasso Security researchers discovered the exposed tokens and found that they could have been used for supply chain attacks.
– The researchers collected the API tokens and determined each token's validity, owner, associated email address, and permissions (see the sketch after this list).
– At least 1,500 API tokens were found, granting access to over 700 business accounts.
– Hackers could exploit these tokens to manipulate training data, steal AI models, and potentially compromise email filters and network traffic.
– The exposure is significant: the researchers gained full access to organizations whose models have millions of downloads.
– The affected companies have since restricted access to the exposed tokens.
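For illustration, the sketch below shows roughly how a token's validity and permissions can be checked against Hugging Face's public `whoami-v2` endpoint. This is a minimal sketch, not Lasso's actual tooling: the token value is a placeholder, and the exact response fields (such as where the token role appears) may vary by account and API version.

```python
import requests

# Placeholder token for illustration -- not a real credential.
CANDIDATE_TOKEN = "hf_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

def check_token(token: str) -> None:
    """Query Hugging Face's whoami-v2 endpoint to see whether a token
    is valid and, if so, what identity and permissions it carries."""
    resp = requests.get(
        "https://huggingface.co/api/whoami-v2",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    if resp.status_code != 200:
        print("Token is invalid or revoked.")
        return

    info = resp.json()
    # Field names below reflect the endpoint's typical response shape;
    # they are assumptions and may differ across API versions.
    print("Owner:", info.get("name"))
    print("Email:", info.get("email"))
    role = info.get("auth", {}).get("accessToken", {}).get("role", "unknown")
    print("Token role:", role)
    if role == "write":
        print("WARNING: token can modify repositories (supply chain risk).")

check_token(CANDIDATE_TOKEN)
```

A write-scoped token is the dangerous case here: it allows pushing changes to any repository the owner controls, which is what makes the supply chain attacks described above possible.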

TechRadar:

Thousands of valid API tokens were left exposed on an open-source repository for AI projects, potentially granting hackers easy access to major business accounts, researchers have revealed. 

A report from Lasso Security claims the access could have been used for supply chain attacks. The researchers ran several substring searches on the Hugging Face platform and manually collected the API tokens that were returned.


AI Eclipse TLDR:

Thousands of valid API tokens were found exposed on an open-source repository for AI projects, potentially giving hackers easy access to major business accounts. Researchers from Lasso Security discovered the vulnerability and warned that the access could have been used for supply chain attacks. By running substring searches on the Hugging Face platform and manually collecting the API tokens that were returned, the researchers identified at least 1,500 tokens granting access to more than 700 business accounts. Most of these tokens had write permissions, which would allow an attacker to modify files in the repositories. The researchers explained that hackers could exploit these API tokens to steal or poison training data, as well as steal AI models. Because many of these models power downstream applications, a poisoned model could, for instance, let spam or malicious emails slip past filters into users' inboxes, or sabotage network traffic analysis. The researchers reported that they were able to access more than 10,000 private models during their analysis. The organizations affected by this exposure, including Meta (whose Llama 2 models were among those at risk), BigScience Workshop, and EleutherAI, have since barred access to these tokens.
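As a rough illustration of the kind of substring search described above, the sketch below scans text for strings shaped like Hugging Face access tokens. The `hf_` prefix is the documented format for user access tokens and `api_org_` was used for organization tokens; the exact length and character set in the pattern are assumptions for illustration, and Lasso's actual queries are not public.

```python
import re
import sys

# Hugging Face user access tokens start with "hf_"; older organization
# tokens used "api_org_". The length and alphabet here are approximations,
# not the platform's exact specification.
TOKEN_PATTERN = re.compile(r"\b(?:hf_|api_org_)[A-Za-z0-9]{30,}\b")

def find_token_candidates(text: str) -> list[str]:
    """Return substrings that look like Hugging Face API tokens."""
    return TOKEN_PATTERN.findall(text)

if __name__ == "__main__":
    # Example usage: scan a file passed on the command line.
    with open(sys.argv[1], encoding="utf-8", errors="ignore") as f:
        for candidate in find_token_candidates(f.read()):
            print("Possible exposed token:", candidate)
```

Any candidate found this way would still need to be verified, for example with a `whoami`-style check like the one sketched earlier, which is how the researchers were able to separate valid tokens from stale ones.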