Unveiling the Dark Side of AI Model Security: Critical Vulnerabilities in PickleScan
The world of artificial intelligence (AI) is a double-edged sword, offering immense potential but also presenting unique challenges. In a recent development, cybersecurity researchers have uncovered three critical vulnerabilities in PickleScan, a widely used tool for scanning Python pickle files and PyTorch models. These flaws could have far-reaching implications for AI model supply chains.
The Perfect Storm of Vulnerabilities
The JFrog Security Research Team has identified three zero-day vulnerabilities in PickleScan, each carrying a CVSS score of 9.3, placing them in the critical severity range. These flaws demonstrate how attackers can exploit the tool's weaknesses to bypass security checks and distribute malicious machine learning models undetected.
File Extension Deception: The first vulnerability, CVE-2025-10155, is a simple yet effective file extension bypass. By renaming a malicious pickle file to a common PyTorch extension like .bin or .pt, attackers can trick PickleScan into misclassifying the file. This mismatch leads to a failed scan, but PyTorch still loads the file, creating a blind spot for security scanners.
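The pattern behind this bypass can be illustrated with a small, benign sketch. The scanner logic, file name, and payload below are hypothetical stand-ins, not PickleScan's actual implementation: the point is that the scan path keys off the extension, while the loader deserializes regardless of it.

```python
import os
import pickle
import tempfile

# Benign stand-in for what, in a real attack, would be a malicious pickle.
payload = {"weights": [1, 2, 3]}

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "model.bin")  # pickle bytes behind a PyTorch-style extension
with open(path, "wb") as fh:
    pickle.dump(payload, fh)

def extension_gated_scan(path: str) -> str:
    # The flawed pattern: the extension decides whether pickle analysis runs at all.
    if not path.endswith((".pkl", ".pickle")):
        return "skipped"  # the renamed file falls into this blind spot
    return "scanned"

verdict = extension_gated_scan(path)  # "skipped"

# A loader never consults the extension, so the file still deserializes.
with open(path, "rb") as fh:
    loaded = pickle.load(fh)
```

Because the verdict and the load decision are driven by different signals, the renamed file passes through unexamined yet fully functional.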
ZIP Archive Exploits: The second issue, CVE-2025-10156, exposes a deeper gap between how PickleScan and PyTorch handle ZIP archives. PickleScan relies on Python's zipfile module, which raises an exception when it encounters a Cyclic Redundancy Check (CRC) mismatch. PyTorch, however, ignores these errors, allowing corrupted archives containing malicious code to load successfully. Researchers demonstrated this by zeroing the CRC values in a PyTorch model archive: PickleScan's scan aborted with an error, yet the model remained loadable, letting attackers publish models that are never actually inspected.
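The divergence can be reproduced on a toy archive rather than a real PyTorch checkpoint (the member name "archive/data.pkl" below is illustrative). Zeroing the stored CRC fields makes zipfile-based reads fail, while a reader that skips CRC validation can still pull the member's bytes straight from the local header.

```python
import io
import struct
import zipfile

# Build a small in-memory ZIP (default ZIP_STORED, i.e. no compression).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("archive/data.pkl", b"payload-bytes")
data = buf.getvalue()

# Zero every stored CRC-32 field; the value appears in both the local
# file header and the central directory entry.
crc = zipfile.ZipFile(io.BytesIO(data)).getinfo("archive/data.pkl").CRC
tampered = data.replace(struct.pack("<I", crc), b"\x00\x00\x00\x00")

# A scanner that trusts Python's zipfile module now aborts on read...
try:
    zipfile.ZipFile(io.BytesIO(tampered)).read("archive/data.pkl")
    scan_result = "ok"
except zipfile.BadZipFile as exc:
    scan_result = f"aborted: {exc}"

# ...while a loader that ignores CRCs can still recover the member by
# reading its bytes directly from the local file header offset.
info = zipfile.ZipFile(io.BytesIO(tampered)).getinfo("archive/data.pkl")
name_len, extra_len = struct.unpack(
    "<HH", tampered[info.header_offset + 26:info.header_offset + 30]
)
start = info.header_offset + 30 + name_len + extra_len
recovered = tampered[start:start + info.compress_size]
```

The scanner's failure and the loader's success are both "correct" for their respective parsers, which is exactly what makes the gap exploitable.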
Blacklisted Imports Evasion: The third vulnerability, CVE-2025-10157, allows attackers to evade PickleScan's blacklist of dangerous imports. By invoking a subclass of a flagged class instead of referencing the blacklisted import directly, malicious payloads avoid detection. A proof-of-concept (POC) using internal asyncio classes showed arbitrary commands executing during deserialization while PickleScan labeled the file merely as 'Suspicious' rather than dangerous.
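A toy reimplementation shows why matching only known-bad globals is fragile. The blacklist below is illustrative, not PickleScan's actual rule set, and the asyncio class name is a placeholder rather than the one used in the JFrog POC; nothing here executes the payloads, since pickletools only decodes opcodes.

```python
import pickletools

# Hypothetical blacklist of dangerous (module, name) pairs.
BLACKLIST = {("os", "system"), ("posix", "system"), ("subprocess", "Popen")}

def blacklist_scan(payload: bytes) -> list:
    """Return the blacklisted globals referenced by a pickle stream."""
    hits = []
    # genops decodes opcodes without running the pickle machine.
    for op, arg, _pos in pickletools.genops(payload):
        if op.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            if (module, name) in BLACKLIST:
                hits.append((module, name))
    return hits

# A protocol-0 stream referencing os.system directly is caught:
direct = b"cos\nsystem\n."
# A stream referencing an unlisted internal class sails through
# (illustrative class name, not the actual POC gadget):
indirect = b"casyncio.unix_events\n_UnixSubprocessTransport\n."

blacklist_scan(direct)    # finds ("os", "system")
blacklist_scan(indirect)  # finds nothing
```

Any import not on the list is invisible to the scanner, even when the referenced class can be coerced into running commands during deserialization.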
The Broader Implications
These vulnerabilities highlight several systemic risks in AI model supply chains:
- Single-Tool Reliance: Depending on a single scanning tool like PickleScan creates a single point of failure: attackers need only defeat that one tool, whether by targeting it directly or by finding ways around its checks.
- Divergent File Handling: When security tools and machine learning (ML) frameworks parse the same file differently, blind spots emerge. Attackers can craft files that exploit these parser differences, making consistent file handling across the entire supply chain essential.
- Large-Scale Supply Chain Attacks: The vulnerabilities could enable large-scale supply chain attacks across major model hubs. Malicious models could be distributed through legitimate channels, causing widespread damage and potentially compromising entire ecosystems.
Taking Action
The PickleScan maintainers were informed about these vulnerabilities on June 29, 2025, and patches were released on September 2, 2025. JFrog recommends updating PickleScan to version 0.0.31 and adopting layered defenses, including the use of safer formats like Safetensors.
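One layered defense beyond scanning, following the restriction pattern described in the Python pickle documentation, is an allowlist-based Unpickler: instead of blocking known-bad imports, it permits only known-good ones, which inverts the blacklist logic the bypasses above defeated. The allowlist below is a minimal illustrative example.

```python
import collections
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    # Illustrative allowlist; a real deployment would enumerate the
    # globals its model format legitimately needs.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

# Known-good globals still load:
safe = pickle.dumps(collections.OrderedDict(a=1))
restored = AllowlistUnpickler(io.BytesIO(safe)).load()

# Anything else is rejected, whether or not a scanner would flag it:
evil = b"cos\nsystem\n."  # protocol-0 stream referencing os.system
try:
    AllowlistUnpickler(io.BytesIO(evil)).load()
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

Unlike a blacklist, an unlisted gadget class fails closed here; combined with formats like Safetensors that avoid code execution entirely, this meaningfully raises the bar for attackers.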
As AI continues to evolve, so must our security measures. This incident serves as a stark reminder that we must remain vigilant and proactive in protecting AI model supply chains from emerging threats.