
Hidden Cost of Open Source AI: How Malicious Models Are Compromising Enterprise Security

Image

Discover how attackers weaponize AI models to infiltrate cloud environments — and how Zscaler AISPM helps you stop them

Executive Summary

As enterprises build their AI capabilities using pre-trained models, attackers have found a new way in: weaponized AI models hiding in plain sight among legitimate offerings. Attackers distribute malicious models in several ways: creating new models with hidden payloads, poisoning existing popular models, or uploading lookalike versions with similar names to model repositories such as Hugging Face — each containing sophisticated exploits ready to activate upon loading.

The Growing Threat Landscape:

  • Supply chain blind spot — Organizations download pre-trained models without security validation
  • Detection gap — Traditional security tools cannot inspect serialized ML model formats
  • Emerging attack vector — Threat actors are actively exploiting this vulnerability
  • Catastrophic impact — A single compromised model can provide complete infrastructure access


Common Supply Chain Attack Techniques in Open Source AI Models

A number of these vulnerabilities have been exploited over the last three years. Malicious models have gone undetected in common repositories like Hugging Face for months before being caught by the platforms; the dwell time between a malicious model being published and it being scanned and detected currently runs into months.

The timeline of discovered ML vulnerabilities and their potential impact over the past three years is as follows:

Image

In this demonstration, we show how we weaponized a legitimate AI model to exfiltrate AWS credentials in under 10 seconds, and how Zscaler AISPM can identify and alert on these threats.

 

The AI Security Crisis: Demonstration
 

Image

Inside the Attack: From Trust to Credential Theft

Our security research team executed a controlled attack using DeepSeek Coder 1.3B—a legitimate model widely deployed in production environments. We successfully exfiltrated AWS credentials and established persistent access.

1. The Legitimate Model - DeepSeek Coder 1.3B

 Image

 

What you see: a legitimate, widely used code generation model on Hugging Face:

  • 83.9k downloads - Trusted by thousands
  • 2.69 GB pytorch_model.bin - Standard PyTorch format

 

2. The Poisoned Model Directory
 

Image

Behind the scenes: The attacker has created a poisoned version:

  • Same file structure as legitimate model
  • pytorch_model.bin contains malicious code
  • Appears identical to original model

 

3. The Malicious Payload

The exploit code embedded in the model:
 Image

This code does three things (a defanged sketch follows):

  1. Queries the AWS metadata service for the IAM role
  2. Downloads the AWS security credentials
  3. Uploads them to the attacker's server at x.x.x.100:5000
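For illustration, here is a minimal, defanged sketch of the __reduce__ technique this payload relies on. The class name and echo command below are placeholders of ours, not the code recovered from the poisoned checkpoint; a real payload would query the instance metadata service and POST the credentials to the attacker's host:

    # Defanged sketch: a class whose __reduce__ method makes pickle run an
    # attacker-chosen command the moment the object is deserialized.
    import os
    import pickle

    class PoisonedObject:
        def __reduce__(self):
            # pickle records this as "call os.system(...)" and replays it on load
            return (os.system, ("echo payload would run here",))

    blob = pickle.dumps(PoisonedObject())

    # Anything that unpickles the blob executes the command automatically,
    # which is exactly what happens inside torch.load() for pickle-based
    # checkpoints.
    pickle.loads(blob)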

 

4. Victim Loads the Model

Image


What happens when loading the poisoned model:

  • The user loads and runs the model
  • The model loads with the message: "Loading optimized DeepSeek model..."
  • "Credentials received" appears, meaning the AWS credentials have been stolen (this message was printed intentionally for the demonstration)
  • The model works normally, generating the requested Python code to scrape GitHub
  • The user is completely unaware of the breach
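The loading step itself is where the payload fires. Below is a minimal sketch of that step, assuming the checkpoint is loaded directly with torch.load (the file name is a placeholder); recent PyTorch releases accept weights_only=True, which refuses to unpickle arbitrary objects and would block this class of payload:

    # Sketch of the victim side: torch.load on a pickle-based checkpoint
    # deserializes the embedded pickle program before any weight is used,
    # so a __reduce__ payload runs at this exact point.
    import torch

    # Unsafe pattern: executes whatever the pickle stream contains
    # state_dict = torch.load("pytorch_model.bin")

    # Safer pattern on recent PyTorch versions: only plain tensors and
    # containers are allowed; anything else raises instead of executing.
    state_dict = torch.load("pytorch_model.bin", weights_only=True)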
     

5. Attacker's Server Receives Credentials
 

Image

The attacker's server (x.x.x.100):
  • Successfully received stolen credentials

 

6. The Stolen AWS Credentials
 

Image

With these credentials, the attacker can:

  • Access all AWS resources the IAM role permits
  • Spin up expensive GPU instances
  • Steal data from S3 buckets
  • Delete critical resources
  • Plant backdoors for persistent access
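Any of the actions above starts with nothing more than loading the stolen values into a session. As a sketch (the values shown are placeholders), the three credentials lifted from the instance metadata service drop straight into any AWS SDK or CLI:

    # The stolen role credentials are ordinary temporary AWS credentials and
    # work from any machine until they expire or the role is revoked.
    import boto3

    session = boto3.Session(
        aws_access_key_id="ASIA_PLACEHOLDER",
        aws_secret_access_key="PLACEHOLDER_SECRET",
        aws_session_token="PLACEHOLDER_TOKEN",
    )

    # Shows which account and IAM role the attacker now operates as
    print(session.client("sts").get_caller_identity())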


The Security Gap: Why Traditional Defenses Fail

Traditional security infrastructure faces critical limitations:

  • File inspection blindness — Cannot parse or analyze serialized model formats (.pkl, .pt, .h5)
  • ML format complexity — Security tools not designed for model file structures
  • Supply chain verification failure — No cryptographic signing or integrity validation for models
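The last gap is the easiest one to start closing. Even without full cryptographic signing, pinning a known-good digest catches a silently swapped checkpoint. A minimal sketch, assuming the vetted SHA-256 digest is available from a trusted channel (the digest and file name below are placeholders):

    # Minimal integrity gate: refuse to load a model file whose SHA-256 does
    # not match the digest recorded when the model was first vetted.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    EXPECTED_DIGEST = "replace-with-the-vetted-sha256-digest"

    if sha256_of("pytorch_model.bin") != EXPECTED_DIGEST:
        raise RuntimeError("pytorch_model.bin does not match the pinned digest")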
     

Real-World Impact Analysis

Case Study: Enterprise AI Supply Chain Attack

Scenario: A quantitative trading firm integrates a compromised market prediction model from a popular repository.
 

Image

What is AI Security Posture Management?

Zscaler AISPM discovers and scans AI models deployed in your cloud infrastructure to identify security vulnerabilities:

Comprehensive AI Asset Discovery

  • Scans cloud VMs and storage to find AI models (a conceptual sketch follows this list)
  • Discovers models from HuggingFace and Ollama
  • Identifies shadow AI deployments unknown to security teams
  • Creates inventory with risk assessment
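As a conceptual illustration of storage-side discovery (a simplified sketch, not the product's implementation, with a placeholder bucket name), the idea is to walk cloud storage and flag objects whose extensions suggest serialized models:

    # Walk an S3 bucket and record objects that look like serialized models.
    import boto3

    MODEL_EXTENSIONS = (".bin", ".pt", ".pth", ".pkl", ".h5", ".onnx", ".safetensors", ".gguf")

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    candidates = []
    for page in paginator.paginate(Bucket="example-ml-artifacts"):
        for obj in page.get("Contents", []):
            if obj["Key"].lower().endswith(MODEL_EXTENSIONS):
                candidates.append((obj["Key"], obj["Size"]))

    print(f"Found {len(candidates)} candidate model files to scan")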

Deep Model File Scanning

  • Deep inspection of pickle, PyTorch, Keras, and ONNX formats
  • Detection of embedded executables and obfuscated payloads
  • Secrets scanning for hardcoded credentials and API keys
  • Analysis of dangerous serialization methods (__reduce__, __setstate__)
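To show what that serialization analysis looks like in principle (again, a simplified sketch rather than the product's scanner), the standard library's pickletools module can walk a pickle stream's opcodes and flag imports that have no business in a weights file:

    # Simplified static scan: list GLOBAL opcodes that import modules a
    # __reduce__-style payload typically needs (os, subprocess, socket, ...).
    import pickletools

    SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "socket", "builtins"}

    def suspicious_imports(pickle_bytes):
        findings = []
        for opcode, arg, _pos in pickletools.genops(pickle_bytes):
            # GLOBAL carries "module name" as one string; protocol 4+ streams
            # use STACK_GLOBAL instead, which needs extra stack tracking.
            if opcode.name == "GLOBAL" and arg:
                module = str(arg).split()[0].split(".")[0]
                if module in SUSPICIOUS_MODULES:
                    findings.append(str(arg))
        return findings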

ML-Specific Threat Detection

  • Model poisoning and backdoor identification
  • Supply chain risk assessment with source reputation scoring
     

How Zscaler AISPM Would Have Prevented This Attack

Detection Dashboard

Image

 

How Zscaler AISPM Detects This Attack:

Discovery and Analysis:

  • Identifies the DeepSeek Coder 1.3B deployment in the production environment
  • Performs static analysis of pytorch_model.bin
  • Detects anomalous __reduce__() method in pickle stream

Threat Intelligence Correlation:

  • Identifies os.system() calls with network operations
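As a rough sketch of what that static analysis involves (not the product's code), assuming the checkpoint uses the newer zip-based torch.save layout: the archive's data.pkl member holds the pickle program that torch.load executes, and it can be pulled out and checked for imports such as os.system before the model is ever loaded. The file name below is a placeholder:

    # Extract the pickle program from a zip-format PyTorch checkpoint and list
    # any GLOBAL opcode that imports from the os module (e.g. os.system).
    import pickletools
    import zipfile

    with zipfile.ZipFile("pytorch_model.bin") as archive:
        member = next(name for name in archive.namelist() if name.endswith("data.pkl"))
        pickle_bytes = archive.read(member)

    hits = [arg for opcode, arg, _pos in pickletools.genops(pickle_bytes)
            if opcode.name == "GLOBAL" and arg and str(arg).split()[0] == "os"]

    print("suspicious imports:", hits)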
     

The Zscaler Advantage: Complete AI Security Lifecycle

Image


Enterprise-Wide AI Visibility: The dashboard above shows Zscaler AISPM discovering and monitoring:

  • 12 Deployed AI Services including Mistral, Claude, Command, Llama, and GPT-4
  • 128 Data Stores with 139K files being analyzed for AI interactions
  • 238 Open Alerts on AI resources with vulnerability scoring
     
Discovery and Inventory

Image


Comprehensive AI Discovery Engine:

  • Automated Model Detection: Scans for AI models, agents, and services across your entire infrastructure
  • Risk Correlation: Combines data and AI risk scores with misconfigurations and sensitive data exposure
  • Multi-Cloud Support: Discovers AI assets across AWS, Azure, GCP
     

ML-Specific Vulnerability Detection

Real-World Detections by Zscaler AISPM:

  • AWS credential theft via IMDS exploitation
  • Cryptomining payloads in computer vision models
  • Backdoored NLP models with trigger phrases
  • Data exfiltration through gradient manipulation
     

Security Findings Dashboard

What Happens When Issues Are Found:

  • Alerts are generated
  • Detailed vulnerability reports are produced
  • Remediation recommendations are provided
     

The New Reality of AI Security

The rapid adoption of AI has created an unprecedented attack surface. As we've demonstrated, a single malicious model can compromise your entire infrastructure in seconds—and traditional security tools are blind to this threat vector.

Zscaler AISPM delivers:

  • Comprehensive visibility into AI models across your infrastructure
  • Detection of ML-specific vulnerabilities and threats
  • Integration with existing security tools
  • Deep scanning of model files for security issues

The question isn't whether you have malicious models in your environment—it's how many, and how long they've been there. Don't wait for a breach to take AI security seriously.

 

 







Disclaimer: This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.
