
The AI ecosystem has a security blind spot.
We lock down software delivery with SAST, dependency scanning, signed artifacts, and hardened CI/CD. Then we download multi-GB model files from the internet, deploy them to production, and let them call internal tools—often with far less scrutiny than a container image.
The “models are just data” assumption no longer holds.
Today, we are releasing Palisade, an enterprise-grade ML model security scanner that applies zero trust to model artifacts. Palisade detects malicious payloads, backdoors, and supply-chain tampering before a model reaches an inference server. The core is written in Rust, so it can handle modern model sizes without consuming excessive memory or increasing CI latency.
The Palisade model scan isn’t a single “malware check.” It’s a pipeline of validators, each answering a specific question about the artifact. The goal is to convert “random blob from the internet” into a structured security decision you can gate on in CI/CD.
At a high level, Palisade runs validators in three layers:
Each layer is independently valuable; together they provide defense-in-depth for model ingestion.

A surprising amount of model risk starts with “this isn’t actually the format you think it is” or “this file is crafted to break tooling.” Structural validation is the fastest way to reject garbage early. These validators treat the model file like a “signed binary format,” not just bytes. What they validate:
Static validators catch the stuff you’d never allow in a container image: executable deserialization, hidden attachments, and tampering indicators. It’s the “SAST for model artifacts” layer. These validators look for high-signal security issues without executing anything.
Examples:
Many real-world compromises live around the model: sidecar files, adapters, loaders, and packaging conventions. This layer validates the full model package, not just the weights file.
Examples:
Some backdoors won’t show up in bytes. They live in the weights. A file can be perfectly “valid” and still be hostile. Behavioral validators are how you catch models that were trained to look clean until the correct input appears. These validators run controlled probes to detect signs of covert fine-tuning or trigger-based behavior.
Examples:
Scanning alone is not enough. You also need to know where a model came from and how it was produced. This is where model signing and provenance matter.
Without this context, even a “clean” scan result has limited value.
Palisade is designed to align with guidance from the Coalition for Secure AI (CoSAI), an OASIS open project defining secure-by-design practices for AI systems. In practice, this means Palisade:
We are looking past simple alerts to give you actual control over which models are allowed in production.
Palisade is a purpose-built system designed from the ground up for the realities of modern GenAI/LLM Models.
Scanning a 70B-parameter model places significant stress on memory and I/O. Many Python-based tools fail with OOM errors or become impractically slow. Palisade uses a native Rust core with streaming validation and memory-mapped I/O:
The performance characteristics are predictable, which matters in CI pipelines and production gating.
Palisade applies multiple layers of validation rather than relying on a single heuristic:
This combination allows Palisade to detect issues that do not show up in file metadata alone.
Security requirements vary by environment. Palisade treats policy as code, using Cedar files to define enforceable rules. This allows you to write expressive, audit-friendly policies that dictate exactly what is allowed—from blocking specific license types to mandating cryptographic signatures for production models.
Apply stricter rules for production
> palisade scan model.gguf --policy strict_productionResults can be emitted in plain text, JSON or even SARIF 2.1.0, making them directly consumable by GitHub Code Scanning, VS Code, and centralized security platforms.
Palisade is designed to integrate cleanly into existing ML and security workflows—from local experimentation to CI/CD enforcement and production gating. You can start with a single command and gradually layer in stricter controls as your environment matures.
Scan a model
Run a security scan against a model artifact before loading it into memory or deploying it to an inference service:
palisade scan /path/to/model.safetensorsDuring a scan, Palisade analyzes the model at multiple layers, including:
Scan results include a clear summary of findings and severity, allowing you to quickly determine whether a model is safe to proceed.
# Machine-readable output for CI/CD pipelines
palisade scan /path/to/model.safetensors --output json
# SARIF output for GitHub Code Scanning, VS Code, or SIEMs
palisade scan /path/to/model.safetensors --output sarif --out results.sarif
# Apply stricter rules for production environments
palisade scan /path/to/model.safetensors --policy strict_productionVerify Provenance
Before trusting a model, it’s critical to know who produced it and whether it has been modified. Palisade verifies cryptographic signatures and provenance metadata using Sigstore.
palisade verify-sigstore /path/to/model --public-key publisher.pubThis verification ensures that:
Provenance verification allows you to enforce policies such as:
Together, scanning and provenance verification help establish a verifiable chain of trust from model creation through deployment.
The AI model supply chain is now part of your attack surface. Treating model artifacts as trusted inputs is no longer a safe default. Palisade helps you enforce trust before execution. It combines artifact-aware scanning, integrity checks, and provenance verification to establish a verifiable chain of trust—from training output and packaging, through distribution and storage, all the way to deployment gates and inference runtime. In practice, this means you can move from “we downloaded a model and hoped for the best” to auditable, policy-driven control over which models are allowed to run using the same rigor we already expect from modern software delivery.
Try Palisade today, or talk to us for more information
Book A Demo