CrowdStrike is expanding Project QuiltWorks, its cybersecurity coalition for mitigating frontier AI risks, by integrating threat intelligence from Kroll and ThreatConnect. This move—announced this week—marks a pivot from reactive AI defense to proactive risk modeling, blending CrowdStrike’s Falcon platform with Kroll’s adversarial AI red-teaming expertise. The coalition now covers supply-chain attacks on LLM training pipelines, adversarial prompt injection, and zero-day exploits in NPU firmware (e.g., on NVIDIA’s H100 and AMD’s MI300X accelerators).
Why this matters: Frontier AI isn’t just a tool—it’s a new attack surface. Traditional cybersecurity models assume static threats. QuiltWorks flips the script by treating AI systems as active adversaries. The coalition’s expansion forces a reckoning: Can enterprises secure AI before the AI secures itself?
The QuiltWorks Stack: How CrowdStrike’s AI Risk Engine Works
QuiltWorks isn’t just another SIEM with AI bolted on. It’s a multi-layered risk fabric stitching together:
- Threat Intelligence Fusion Layer: Aggregates Kroll’s AI red-teaming playbooks (e.g., adversarial fine-tuning) with ThreatConnect’s MITRE ATT&CK for AI extensions.
- NPU-Specific Anomaly Detection: Uses TensorRT profiling to flag unusual kernel calls in NVIDIA/AMD accelerators (e.g., `cuBLAS` memory leaks during inference).
- Prompt Injection Guardrails: Deploys LLM fingerprinting (via model watermarking) to detect poisoned training data in real time.
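CrowdStrike hasn’t published QuiltWorks’ internals, so as a rough illustration of the NPU anomaly-detection idea, here’s a minimal sketch that flags kernel-call latencies deviating sharply from a rolling baseline. All names here (`KernelCallMonitor`, `observe`) are hypothetical, not part of any CrowdStrike API:

```python
from collections import deque
from statistics import mean, stdev

class KernelCallMonitor:
    """Toy sketch of NPU anomaly detection: flag a kernel call whose
    latency deviates sharply from a rolling baseline. Hypothetical;
    not the actual QuiltWorks implementation."""

    def __init__(self, window: int = 100, threshold: float = 4.0):
        self.samples = deque(maxlen=window)  # rolling latency baseline
        self.threshold = threshold           # z-score cutoff

    def observe(self, latency_us: float) -> bool:
        """Record one kernel-call latency; return True if anomalous."""
        if len(self.samples) >= 10:  # need enough history to judge
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(latency_us - mu) / sigma > self.threshold:
                return True  # e.g., a cuBLAS call slowed by a memory leak
        self.samples.append(latency_us)
        return False

monitor = KernelCallMonitor()
for t in [50, 52, 49, 51, 50, 48, 53, 50, 49, 51, 50]:
    monitor.observe(t)        # establish a ~50 µs baseline
print(monitor.observe(500))   # a 10x latency spike is flagged
```

A production system would profile real kernel traces (e.g., via TensorRT) rather than raw latencies, but the shape of the check, baseline plus deviation threshold, is the same.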
Here’s the kicker: QuiltWorks doesn’t just detect threats—it simulates them. By running Chaos Engineering tests on customer AI pipelines, it identifies blind spots before attackers do. For example, in a recent test with a Fortune 500 client, QuiltWorks uncovered a gradient inversion vulnerability in a custom Diffusion Transformer model—one that could leak proprietary training data via adversarial noise injection.
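Chaos-style probing of an AI pipeline can be sketched, at a very high level, as fault injection around a model-serving callable. This is a stdlib-only toy, not the QuiltWorks test harness; `chaos_probe` and the brittle stand-in model are invented for illustration:

```python
import random

def chaos_probe(predict, baseline_input, noise_scale=0.1, trials=50, tolerance=0.5):
    """Toy chaos-engineering probe: perturb an input vector with random
    noise and report the fraction of trials where the model's output
    drifts beyond a tolerance. A high drift rate hints at a blind spot,
    such as susceptibility to adversarial noise injection."""
    random.seed(0)  # reproducible probe runs
    base = predict(baseline_input)
    drifted = 0
    for _ in range(trials):
        noisy = [x + random.gauss(0, noise_scale) for x in baseline_input]
        if abs(predict(noisy) - base) > tolerance:
            drifted += 1
    return drifted / trials

# A deliberately brittle "model": tiny input changes flip its output.
brittle = lambda v: 1.0 if sum(v) > 0 else 0.0
rate = chaos_probe(brittle, [0.01, -0.005])
print(f"drift rate: {rate:.2f}")
```

A stable model near this input would score close to zero; the brittle one sits on its decision boundary, so small perturbations flip it often. Real probes would target training pipelines and gradients, not a scalar function, but the blind-spot-hunting loop is the same idea.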
How QuiltWorks Stacks Up Against the Competition
Most AI security tools focus on post-deployment threats. QuiltWorks operates at the design phase. Below is a head-to-head comparison with leading alternatives:

| Feature | QuiltWorks | OpenAI’s Moderation API | DeepMind’s Verified Scanning | IBM’s AI Guard |
|---|---|---|---|---|
| Threat Coverage | Supply-chain, NPU firmware, adversarial prompts | Content moderation only | Model weights integrity | API-level attacks |
| Proactive Capabilities | Chaos Engineering, red-teaming | None | Limited (post-training) | Reactive only |
| NPU Support | NVIDIA H100/MI300X, AMD Instinct | None | CUDA-only | x86/ARM (no NPU) |
| Enterprise Integration | Falcon SIEM, CrowdStrike EDR | Standalone API | Google Cloud-only | IBM Cloud-only |
QuiltWorks’ edge? It’s the only solution that bridges the gap between cybersecurity and AI engineering. While OpenAI’s API stops at text moderation and IBM’s AI Guard focuses on API abuse, QuiltWorks treats AI systems as attack vectors, not just targets.
Why QuiltWorks Could Accelerate the AI Security Arms Race
The expansion signals a three-way tug-of-war between:
- Closed Ecosystems: CrowdStrike’s move tightens integration with Falcon, locking customers into its stack. Rivals SentinelOne and Palo Alto Networks will need to respond or risk losing enterprise AI deals.
- Open-Source Fragmentation: QuiltWorks’ reliance on MITRE ATT&CK for AI (still in draft) could standardize threat modeling, but it could also centralize power. Open-source projects like OWASP AMF may struggle to keep up.
- Cloud Provider Play: AWS, Azure, and Google Cloud are quietly building their own AI security tools. QuiltWorks’ NPU focus could force them to open their firmware stacks or risk losing hyperscale customers.
—Dr. Elena Vasilescu, CTO of AnyScale
"QuiltWorks is the first real attempt to treat AI systems as active adversaries. But here’s the catch: if it works, it’ll make open-source LLMs even harder to secure. Enterprises will default to walled gardens, not because they want to, but because the alternative is chaos."
What Developers Hate (and Love) About QuiltWorks
—Raj Patel, Lead ML Security Engineer at Scale AI
"The NPU anomaly detection is game-changing for inference workloads. But the API is still a mess: you’re forced to use CrowdStrike’s `falcon-py` SDK, which adds 120 ms of latency per request. For real-time systems, that’s a dealbreaker."
Patel’s critique highlights a critical tradeoff: QuiltWorks’ depth comes at the cost of developer flexibility. The coalition’s Falcon API is optimized for enterprise lock-in, not agile teams. This could push smaller startups toward open-source alternatives like AI-Secure.
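Patel’s 120 ms figure is the kind of claim a team can check itself. Here’s a hedged sketch of that measurement, with `time.sleep` standing in for both the model call and the SDK round-trip, since `falcon-py`’s actual API isn’t reproduced here:

```python
import time
from statistics import median

def median_latency_ms(fn, runs=20):
    """Median wall-clock latency of fn() over several runs, in ms."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000)
    return median(times)

def raw_inference():
    time.sleep(0.005)   # stand-in for a ~5 ms model call

def sdk_wrapped_inference():
    time.sleep(0.005)   # same model call...
    time.sleep(0.010)   # ...plus a simulated SDK round-trip overhead

overhead = median_latency_ms(sdk_wrapped_inference) - median_latency_ms(raw_inference)
print(f"added latency: {overhead:.1f} ms")
```

Swapping the two stand-ins for a direct model call and the same call routed through the SDK would give a real per-request overhead number for your own workload.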
What This Means for Enterprise IT
- AI Risk is Now a Boardroom Issue: QuiltWorks’ expansion forces CISOs to audit their AI supply chains—or face regulatory exposure.
- NPU Security is the New Perimeter: If your AI runs on H100 or MI300X, you’re vulnerable to firmware-level exploits.
- Open-Source AI is a Ticking Time Bomb: Without standardized security, `llama.cpp` and `vLLM` deployments will become prime hacking targets.
- Cloud Providers Are Next: AWS/Azure will need to open their NPU stacks or lose enterprise AI contracts.
The Hard Truth: QuiltWorks is a Double-Edged Sword
On one hand, QuiltWorks is the most comprehensive AI security framework available today. It fills a gap that no other vendor addresses: proactive, NPU-aware threat modeling.

On the other hand, it deepens platform lock-in. Enterprises that adopt QuiltWorks will find themselves tethered to CrowdStrike’s ecosystem, and that’s before we factor in the `falcon-py` latency tax.
The real question isn’t whether QuiltWorks works. It’s whether the industry can afford to let it dominate—or if the backlash will spark a new era of open-source AI security.
Bottom line: If you’re running frontier AI, you need QuiltWorks. If you’re building it, you might need to fight it.