Security
Cybersecurity, AI security, supply-chain attacks, and threat analysis.
36 articles exploring Security. Expert analysis and insights from our editorial team.
Latest in Security
Marimo CVE-2026-39987: Pre-Auth RCE via /terminal/ws in Under 10 Hours
Marimo's /terminal/ws endpoint skipped validate_auth() in versions ≤0.20.4. Sysdig recorded exploitation 9h 41m after disclosure; .env credential theft completed in under three minutes.
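The bug class is worth seeing concretely. Below is a minimal sketch, assuming a Starlette-style app; it is illustrative, not Marimo's actual handler, and validate_auth here is a hypothetical stand-in for the skipped guard.

```python
# Minimal sketch of the bug class, assuming a Starlette-style app.
# NOT Marimo's actual code; validate_auth is a hypothetical guard.
from starlette.applications import Starlette
from starlette.routing import WebSocketRoute
from starlette.websockets import WebSocket

async def validate_auth(ws: WebSocket) -> bool:
    # Stand-in session check; the affected versions never called it,
    # so any caller reached the PTY-backed shell.
    return ws.query_params.get("token") == "expected-session-token"

async def terminal_ws(ws: WebSocket):
    # Patched behavior: reject before accepting the socket.
    if not await validate_auth(ws):
        await ws.close(code=1008)  # policy violation
        return
    await ws.accept()
    await ws.send_text("shell ready")  # stand-in for the PTY bridge

app = Starlette(routes=[WebSocketRoute("/terminal/ws", terminal_ws)])
```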
MCP STDIO Executes Even When the Server Fails: One Design Decision, 14 CVEs, 30+ RCEs
OX Security's April 2026 advisory traces 14 CVEs and 30+ RCEs across LiteLLM, Flowise, and Cursor to one MCP STDIO behavior: the command field executes before the handshake.
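The underlying behavior is simple to demonstrate: an STDIO transport spawns whatever command the server entry names, and the process starts before any MCP handshake can vet it. A minimal sketch, assuming a hypothetical client reading a config dict, not any specific MCP SDK:

```python
import subprocess

# Hypothetical server entry, shaped like a typical MCP client config.
server_config = {
    "command": "python",
    "args": ["-c", "print('payload runs immediately')"],
}

# STDIO transport pattern: the process is spawned first; the MCP
# initialize handshake only happens afterwards over stdin/stdout.
# If the command is attacker-controlled, code executes even when the
# handshake (or the server itself) subsequently fails.
proc = subprocess.Popen(
    [server_config["command"], *server_config["args"]],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
print(proc.stdout.readline().decode())  # payload already ran
```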
March-April MCP CVEs Expose the Local-Host Trust Model in AI Agent Frameworks
Three CVEs scoring up to 9.8 reveal a structural flaw: MCP's local-host trust model lacks authentication primitives for networked multi-tenant deployments.
Marimo's CVE-2026-39987 Pre-Auth RCE Puts AI Notebooks on the Same CVE Treadmill as Inference Servers
CVE-2026-39987 skipped auth on Marimo's /terminal/ws, handing any caller a root PTY shell (CVSS 9.3) — exploited in the wild just 9h 41m after the advisory.
Marimo's CVE-2026-39987: 9h 41m From Disclosure to Exploitation, NKAbuse Staged on Hugging Face
Marimo CVE-2026-39987 was exploited 9h 41m after disclosure, with 662 recorded events and an NKAbuse backdoor staged on Hugging Face. Same-day patching is the new minimum for AI tooling.
SGLang's CVE-2026-5760 Turns a GGUF Download Into RCE, Shifting the Trust Boundary to Hugging Face
CVE-2026-5760 lets poisoned GGUF files trigger Jinja2 SSTI through SGLang's unsandboxed template rendering, forcing teams to treat hub downloads as executable code.
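Jinja2 ships a sandbox that blocks exactly this class of payload; the difference is one import. A minimal sketch (the payload is a generic SSTI probe, not the actual CVE-2026-5760 exploit):

```python
from jinja2 import Environment
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

# Generic SSTI probe of the kind a poisoned chat template could carry:
# it walks Python internals starting from a plain string literal.
malicious_template = "{{ ''.__class__.__mro__[1].__subclasses__() }}"

# Unsandboxed rendering exposes dunder attributes to the template,
# leaking every loaded class (a stepping stone to code execution).
print(Environment().from_string(malicious_template).render())

# SandboxedEnvironment rejects underscore-prefixed attribute access.
try:
    SandboxedEnvironment().from_string(malicious_template).render()
except SecurityError as exc:
    print(f"blocked: {exc}")
```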
DPrivBench Exposes a Blind Spot: LLMs Can't Reliably Verify Their Own Differential Privacy Guarantees
A new benchmark tests 11 LLMs on 720 DP verification tasks. Top models ace textbook questions — then fall apart on the algorithms that actually appear in production privacy code.
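For a sense of what "DP verification" means in practice: the tasks revolve around primitives like the Laplace mechanism, where correctness hinges on matching the noise scale to the query's sensitivity and ε. A minimal sketch of the primitive and the classic bug a verifier must catch (illustrative, not a DPrivBench task):

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """(epsilon, 0)-DP release: noise scale must be sensitivity / epsilon."""
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Correct: a counting query has sensitivity 1.
noisy = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)

# The bug a verifier must catch: scaling by epsilon instead of
# 1/epsilon silently destroys the guarantee while still "adding noise".
def broken_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    return value + np.random.laplace(loc=0.0, scale=sensitivity * epsilon)
```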
TeamPCP Backdoored LiteLLM via a Poisoned CI Scanner: What It Means for Every AI Python Stack
TeamPCP stole LiteLLM's PyPI token through a compromised Trivy GitHub Action, shipping credential-stealing releases to 36% of monitored cloud environments.
Jailbreak Scaling Laws: Why Reasoning Models Are Now the Cheapest Attack Vector Against Other LLMs
Two converging studies show LRMs achieve 97% autonomous jailbreak success and exponential scaling — here's what that means for production deployments.
Google Closes the $32B Wiz Deal: Cloud Security Has a New Power Player
Google completed its landmark $32 billion all-cash acquisition of cloud security firm Wiz on March 11, 2026—the largest deal in Google's history—reshaping the cloud security landscape.
Securing AI Workloads: Why Containers Are AI's Biggest Attack Surface
AI workloads deployed in containers inherit every existing container vulnerability—plus a new class of AI-specific threats including model theft, prompt injection via sidecars, and supply chain attacks on model weights. Here's what practitioners need to know.
Document Poisoning: How Attackers Are Corrupting Your AI's Knowledge Base
RAG systems trust their document stores—and attackers know it. Document poisoning injects false or malicious content into knowledge bases, causing AI systems to generate attacker-controlled output for every user who asks the right question. Here's what the research shows.
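One practical mitigation is to stop treating the document store as implicitly trusted: gate ingestion on provenance and carry that provenance through retrieval. A minimal sketch with hypothetical names (TRUSTED_SOURCES, ingest, retrieve), not any specific RAG framework:

```python
from dataclasses import dataclass

TRUSTED_SOURCES = {"internal-wiki", "vetted-vendor-docs"}  # hypothetical allowlist

@dataclass
class Document:
    text: str
    source: str  # provenance recorded at ingestion time

def ingest(store: list[Document], doc: Document) -> bool:
    # Refuse unattributed or untrusted content rather than trusting the store.
    if doc.source not in TRUSTED_SOURCES:
        return False
    store.append(doc)
    return True

def retrieve(store: list[Document], query: str) -> list[Document]:
    # Naive keyword match stands in for vector search; provenance
    # survives retrieval so the generator can cite or filter by it.
    return [d for d in store if query.lower() in d.text.lower()]
```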
How Researchers Hacked McKinsey's AI Platform—and What It Reveals
Security researchers at CodeWall used an autonomous AI agent to breach McKinsey's Lilli platform in approximately two hours, exposing 46.5 million messages through SQL injection—a decades-old technique that enterprise AI teams consistently fail to prevent.
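The fix for the technique is as old as the technique itself: parameterized queries. A minimal sketch using sqlite3 (illustrative; it says nothing about Lilli's actual stack):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (user_id TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES ('alice', 'hello')")

user_id = "' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query,
# turning a per-user lookup into "return every row".
rows = conn.execute(
    f"SELECT body FROM messages WHERE user_id = '{user_id}'"
).fetchall()
print(len(rows))  # 1: the payload matched the whole table

# Safe: the driver binds the payload as data, never as SQL.
rows = conn.execute(
    "SELECT body FROM messages WHERE user_id = ?", (user_id,)
).fetchall()
print(len(rows))  # 0: no user is literally named the payload string
```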
I Found a Vulnerability, They Found a Lawyer
Legal threats against security researchers remain a pervasive problem that chills the disclosure of critical software flaws. When companies weaponize laws like the CFAA and DMCA against the people protecting the public, everyone loses.