Security
36 articles exploring Security. Expert analysis and insights from our editorial team.
AI has introduced a new category of security risk that sits between traditional application security and ML research—and it is being exploited faster than defensive tooling is maturing. This cluster covers supply-chain attacks on AI components, prompt injection at the application layer, container and workload security, and the vulnerability surge that AI-generated code is producing in open-source projects.
The TeamPCP / LiteLLM incident is the clearest recent example of the supply-chain vector. Attackers compromised a Trivy GitHub Action, stole LiteLLM’s PyPI publish token, and shipped credential-harvesting releases that reached 36% of monitored cloud environments before detection. The attack surface is not the model—it is the CI/CD pipeline and the trust developers extend to third-party Actions and PyPI packages they never audited.
Document poisoning is the RAG-layer equivalent. Attackers who can write to a knowledge base—shared document stores, web-scraped content, customer-submitted PDFs—can inject attacker-controlled outputs for any query that retrieves the poisoned document. The threat is silent and persistent; there is no single transaction to block. Groundy has covered the research on detection approaches and retrieval-layer defense.
Prompt injection has moved from proof-of-concept to routine. Indirect injection through tool outputs, context poisoning via malicious web pages retrieved by browsing agents, and sidecar attacks in containerized inference environments are all documented in production deployments. The jailbreak scaling research showing 97% autonomous success rates with reasoning models raises the cost of AI-assisted red-teaming significantly lower than defenders’ detection costs.
The 2026 OSSRA data—107% YoY increase in vulnerability disclosures tied to AI coding tools—closes the loop: the same AI tools accelerating development are also accelerating the introduction of insecure dependency patterns. Groundy tracks this threat surface specifically, not AI security as a marketing category.
Coverage here treats AI security as an operational discipline. That means concrete mitigations, specific attack patterns with documented real-world instances, and honest assessments of where current defenses—input validation, sandboxing, retrieval-layer filtering—actually stop threats and where they fail. The goal is to be useful to a practitioner auditing a deployment, not to generate alarm about theoretical risk categories.
Featured in this cluster
TeamPCP Backdoored LiteLLM via a Poisoned CI Scanner: What It Means for Every AI Python Stack
TeamPCP stole LiteLLM's PyPI token through a compromised Trivy GitHub Action, shipping credential-stealing releases to 36% of monitored cloud environments.
CornerstoneDocument Poisoning: How Attackers Are Corrupting Your AI's Knowledge Base
RAG systems trust their document stores—and attackers know it. Document poisoning injects false or malicious content into knowledge bases, causing AI systems to generate attacker-controlled output for every user who asks the right question. Here's what the research shows.
CornerstoneSecuring AI Workloads: Why Containers Are AI's Biggest Attack Surface
AI workloads deployed in containers inherit every existing container vulnerability—plus a new class of AI-specific threats including model theft, prompt injection via sidecars, and supply chain attacks on model weights. Here's what practitioners need to know.
CornerstonePrompt Injection Is Now a Security Nightmare. Here's How to Defend Against It
A comprehensive guide to understanding and defending against prompt injection attacks targeting LLM-powered applications
CornerstoneJailbreak Scaling Laws: Why Reasoning Models Are Now the Cheapest Attack Vector Against Other LLMs
Two converging studies show LRMs achieve 97% autonomous jailbreak success and exponential scaling — here's what that means for production deployments.
Latest in Security
InstructLab CVE-2026-6859: Hardcoded trust_remote_code=True Turns Any [HuggingFace Model Into RCE](/articles/picklescan-1-0-4-patches-a-cvss-10-0-pkgutil-resolve-name-bypass-and-six/)
InstructLab CVE-2026-6859 hardcodes trust_remote_code=True in transformers, enabling RCE from any HuggingFace repo. Existing supply-chain scanners cannot detect this vector.
Mercor's 4TB Lapsus$ Breach Hands Voice-Clone Attackers 40,000 Pre-Verified Targets
Mercor's LiteLLM breach exposed interviews with IDs and 2-5 minute voice samples, collapsing the cost of voice-clone phishing by pairing clean audio with verified identities.
PickleScan 1.0.4 Patches a CVSS 10.0 pkgutil.resolve_name Bypass and Six Missing Stdlib RCE Modules
PickleScan 1.0.4 patched three [critical bypasses](/articles/instructlab-cve-2026-6859-hardcoded-trust-remote-code-true-turns-any/), but the fixes expose a deeper flaw: denylist scanning cannot keep pickle safe. The structural fix is safetensors migration.
LMDeploy CVE-2026-33626: Vision-LLM SSRF Exploited Within 12 Hours of GHSA Publication
CVE-2026-33626 in LMDeploy's vision endpoint was exploited 12.5 hours after GHSA disclosure, with attackers targeting AWS IMDS and Redis via the image-fetch SSRF path.
Paperclip CVE-2026-41208: Agents Can Mutate Their Own provisionCommand Into Server-Side Shell Injection
Any valid Paperclip Agent API key lets a holder overwrite provisionCommand so the server executes arbitrary shell commands during workspace provisioning without admin access.
Spring AI 1.0.6 Patches Five CVEs Including CVSS 8.8 SQL Injection in CosmosDBVectorStore.doDelete
Spring AI 1.0.6 patches five CVEs including SQL injection and filter-expression escapes across 14+ vector stores, proving that RAG retrieval layers are not sanitized database.
Windsurf CVE-2026-30615 Is the Only Zero-Click in the April MCP RCE Wave: HTML Rewrites the Config
CISA-ADP scored CVE-2026-30615 CVSS 8.0 HIGH, making Windsurf the sole zero-click IDE in the April MCP RCE wave: attacker HTML silently rewrites mcp.json with no user.
Bitwarden CLI Compromise Extends the Checkmarx [Supply-Chain Campaign](/articles/vercels-april-2026-database-leak-pivoted-from-lumma-stealer-at-context-ai-via/) to Credential Tooling
A trojanized @bitwarden/cli release spent 93 minutes on npm April 22. The Checkmarx-themed payload harvested credentials via preinstall hook, exposing vault session tokens.
Vercel's April 2026 Database Leak Pivoted From Lumma Stealer at Context AI via a Chrome Extension
Vercel's April 2026 breach began with Lumma Stealer at Context AI and pivoted through a Chrome extension OAuth token. Browser extensions are an unaudited supply-chain vector.
Citizen Lab's 'Bad Connection' Names Three Telecom Entry Points, Shows Diameter Silently Falls Back to SS7
Citizen Lab names 019Mobile and two carriers as surveillance transit points and shows roaming-forced SS7 fallback undermines Diameter protections even on upgraded networks.
CVE-2026-1839: Transformers Trainer's safe_globals Is a No-Op on PyTorch < 2.6, Exposing [Checkpoint RCE](/articles/picklescan-1-0-4-patches-a-cvss-10-0-pkgutil-resolve-name-bypass-and-six/)
CVE-2026-1839 hits Transformers Trainer: [torch.load() on rng_state.pt](/articles/hugging-face-lerobot-cve-2026-25874-unauthenticated-pickle-loads-rce-in-grpc/)h runs pickle; safe_globals is a no-op on PyTorch < 2.6, so upgrading Transformers alone is insufficient.
CVE-2026-39987's 9-Hour Exploitation Window Exposes the Credential Gap at the Heart of AI Dev Infrastructure
CVE-2026-39987 gave attackers a root shell on Marimo in under 10 hours, targeting LLM API keys and AWS credentials that dev-grade notebook security routinely leaves exposed.
Flowise's CVE-2026-41264 Turns an LLM-Written Import Into RCE, Breaking the Regex-Gated Sandbox
CVE-2026-41264 (CVSS 9.8) shows how a regex import allowlist in Flowise's CSV Agent fails when the LLM writes the code: aliasing os as pandas bypasses the filter and reaches.
LangChain CVE-2026-34070: load_prompt Path Traversal Patched in 1.2.22, Symlink Bypass Left Open
LangChain CVE-2026-34070 (CVSS 7.5) enables arbitrary file reads via load_prompt traversal; langchain-core 1.2.22 patches direct traversal but leaves a symlink bypass open.
Marimo CVE-2026-39987 Exposed Unauthenticated Root Shells Within Hours of Disclosure
Marimo's /terminal/ws endpoint granted unauthenticated attackers a full PTY shell. CVE-2026-39987 was actively exploited within 9 hours and 41 minutes of disclosure.