#security
19 articles exploring security. Expert insights and analysis from our editorial team.
Articles
InstructLab CVE-2026-6859: Hardcoded trust_remote_code=True Turns Any [HuggingFace Model Into RCE](/articles/picklescan-1-0-4-patches-a-cvss-10-0-pkgutil-resolve-name-bypass-and-six/)
InstructLab CVE-2026-6859 hardcodes trust_remote_code=True in transformers, enabling RCE from any HuggingFace repo. Existing supply-chain scanners cannot detect this vector.
Mercor's 4TB Lapsus$ Breach Hands Voice-Clone Attackers 40,000 Pre-Verified Targets
Mercor's LiteLLM breach exposed interviews with IDs and 2-5 minute voice samples, collapsing the cost of voice-clone phishing by pairing clean audio with verified identities.
PickleScan 1.0.4 Patches a CVSS 10.0 pkgutil.resolve_name Bypass and Six Missing Stdlib RCE Modules
PickleScan 1.0.4 patched three [critical bypasses](/articles/instructlab-cve-2026-6859-hardcoded-trust-remote-code-true-turns-any/), but the fixes expose a deeper flaw: denylist scanning cannot keep pickle safe. The structural fix is safetensors migration.
Paperclip CVE-2026-41208: Agents Can Mutate Their Own provisionCommand Into Server-Side Shell Injection
Any valid Paperclip Agent API key lets a holder overwrite provisionCommand so the server executes arbitrary shell commands during workspace provisioning without admin access.
Ingress-Nginx Is Dead, Not Deprecated: The Final CVE Patches Shipped, But [Platform Teams](/articles/crawshaws-i-am-building-a-cloud-what-a-tailscale-co-founders-solo-stack-implies/) Still Need a Migration Plan
ingress-nginx was retired March 24, 2026. CVE-2026-4342 patches shipped March 19, but no future fixes are coming. How platform teams should pick a migration path.
The 2026 OSSRA Report: AI Coding Tools Are Behind a 107% Surge in Open-Source Vulnerabilities
Black Duck's 2026 OSSRA found 581 mean vulnerabilities per codebase — double last year. Here's what's driving it and how to audit your own repo.
Google Closes the $32B Wiz Deal: Cloud Security Has a New Power Player
Google completed its landmark $32 billion all-cash acquisition of cloud security firm Wiz on March 11, 2026—the largest deal in Google's history—reshaping the cloud security landscape.
Google Closes the $32B Wiz Deal: Cloud Security Has a New Power
Google completed its $32 billion acquisition of Wiz on March 11, 2026 — the largest cybersecurity deal in history. Here's what it means for cloud security teams, competitors, and the future of multicloud defense.
Securing AI Workloads: Why Containers Are AI's Biggest Attack Surface
AI workloads deployed in containers inherit every existing container vulnerability—plus a new class of AI-specific threats including model theft, prompt injection via sidecars, and supply chain attacks on model weights. Here's what practitioners need to know.
Document Poisoning: How Attackers Are Corrupting Your AI's Knowledge Base
RAG systems trust their document stores—and attackers know it. Document poisoning injects false or malicious content into knowledge bases, causing AI systems to generate attacker-controlled output for every user who asks the right question. Here's what the research shows.
How Researchers Hacked McKinsey's AI Platform—and What It Reveals
Security researchers at CodeWall used an autonomous AI agent to breach McKinsey's Lilli platform in approximately two hours, exposing 46.5 million messages through SQL injection—a decades-old technique that enterprise AI teams consistently fail to prevent.
Wrongfully Jailed by an Algorithm: AI Facial Recognition's Misidentification Crisis
At least eight innocent people—nearly all Black—have been wrongfully arrested because police trusted AI facial recognition systems that government studies show misidentify darker-skinned faces at rates 10 to 100 times higher than white faces. The crisis isn't the technology alone; it's the institutional trust placed in documented bias.
I Found a Vulnerability, They Found a Lawyer
Legal threats against security researchers remain a pervasive problem that chills the disclosure of critical software flaws. When companies weaponize laws like the CFAA and DMCA against the people protecting the public, everyone loses.
AI Voice Cloning Is Making Phone Scams Undetectable
Real-time AI voice cloning technology has enabled a new wave of sophisticated phone scams that can impersonate loved ones with just seconds of audio, costing victims millions and challenging traditional fraud detection methods.
The Mysterious Case of Chinese Bot Traffic in 2026: How AI-Powered Bots Are Rewriting the Rules of Detection
Chinese bot traffic patterns have shifted dramatically in 2026, with AI-driven bots now accounting for 80% of AI bot activity and record-breaking 31.4 Tbps DDoS attacks. These new behaviors evade traditional detection through residential proxy networks, behavioral mimicry, and sophisticated infrastructure.