Topic

#llm-security

Four articles exploring LLM security, with expert insights and analysis from our editorial team.


Articles

Security

LangChain CVE-2026-34070: load_prompt Path Traversal Patched in 1.2.22, Symlink Bypass Left Open

LangChain CVE-2026-34070 (CVSS 7.5) enables arbitrary file reads via load_prompt traversal; langchain-core 1.2.22 patches direct traversal but leaves a symlink bypass open.
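The symlink gap described above reflects a common pattern: a containment check that normalizes the path string catches "../" traversal but never resolves symlinks. A minimal illustrative sketch (not LangChain's actual code; the helper names are hypothetical) shows the difference between `os.path.abspath`, which only normalizes the string, and `os.path.realpath`, which resolves links:

```python
import os
import tempfile

def is_within(base: str, candidate: str) -> bool:
    # Hypothetical naive check: string normalization only.
    # Rejects "../" escapes but follows no symlinks.
    return os.path.abspath(candidate).startswith(os.path.abspath(base) + os.sep)

def is_within_resolved(base: str, candidate: str) -> bool:
    # Hypothetical hardened check: resolves symlinks on both sides first,
    # so a link inside `base` pointing elsewhere is rejected.
    return os.path.realpath(candidate).startswith(os.path.realpath(base) + os.sep)

# Demo: a symlink inside the allowed directory targeting a file outside it.
base = tempfile.mkdtemp()
outside = tempfile.mkdtemp()
secret = os.path.join(outside, "secret.txt")
with open(secret, "w") as f:
    f.write("sensitive")
link = os.path.join(base, "prompt.json")
os.symlink(secret, link)

print(is_within(base, link))           # naive check passes the symlink
print(is_within_resolved(base, link))  # resolved check rejects it
```

The naive check returns `True` for the symlink while the resolved check returns `False`, which is why patching direct "../" traversal alone leaves a symlink-based read open.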

Security

DPrivBench Exposes a Blind Spot: LLMs Can't Reliably Verify Their Own Differential Privacy Guarantees

A new benchmark tests 11 LLMs on 720 DP verification tasks. Top models ace textbook questions — then fall apart on the algorithms that actually appear in production privacy code.

· 6 min read
Security

Jailbreak Scaling Laws: Why Reasoning Models Are Now the Cheapest Attack Vector Against Other LLMs

Two converging studies show large reasoning models (LRMs) achieve 97% autonomous jailbreak success with exponential scaling — here's what that means for production deployments.

· 6 min read
Ethics, Policy & Safety

Don't Trust the Salt: How Non-English Prompts Break LLM Guardrails

AI safety guardrails are built primarily in English. Research shows they can be trivially bypassed using other languages, exposing critical vulnerabilities in global AI deployment.

· 10 min read