CVE-2026-34070 (CVSS HIGH) landed in the langchain-core advisory queue in late March 2026, exposing a path traversal in load_prompt() and load_prompt_from_config() that lets an attacker read arbitrary files from the host filesystem.1 The patch in langchain-core 1.2.22 blocks direct ../ and absolute-path traversal, but follow-up issues revealed a symlink bypass, leaving teams on the deprecated APIs with an unpatched escape route.

The Vulnerability: How Config Dicts Become File-Read Primitives

load_prompt() and load_prompt_from_config() accept a file path or config dict and read .txt, .json, .yaml, and .yml files to reconstruct prompt objects.2 Neither function validated that the supplied path stayed within any expected directory: absolute paths like /etc/passwd and traversal sequences like ../../secrets/api_keys.yaml both resolved without complaint.

The attack surface is specific: any application that lets user input influence the path or config dict passed to these functions is exposed. That includes prompt configuration UIs, multi-tenant agent systems where different users load different prompt templates, or any pipeline that interpolates user-supplied strings into a prompt file path before calling load_prompt().
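That pattern can be sketched with stdlib path handling alone (the directory and helper names here are illustrative, not taken from the advisory):

```python
import os.path

TEMPLATE_DIR = "/app/prompt_templates"  # hypothetical template root

def template_path(name: str) -> str:
    # No validation: a name like "../../etc/passwd" walks out of
    # TEMPLATE_DIR before the result is handed to load_prompt().
    return os.path.join(TEMPLATE_DIR, name)

print(os.path.normpath(template_path("summarize.yaml")))
# /app/prompt_templates/summarize.yaml
print(os.path.normpath(template_path("../../etc/passwd")))
# /etc/passwd
```

Anything that lets the user pick `name` picks the file.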

The HIGH severity rating reflects the lack of required authentication: any code path that passes user-influenced data to load_prompt() is exploitable without further preconditions.1

The 1.2.22 Patch: What Got Fixed and What Didn’t

PR #36200 introduced two changes in the same commit.3 The security fix adds an allow_dangerous_paths parameter to the affected functions, defaulting to False. When False, the functions now reject any path containing .. components or that resolves to an absolute path. Passing allow_dangerous_paths=True disables all path validation entirely — setting that flag without understanding what it removes reinstates the original attack surface.
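The shape of the check can be approximated as follows (an assumption-level sketch; the actual implementation in PR #36200 may differ in detail):

```python
from pathlib import PurePosixPath

def is_dangerous_path(path: str) -> bool:
    """Approximation of the 1.2.22 validation: reject absolute
    paths and any path containing a ".." component."""
    p = PurePosixPath(path)
    return p.is_absolute() or ".." in p.parts
```

Note that the check is purely lexical: it inspects the string and never consults the filesystem, which is exactly why symlinks are invisible to it.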

The second change, bundled into the same PR, formally deprecates load_prompt(), load_prompt_from_config(), and the .save() method on prompt objects, marking them for removal in langchain-core 2.0.0.3

Two follow-up issues, filed after 1.2.22 shipped, reported a residual bypass: even with allow_dangerous_paths=False, the patched code follows relative symlinks that point outside the working directory.4 A symlink at ./templates/base.yaml → ../../etc/app_secrets.yaml resolves successfully. The .. components are in the symlink target, not in the path passed to load_prompt(), so the validator doesn’t catch them.
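The bypass can be reproduced with stdlib calls alone (POSIX only; the directory names are stand-ins):

```python
import os
import tempfile

template_dir = tempfile.mkdtemp()  # stands in for ./templates
outside_dir = tempfile.mkdtemp()   # stands in for a directory holding secrets

secret = os.path.join(outside_dir, "app_secrets.yaml")
with open(secret, "w") as f:
    f.write("api_key: not-for-prompts\n")

# Plant the symlink: the ".." lives in the link target, not the request path.
os.symlink(secret, os.path.join(template_dir, "base.yaml"))

requested = "base.yaml"
# Passes a lexical check: no ".." components, not absolute.
assert ".." not in requested and not os.path.isabs(requested)

# ...yet the read lands outside template_dir.
resolved = os.path.realpath(os.path.join(template_dir, requested))
assert resolved == os.path.realpath(secret)
```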

Issue #36814 was auto-closed because it was filed outside the bug-report template flow,4 and #36823 — the re-filing — remains open.5 As of this writing, the bypass is unpatched and unaddressed, with no maintainer response on the open issue. Because load_prompt() and load_prompt_from_config() are scheduled for removal in 2.0.0, the project may not invest in further hardening of code it intends to delete. That position would be defensible for a migration timeline but creates a gap for teams that haven’t migrated yet: they are on APIs that carry a known, acknowledged bypass with no fix coming.

Teams treating the 1.2.22 upgrade as a complete remediation should check whether their deployment environments permit symlinks in any directory accessible to prompt file loading. The symlink bypass requires filesystem access to plant the symlink, which limits exploitability in tightly controlled environments, but in shared or multi-tenant deployments the assumption that users cannot influence symlinks is worth verifying explicitly.
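A quick audit for planted symlinks in a template directory (the path is an example):

```shell
# List every symlink under the prompt template directory, with its target.
# (|| true tolerates the directory not existing on this machine.)
find /app/prompt_templates -type l -exec ls -l {} \; 2>/dev/null || true
```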

Migration Guide: Replacing load_prompt with dumpd/loads

The replacement API is in langchain_core.load, not langchain_core.prompts.loading — that distinction matters because importing from the old module pulls in the deprecated, vulnerable code rather than the replacement.6

The migration maps as follows:

  • Serialization (previously .save(path)): use langchain_core.load.dumpd(prompt) to get a dict, or langchain_core.load.dumps(prompt) for a JSON string. Persist that output through whatever storage mechanism you prefer.
  • Deserialization (previously load_prompt(path)): use langchain_core.load.load(data) or langchain_core.load.loads(json_string) to reconstruct the prompt object.

The new APIs do not accept file paths. They operate on Python dicts and strings, pushing file I/O up to the calling code. That is intentional: the implicit file-read in the old API is what created the attack surface. If an application needs to load a prompt from a file, the calling code reads the file and passes the contents to loads() — keeping the I/O path visible and auditable in application code rather than buried in a framework helper.
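The division of labor can be sketched with a JSON stand-in for the serialized form (dumpd and loads come from langchain_core.load; the dict below is a placeholder, not the real serialization schema):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for dumpd(prompt): a plain dict describing the prompt.
serialized = {"template": "Summarize: {text}", "input_variables": ["text"]}

# Application code owns the file I/O explicitly.
path = Path(tempfile.mkdtemp()) / "summarize.json"
path.write_text(json.dumps(serialized))

# Later, application code reads the file and hands the *contents* over.
data = json.loads(path.read_text())
# prompt = loads(path.read_text())  # real call: langchain_core.load.loads
assert data == serialized
```

Because the path never enters the framework, any validation of it lives in the application, where it can be reviewed.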

The Bigger Picture: When ‘Inert’ Framework Utilities Become Attack Surface

CVE-2026-34070 fits a pattern that predates LLM frameworks: utility functions that handle config, serialization, or template loading get written as internal conveniences, ship as public API, and then accumulate attack surface as integrators expose them to user input in ways the original authors didn’t anticipate.

What’s notable in the LangChain case is how long load_prompt() avoided scrutiny. The function reads arbitrary files from the filesystem based on a caller-supplied path — a description that would trigger a security review in most web framework contexts. In an AI framework, prompt loaders are mentally categorized as config-time plumbing rather than request-handling code, which is exactly the framing that keeps them off security checklists.

The deprecation-driven remediation strategy also sets a precedent worth watching. Rather than fully patching the vulnerable API, LangChain used the security event to accelerate migration to a replacement. The result is that teams get a partial bridge — 1.2.22’s path check — rather than a complete repair, with the expectation that they finish the migration before 2.0.0 removes the floor. The symlink bypass makes the urgency concrete: the bridge has a gap, and the open issue has drawn no maintainer response to fill it.

Frequently Asked Questions

Does CVE-2026-34070 affect LangGraph or LangSmith projects that depend on langchain-core transitively?

Yes. The CVE is scoped to langchain-core, but langchain, LangGraph, and LangSmith all depend on langchain-core transitively. Any project whose resolved dependency tree includes langchain-core below 1.2.22 carries the vulnerability, even if the top-level package is LangGraph rather than LangChain itself. The vulnerable functions live in langchain_core.prompts.loading, so an application need not depend on langchain-core directly to be exposed.

Will vulnerability scanners flag langchain-core below 1.2.22 in CI/CD pipelines?

GitHub acted as the CVE Numbering Authority (CNA) for CVE-2026-34070 (CVSS 7.5), which means Dependabot, Snyk, and any scanner ingesting the CVE corpus will flag langchain-core <1.2.22 by version alone — regardless of whether load_prompt() is actually called. In pipelines with security gates that fail on known CVEs, this can block deployments even for projects that never touch the vulnerable code path.
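The version-only gate can be reproduced locally; this sketch reads installed metadata and compares release numbers naively (real scanners use full PEP 440 parsing):

```python
from importlib.metadata import PackageNotFoundError, version

def version_tuple(v: str) -> tuple:
    # Naive numeric compare; adequate for plain x.y.z release strings.
    return tuple(int(part) for part in v.split(".")[:3])

def langchain_core_below(threshold: str = "1.2.22"):
    """True if installed langchain-core is older than threshold,
    False if at or above it, None if it is not installed at all."""
    try:
        installed = version("langchain-core")
    except PackageNotFoundError:
        return None
    return version_tuple(installed) < version_tuple(threshold)
```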

How exploitable is the symlink bypass in containerized deployments?

Container images with immutable filesystems built at deploy time are harder to exploit, since planting a symlink requires runtime write access. But Kubernetes deployments using shared PersistentVolumeClaims, init containers that write prompt template files, or sidecar injection patterns that mount writable volumes into the template directory remain exposed. Development and staging environments, where filesystem write access is routine, are typically the weakest point.

How does LangChain’s response compare to how PyTorch handled the torch.load deserialization vulnerability?

Both cases involve a developer-convenience function (file-based loading) that became attack surface when integrators exposed it to user input. PyTorch added a weights_only parameter and eventually made safe loading the default, investing in hardening the existing API. LangChain chose deprecation over hardening: the vulnerable functions are being removed in 2.0.0 rather than fixed, leaving the acknowledged symlink bypass permanently unpatched in the deprecated code.

What’s the safest interim mitigation if we can’t migrate off load_prompt before 2.0.0 ships?

Wrap every load_prompt() call with application-level validation that resolves the real filesystem path using os.path.realpath() and checks that the resolved path stays within an allowlisted directory. This catches symlink escapes that LangChain’s own validator misses, since realpath() follows symlinks to their targets. Combined with restricting which codepaths can supply the file path argument, this reduces the window until full migration to dumpd/loads is complete.
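A minimal wrapper of that shape (the directory and function names are illustrative; wire it in front of every call site):

```python
import os

ALLOWED_DIR = "/app/prompt_templates"  # hypothetical allowlisted root

def safe_prompt_path(name: str) -> str:
    # realpath() resolves symlinks *before* the containment check, so a
    # link pointing outside the allowlist fails even when the name looks
    # clean to a lexical ".."/absolute-path validator.
    resolved = os.path.realpath(os.path.join(ALLOWED_DIR, name))
    base = os.path.realpath(ALLOWED_DIR) + os.sep
    if not resolved.startswith(base):
        raise ValueError(f"prompt path escapes {ALLOWED_DIR}: {resolved}")
    return resolved

# load_prompt(safe_prompt_path(user_supplied_name))  # wrap each call site
```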

Footnotes

  1. https://www.cve.org/CVERecord?id=CVE-2026-34070

  2. https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54

  3. https://github.com/langchain-ai/langchain/pull/36200

  4. https://github.com/langchain-ai/langchain/issues/36814

  5. https://github.com/langchain-ai/langchain/issues/36823

  6. https://github.com/langchain-ai/langchain/commit/27add913474e01e33bededf4096151130ba0d47c

Sources

  1. CVE-2026-34070 Record (CVE.org), primary source, accessed 2026-04-24
  2. GitHub Security Advisory GHSA-qh6h-p6c9-ff54: Path traversal in legacy load_prompt functions in langchain-core, vendor, accessed 2026-04-24
  3. GitHub PR #36200: fix(core): validate paths in prompt.save and load_prompt, deprecate methods, vendor, accessed 2026-04-24
  4. GitHub commit 27add913: Path validation and deprecation implementation, vendor, accessed 2026-04-24
  5. GitHub Issue #36814: Deprecated prompt loaders still follow relative symlinks outside the working directory, community, accessed 2026-04-24
