LangGraph 1.1.10, shipped April 27, 2026, extends ToolNode so tools can return list[Command | ToolMessage], letting a single tool call emit both a state-transition Command and a chat ToolMessage in the same response. Pydantic AI’s tool model has no equivalent; tools return plain Python values, and graph navigation happens through return types and direct state mutation, not command objects. The gap was already there; this release makes it structural.
What Changed in LangGraph 1.1.10
Prior to PR #75962, ToolNode was the outlier in LangGraph’s own model: every other node type could already return multiple Command objects, but tools were capped at a single Command or ToolMessage per invocation. The 1.1.10 release closes that. Tools can now return a list whose members are any mix of Command and ToolMessage objects, aligning tool behavior with the rest of the graph.
How ToolNode’s New Return Type Works
The implementation adds _validate_tool_command_list, which enforces exactly one constraint on any list a tool returns: there must be precisely one terminating ToolMessage whose tool_call_id matches the originating call. Return a list with no ToolMessage, or two of them, and you get _MissingToolMessageError.
The constraint is meaningful. ToolMessage is how the LLM’s tool call gets a visible response in the conversation thread. Without exactly one, the message history is either incomplete or ambiguous. The Command members in the list are unconstrained by count; they handle state transitions and node routing without needing to close the tool call.
The practical shape: one Command (or several) for graph state work, one ToolMessage for the LLM to consume. That dual emission in a single return was impossible from inside a ToolNode tool before this release.
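The shape is easy to sketch with stand-in types. In real code Command comes from langgraph.types and ToolMessage from langchain_core.messages; the validator below mimics the constraint described above, not LangGraph’s actual _validate_tool_command_list, and the inventory tool is purely illustrative:

```python
from dataclasses import dataclass, field

# Stand-ins for langgraph.types.Command and langchain_core.messages.ToolMessage.
@dataclass
class Command:
    update: dict = field(default_factory=dict)

@dataclass
class ToolMessage:
    content: str
    tool_call_id: str

class MissingToolMessageError(Exception):
    """Stand-in for LangGraph's _MissingToolMessageError."""

def validate_tool_command_list(items: list, call_id: str) -> None:
    # Exactly one ToolMessage must close the originating tool call;
    # Command members are unconstrained by count.
    closing = [m for m in items
               if isinstance(m, ToolMessage) and m.tool_call_id == call_id]
    if len(closing) != 1:
        raise MissingToolMessageError(
            f"expected 1 closing ToolMessage for {call_id!r}, got {len(closing)}")

def reserve_inventory(sku: str, call_id: str) -> list:
    """Tool-style function doing the 1.1.10 dual emit: state work via
    Commands, plus the single ToolMessage the LLM will see."""
    return [
        Command(update={"reserved": sku}),         # graph state transition
        Command(update={"route": "fulfillment"}),  # several Commands are fine
        ToolMessage(content=f"reserved {sku}", tool_call_id=call_id),
    ]

validate_tool_command_list(reserve_inventory("sku-1", "call_1"), "call_1")
```

Two Commands and one ToolMessage pass validation; a list with zero or two matching ToolMessages would raise.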
Pydantic AI’s Tool Model: Values, Not Commands
Pydantic AI tools return plain Python values. There is no Command type, no graph navigation object, no equivalent to ToolNode’s new list contract. The framework validates the return through Pydantic’s type system and hands it back to the agent loop.
Pydantic AI does have a graph system. In pydantic-graph, node transitions are determined by the return type of a node’s run() method, and state flows through ctx.state mutations on GraphRunContext. Navigation is expressed as Python return types; state mutation is imperative. There are no declarative command objects.
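The pattern can be sketched structurally without pydantic-graph itself (this is not its real API, which uses async run() on BaseNode subclasses; the node names, driver loop, and counter state here are invented for illustration):

```python
from dataclasses import dataclass, field

# Structural sketch of the pydantic-graph pattern: the run() return
# value *is* the edge, and state is mutated directly on a context.

@dataclass
class State:
    total: int = 0

@dataclass
class Context:
    state: State = field(default_factory=State)

@dataclass
class End:
    value: int

@dataclass
class Finish:
    def run(self, ctx: Context) -> End:
        return End(ctx.state.total)

@dataclass
class AddOne:
    def run(self, ctx: Context) -> Finish:
        ctx.state.total += 1   # imperative state mutation, no Command object
        return Finish()        # the return value determines the next node

def run_graph(start, ctx: Context) -> int:
    node = start
    while not isinstance(node, End):
        node = node.run(ctx)
    return node.value

print(run_graph(AddOne(), Context()))  # 1
```

Routing never appears as a first-class object; it is whatever run() returns.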
Neither design is wrong. LangGraph treats graph navigation as an explicit artifact that tools and nodes emit. Pydantic AI embeds navigation in the Python type system and keeps state mutations as direct attribute writes.
The Hybrid Stack Cost
The friction surfaces when you run Pydantic AI as the tool layer inside a LangGraph orchestration. ZenML positions itself as the production operationalization layer around agent workflows built on LangGraph, recommending LangGraph for agent logic and ZenML for pipelines, stacks, and deployment.
When a LangGraph-native tool returns list[Command | ToolMessage], ToolNode handles the routing automatically. When a Pydantic AI tool wrapped inside a LangGraph node returns a plain Python value, nothing handles the routing automatically. The team has to build the translation layer: intercept the tool output, decide whether it implies a state transition, construct the appropriate Command, and pass it up. LangGraph-native tool authors don’t write that code.
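A minimal sketch of that translation layer, using stand-in Command and ToolMessage types and an invented balance-lookup tool (the routing rule and field names are assumptions, not any framework’s API):

```python
from dataclasses import dataclass

# Stand-ins for langgraph.types.Command and langchain_core.messages.ToolMessage.
@dataclass
class Command:
    update: dict

@dataclass
class ToolMessage:
    content: str
    tool_call_id: str

def lookup_balance(account: str) -> dict:
    """A Pydantic-AI-style tool: plain value in, plain value out."""
    return {"account": account, "balance": 0, "frozen": True}

def bridge(result: dict, tool_call_id: str) -> list:
    """Hypothetical translation layer: inspect the plain return, decide
    whether it implies a state transition, and emit Command + ToolMessage."""
    out = []
    if result.get("frozen"):
        # A frozen account reroutes the graph to an escalation node; a
        # LangGraph-native tool would emit this Command directly itself.
        out.append(Command(update={"route": "escalate",
                                   "account": result["account"]}))
    out.append(ToolMessage(content=str(result), tool_call_id=tool_call_id))
    return out

emitted = bridge(lookup_balance("acct-42"), tool_call_id="call_7")
```

Every routing-relevant tool needs its own branch of this mapping, which is exactly the code LangGraph-native authors never write.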
Vstorm, an AI engineering consultancy listing both LangChain and Pydantic as technology partners with 30+ deployed multi-agent systems, operates in exactly this space. Whether their implementations reflect a clean integration or a managed translation layer is not something their public materials resolve.
Before 1.1.10, the dual-emit limitation was symmetric: neither LangGraph-native tools nor Pydantic AI wrappers could send a Command and a ToolMessage in a single return. Now it’s asymmetric. The cost of running Pydantic AI as the tool layer in a Command-aware LangGraph graph increased by exactly the amount of tooling required to replicate what list[Command | ToolMessage] gives LangGraph-native tools for free.
When to Choose Which Pattern
| | LangGraph-native tools | Pydantic AI tools |
|---|---|---|
| Return type | list[Command \| ToolMessage] (1.1.10+) or a single object | Plain Python values |
| Graph routing | Explicit Command objects in return list | run() return type determines edges |
| State mutation | Via Command members | Direct ctx.state writes on GraphRunContext |
| Type validation | Not at tool boundary | Pydantic validation on return |
| Dual emit (routing + message) | Native, one return | No equivalent; requires wrapper |
If your tools are purely functional (inputs in, value out, no graph routing implications), Pydantic AI’s model is cleaner. Type-validated returns, no ceremony around Command objects, and Pydantic validation at the boundary. The 1.1.10 change doesn’t affect this case.
If your tools need to affect graph routing (sending the workflow to a different node, updating state in ways that change subsequent routing), LangGraph-native tool authoring is now materially simpler. Return the list; ToolNode handles the rest.
The awkward middle is teams that want Pydantic AI’s validation ergonomics and LangGraph’s orchestration flexibility. The 1.1.10 release doesn’t close that gap; it widens it slightly, since LangGraph-native tools gained a capability that wrapped tools don’t inherit. Whether that justifies standardizing on one framework depends on how often tools cross the routing-versus-value line. If most tools are pure functions and a few need routing, the wrapper cost may be acceptable. If routing is common, building tools natively in LangGraph and using Pydantic for validation logic within those tools is the path with less friction going forward.
Frequently Asked Questions
What runtime behavior shows up if langchain-core is pinned below 1.3.1?
The list passthrough depends on PR #36963’s modification to BaseTool._format_output, which stops coercing list[ToolOutputMixin] into a single wrapped output. Without it, a tool returning list[Command | ToolMessage] has its list flattened into an opaque object—Command members are never extracted for routing, and the graph proceeds as if 1.1.10 never shipped. No error is raised, which makes this a silent behavioral regression rather than a version mismatch, particularly risky in Docker images or lockfile-pinned CI pipelines that pull the newer langgraph but hold langchain-core back.
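One way to catch the mismatch early is a hypothetical startup guard (the guard itself is an assumption, not part of either library; the 1.3.1 floor comes from the pin discussed above, and real code should prefer packaging.version over this minimal parse):

```python
from importlib import metadata

def parse(version: str) -> tuple[int, ...]:
    # Minimal numeric parse; breaks on pre-release suffixes like "1.3.1rc1".
    return tuple(int(p) for p in version.split(".")[:3])

def assert_list_passthrough(min_core: str = "1.3.1") -> None:
    """Hypothetical guard: fail loudly at startup instead of letting
    list[Command | ToolMessage] returns be silently flattened."""
    try:
        installed = metadata.version("langchain-core")
    except metadata.PackageNotFoundError:
        return  # nothing to check in this environment
    if parse(installed) < parse(min_core):
        raise RuntimeError(
            f"langchain-core {installed} < {min_core}: tool list returns "
            "will be coerced silently; pin langchain-core>={min_core}")
```

Turning a silent behavioral regression into a loud deployment failure is cheap insurance for lockfile-pinned CI images.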
Can a single tool call report multiple distinct results back to the LLM?
No. _validate_tool_command_list permits exactly one terminating ToolMessage per returned list, so a tool that fans out to several sub-operations—parallel API lookups, multi-step calculations—must consolidate findings into that single message. The Command members are invisible to the model; they handle state-side bookkeeping only. The practical workaround is issuing separate tool calls for each result the LLM needs to see individually, which trades latency for per-sub-operation visibility.
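The consolidation step can be sketched with a stand-in ToolMessage and an invented ticker lookup (real fan-out would likely use asyncio.gather; the payload shape is an assumption):

```python
from dataclasses import dataclass
import json

@dataclass
class ToolMessage:  # stand-in for langchain_core.messages.ToolMessage
    content: str
    tool_call_id: str

def fan_out_lookup(tickers: list[str]) -> list[dict]:
    # Illustrative sub-operations; imagine parallel API calls here.
    return [{"ticker": t, "price": 100.0 + i} for i, t in enumerate(tickers)]

def consolidate(results: list[dict], tool_call_id: str) -> ToolMessage:
    """All sub-results must share the single terminating ToolMessage,
    so pack them into one structured payload the LLM can read."""
    return ToolMessage(content=json.dumps(results), tool_call_id=tool_call_id)

msg = consolidate(fan_out_lookup(["AAA", "BBB"]), "call_9")
```

A structured payload like JSON keeps the sub-results distinguishable even though they arrive in one message.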
Which framework gives stronger static guarantees about graph routing?
pydantic-graph’s node transitions are expressed as the return type of run(), so mypy or pyright can verify which node types are reachable from which predecessors before the graph runs. LangGraph’s Command objects are runtime artifacts resolved by the engine; type checkers can confirm a Command is returned but not whether its target node actually exists. The tradeoff is dynamic routing flexibility in LangGraph versus catch-at-CI-time edge verification in pydantic-graph.
Does the list[Command | ToolMessage] contract apply to tools called outside ToolNode?
No. The validation and routing are handled by _validate_tool_command_list, which lives inside ToolNode’s execution path. Tools invoked directly by an agent loop or a custom node—bypassing ToolNode—won’t get the list unpacking or the single-ToolMessage enforcement. Their returns are processed by whatever caller invoked them, which may not handle a mixed list at all, so the capability is strictly scoped to the ToolNode-mediated tool-call pattern.