Nous Research shipped Hermes Agent v0.8.0 on April 8, 2026, with persistent memory and auto-skill capture folded into the core agent rather than left as integration exercises. For teams comparing frameworks, the practical question is no longer whether CrewAI’s team orchestration or AutoGen’s conversational patterns fit better, but whether the agent improves itself between runs without external plumbing. CrewAI ships built-in memory via LanceDB but keeps skills as static hand-crafted files; AutoGen is in maintenance mode and requires external vector stores for persistence. Hermes’s 13-day release cascade from v0.7.0 to v0.10.0 bakes these capabilities into the defaults.
The April Cascade: What Nous Research Shipped in 13 Days
Nous Research released Hermes Agent v0.8.0 on April 8, 2026, under the MIT license, with 209 merged pull requests (https://github.com/NousResearch/hermes-agent/blob/main/RELEASE_v0.8.0.md). The release introduced a Supermemory memory provider, background task auto-notifications, live model switching, and self-optimized GPT and Codex tool-use guidance (https://github.com/NousResearch/hermes-agent/blob/main/RELEASE_v0.8.0.md). This followed v0.7.0 on April 3, which added a pluggable memory provider interface, and preceded v0.9.0 on April 13 and v0.10.0 on April 16 (https://github.com/NousResearch/hermes-agent/blob/main/RELEASE_v0.7.0.md, https://github.com/NousResearch/hermes-agent/blob/main/RELEASE_v0.8.0.md). The cadence suggests the memory and skill architecture is still settling, not stabilized.
How Hermes Handles Memory: FTS5, Honcho, and the Plugin Architecture
Hermes’s memory stack spans three mechanisms. FTS5 session search feeds into LLM summarization for cross-session recall. Supermemory adds multi-container search. Honcho contributes dialectic user modeling with profile-scoped memory isolation (https://github.com/NousResearch/hermes-agent). The v0.7.0 plugin interface means these are swappable; v0.9.0 added a Hindsight memory plugin and centralized skills index (https://github.com/NousResearch/hermes-agent/blob/main/RELEASE_v0.7.0.md). The design treats memory as infrastructure the agent manages, not as a database the user configures.
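The FTS5 half of that stack is easy to picture with Python's standard-library sqlite3 module. This is a minimal sketch of full-text session search, assuming a flat (session_id, role, content) schema and BM25 ranking; Hermes's actual tables and query pipeline are not documented here.

```python
import sqlite3

# Minimal sketch of FTS5-backed session search. The schema below is an
# illustrative assumption, not Hermes's actual storage layout.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE messages USING fts5(session_id, role, content)")
conn.executemany(
    "INSERT INTO messages VALUES (?, ?, ?)",
    [
        ("s1", "user", "deploy the staging cluster with terraform"),
        ("s1", "assistant", "terraform apply completed for staging"),
        ("s2", "user", "summarize yesterday's incident report"),
    ],
)

def search_sessions(query: str, limit: int = 5):
    """Full-text search across past sessions, ranked by FTS5's built-in BM25."""
    return conn.execute(
        "SELECT session_id, content FROM messages WHERE messages MATCH ? "
        "ORDER BY rank LIMIT ?",
        (query, limit),
    ).fetchall()

print(search_sessions("terraform"))
```

In a cross-session recall pipeline, the hits returned here would then be passed to an LLM summarization step rather than surfaced raw.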
The Learning Loop: Auto-Skill Creation and Self-Improvement
Hermes describes itself as “the only agent with a built-in learning loop — it creates skills from experience, improves them during use” (https://github.com/NousResearch/hermes-agent). After complex tasks, it autonomously creates skills; periodic self-nudges persist knowledge back into memory (https://github.com/NousResearch/hermes-agent). This is a closed loop: experience generates skills, skills shape future behavior, and memory retains the state. The claim is drawn from the project’s README, not an independent benchmark, so treat it as a stated design goal rather than a verified performance metric.
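The shape of that closed loop can be sketched in a few lines. Every name below (Skill, SkillStore, capture, recall) is illustrative, not a Hermes API; the point is only the cycle the README describes: a task produces a skill, the skill is persisted, and a later session reuses it instead of rediscovering the steps.

```python
from dataclasses import dataclass

# Hypothetical sketch of a learning loop: experience -> skill -> memory -> reuse.
@dataclass
class Skill:
    name: str
    steps: list
    uses: int = 0  # how often later sessions have recalled this skill

class SkillStore:
    def __init__(self):
        self._skills = {}

    def capture(self, name: str, steps: list) -> Skill:
        """After a complex task, persist the steps that worked as a skill."""
        return self._skills.setdefault(name, Skill(name, steps))

    def recall(self, name: str):
        """On a later run, reuse (and count) a previously captured skill."""
        skill = self._skills.get(name)
        if skill:
            skill.uses += 1
        return skill

store = SkillStore()
store.capture("deploy-staging", ["run tests", "terraform plan", "terraform apply"])
# A later session recalls the captured skill instead of re-deriving the steps:
skill = store.recall("deploy-staging")
```

A real implementation would also mutate the skill's steps when a run improves on them, which is the "improves them during use" half of the README's claim.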
CrewAI vs. Hermes: Built-In Memory, But Static Skills
CrewAI offers unified memory through LanceDB, with adaptive recall depth and automatic consolidation (https://docs.crewai.com/concepts/skills). Where it diverges is skill handling. CrewAI skills are static hand-crafted files named SKILL.md; there is no auto-generation, self-improvement, or learning loop in the documented workflow (https://docs.crewai.com/concepts/skills). A CrewAI agent can remember what happened, but it does not appear to refine its own capabilities between runs without user intervention.
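Concretely, a static skill file is something a human writes and maintains. The sketch below is illustrative only; the exact frontmatter fields and layout are defined by the CrewAI documentation cited above, not verified here.

```markdown
---
name: summarize-incident
description: Summarize an incident report into root cause and action items
---

# Summarize Incident

1. Read the full incident report.
2. Extract the root cause and a timeline of events.
3. Output three bullet-point action items.
```

If the agent discovers a better procedure mid-run, nothing in the documented workflow writes that improvement back into this file; a human has to.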
AutoGen vs. Hermes: External Plumbing and Maintenance Mode
AutoGen’s status complicates any comparison. As of 2026, the project “will not receive new features or enhancements and is community managed going forward,” with new users directed to the Microsoft Agent Framework (https://github.com/microsoft/autogen). For those still using it, persistent memory requires external configuration of ChromaDB, Redis, or Mem0, including embedding functions, running instances, and API keys (https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/memory.html). The documentation shows no skill capture or self-improvement features (https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/memory.html). The framework is effectively frozen while Hermes is iterating weekly.
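To make the "external plumbing" concrete, the sketch below shows the configuration surface a user must supply by hand: an embedding function, a store client, and collection settings. The class and method names are illustrative stand-ins, not AutoGen's actual memory API (which wires up ChromaDB, Redis, or Mem0 per the docs above), and the in-process list stands in for a running external store.

```python
from typing import Callable, List

# Illustration of the wiring burden: the user supplies the embedding function,
# the collection name, and (in reality) a running external store plus API keys.
class ExternalVectorMemory:
    def __init__(self, embed: Callable[[str], List[float]], collection: str):
        self.embed = embed            # user-supplied embedding function
        self.collection = collection  # collection in an external vector store
        self._items = []              # in-process stand-in for the remote store

    def add(self, text: str):
        self._items.append((text, self.embed(text)))

    def query(self, text: str, k: int = 3):
        q = self.embed(text)
        # Rank stored items by dot-product similarity to the query embedding.
        scored = sorted(
            self._items,
            key=lambda item: -sum(a * b for a, b in zip(q, item[1])),
        )
        return [text for text, _ in scored[:k]]

# Toy "embedding": a character-frequency vector, just to make the sketch run.
def toy_embed(text: str) -> List[float]:
    return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

mem = ExternalVectorMemory(embed=toy_embed, collection="agent-memory")
mem.add("redis connection settings for the staging cluster")
mem.add("weekly standup notes")
print(mem.query("redis staging", k=1))
```

Every piece above is something Hermes's defaults claim to absorb; with AutoGen, each is a decision (and a dependency) the team owns.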
What “Self-Improving Agent” Actually Means for Framework Choice
The shift is subtle but consequential. CrewAI and AutoGen frame memory and skills as integration problems: pick a vector store, wire it up, define your skills in markdown. Hermes frames them as agent primitives: memory is a default, skills are outputs of execution. The burden of comparison moves from orchestration style — team-based versus conversational — to whether the agent’s behavior changes between sessions without explicit re-engineering. For teams that expect agents to accumulate competence over time, the cost of a framework without a closed learning loop is not missing features; it is ongoing plumbing work to approximate what ships by default elsewhere.
The 13-day release window from v0.7.0 to v0.10.0 also signals risk. APIs may shift, memory plugins may be renamed, and the “learning loop” may behave differently in practice than in README prose. But the direction is clear: Nous Research is betting that self-improvement belongs in the agent core, not the user layer.
Frequently Asked Questions
What infrastructure do teams need to add for Hermes Agent’s memory and skill capture?
None. Hermes bakes persistent memory and auto-skill capture into the core agent as defaults, so teams do not need to provision external vector stores, embedding functions, or custom pipelines.
How does CrewAI’s skill handling differ from Hermes Agent’s?
CrewAI ships built-in memory via LanceDB but keeps skills as static hand-crafted SKILL.md files with no auto-generation or learning loop. Hermes autonomously creates skills from experience and refines them during use.
Should new projects still consider AutoGen alongside Hermes Agent?
No. AutoGen is in maintenance mode as of 2026 and will not receive new features, with new users directed to the Microsoft Agent Framework. Persistent memory requires external configuration, and the framework lacks skill capture and self-improvement features.
How stable is Hermes Agent given its rapid release cadence?
The 13-day window from v0.7.0 to v0.10.0 suggests the architecture is still settling. APIs may shift and plugins may be renamed, so teams should expect early-stage volatility despite the clear direction toward core self-improvement.