
NousResearch Launches Hermes Agent: A Self-Improving AI That Builds Skills from Every Conversation

Michael Ouroumis · 3 min read

Most AI assistants have a fundamental limitation: every conversation starts from scratch. They don't remember you, don't learn your preferences, and don't get meaningfully better at the tasks you actually need them to do. NousResearch thinks it has a different approach.

An Agent That Learns on the Job

Hermes Agent, released as an open-source project on GitHub this week, is built around what NousResearch calls a "learning loop" — a system that synthesizes reusable skills from completed tasks, refines those skills as they're used, and maintains persistent memory across sessions.

The description on the project's GitHub page is direct: "It's the only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions."

This is a materially different architecture from the stateless agent model that dominates the current market. Most AI coding assistants, personal assistants, and enterprise AI tools treat each session as isolated. Hermes is designed to accumulate competence over time, shaped by the working patterns of an individual user.
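
The article doesn't reproduce Hermes Agent's internals, but the loop it describes maps onto a simple cycle around each task: do the work, distill what was learned, persist it, and start the next session from that state. A minimal sketch, assuming a hypothetical on-disk state file and stubbed agent work rather than the project's actual code:

# Illustrative sketch of the described learning loop. The file name, function
# names, and stubbed "agent work" are assumptions, not Hermes Agent's real API.
import json
from pathlib import Path

STATE = Path("agent_state.json")  # persists between sessions instead of resetting


def load_state() -> dict:
    if STATE.exists():
        return json.loads(STATE.read_text())
    return {"skills": [], "memory": []}


def run_task(task: str) -> str:
    state = load_state()
    # Reuse anything learned in earlier sessions before doing new work.
    prior = [s for s in state["skills"] if s["name"] and s["name"] in task]
    result = f"done: {task} (reused {len(prior)} prior skills)"  # stand-in for real agent work
    # Synthesize a skill and a memory note from the finished task, then persist,
    # so the next session starts ahead rather than from scratch.
    state["skills"].append({"name": task.split()[0] if task else "", "recipe": result})
    state["memory"].append({"task": task, "summary": result})
    STATE.write_text(json.dumps(state, indent=2))
    return result


print(run_task("summarize the new retrieval paper"))

Run twice, the second call starts with the skills and memory entries left behind by the first, which is exactly the property a stateless agent lacks.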

Why Skill Accumulation Matters

The practical implication of skill accumulation is that Hermes should become progressively more efficient at tasks the user repeats. If you frequently ask it to summarize research papers in a specific format, or to debug a particular class of code error, the agent builds an increasingly refined skill for that task rather than approaching it fresh each time.
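
One way to picture that refinement is a skill record that carries its own usage history and gets amended each time it is applied. This is a hedged illustration of the idea, not how Hermes Agent actually represents skills:

from dataclasses import dataclass, field


@dataclass
class Skill:
    """A reusable recipe for a repeated task, refined with each use."""
    trigger: str                      # e.g. "summarize research paper"
    instructions: str                 # current best version of the recipe
    uses: int = 0
    notes: list = field(default_factory=list)

    def apply(self, feedback: str = "") -> str:
        self.uses += 1
        if feedback:                  # fold feedback into the recipe instead of starting fresh
            self.notes.append(feedback)
            self.instructions += f"\n- {feedback}"
        return self.instructions


# The same summarization skill sharpens over repeated use.
summarize = Skill("summarize research paper", "Extract claims, methods, results.")
summarize.apply("User prefers bullet points under ten words.")
summarize.apply("Always name the dataset used.")
print(summarize.uses, summarize.instructions, sep="\n")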

The memory search capability is equally significant. Rather than relying solely on the current conversation context, Hermes can query its own history — surfacing relevant context from past sessions that might inform the current task. It's closer to how a long-term collaborator works than how a chatbot works.
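
A minimal version of that kind of memory search is a ranked lookup over notes saved from past sessions. The storage format and keyword scoring below are assumptions for illustration; a production system could just as easily use embeddings:

# Hedged sketch: keyword-overlap search over notes saved from past sessions.
past_sessions = [
    {"id": 1, "note": "Debugged a flaky pytest fixture caused by shared state."},
    {"id": 2, "note": "Summarized a transformer paper into a three-bullet brief."},
    {"id": 3, "note": "Set up a nightly backup job for the research database."},
]


def search_memory(query: str, sessions: list, top_k: int = 2) -> list:
    """Return the past-session notes that share the most words with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(s["note"].lower().split())), s) for s in sessions]
    return [s for score, s in sorted(scored, key=lambda x: -x[0]) if score > 0][:top_k]


# Surface relevant prior context before tackling the current task.
for hit in search_memory("summarize a new transformer paper", past_sessions):
    print(hit["note"])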

Part of a Larger Research Agenda

NousResearch has a track record in the open-source AI community for producing high-quality instruction-tuned models. The Hermes model series — built on top of base models like Llama — is widely used for its strong reasoning and instruction-following performance.

Hermes Agent extends this work into the agentic space, applying NousResearch's model expertise to the problem of continuous improvement and persistent context. The project's rapid traction on GitHub — it trended on its launch day — suggests the problem it's solving resonates with developers who have run into the limitations of stateless agents in real workflows.

The Broader Trend

Hermes Agent's launch coincides with a broader shift in how the AI industry thinks about the agentic future. Microsoft recently launched Agent-Lightning, a training framework for AI agents. Anthropic's leaked KAIROS feature described a similar vision of always-on background agent operation. The convergence suggests a consensus is forming: the next meaningful upgrade to AI assistants won't come from bigger models alone, but from agents that accumulate context and capability over time.

Whether Hermes delivers on that promise in practice will depend on how well the skill synthesis system generalizes, and whether the memory architecture scales gracefully as conversation history grows. But as an existence proof of a different approach to agent design, it's a meaningful contribution — and the open-source release means the broader community can examine and extend it.

