v7.1.10 follow-up to v7.1.8: autonomy-mandate vocabulary expansion + session flags + compact hooks + checkpoint_policy module, plus a smoke-artifact release verifier and post-tool reminder refinements.

Shared Brain
for AI Agents

A libre/open-source cognitive runtime that gives Claude Code, Codex, Claude Desktop, and other MCP clients the same local brain. Persistent memory, multimodal memory refs, pre-compaction auto-flush, a public claim wiki, readable memory export, inspectable user-state adaptation, enforceable protocol discipline, durable workflow execution, configurable automation, and recovery-aware self-healing on top of 150+ MCP tools.

Free / Libre Open Source (AGPL-3.0) 100% Local — No Cloud
Want the closed-source product experience instead of wiring the open-source Brain yourself? NEXO Desktop is the commercial companion product built on top of NEXO Brain. Ask about NEXO Desktop at info@wazion.com.
$ npx nexo-brain
150+
MCP Tools
23
Tool Categories
768
Vector Dimensions
13
Core Jobs
0
Data Sent Externally

Free/libre software is part of the product, not a footnote

NEXO is not a hosted memory SaaS with a thin MCP layer. It is AGPL software you can inspect, fork, run locally, adapt to your stack, and contribute back to. That matters for trust, for longevity, and for teams that do not want their agent memory trapped behind a vendor wall.

Inspectable by default

The runtime, doctor checks, workflows, prompts, and public Evolution loop all live in the open. You can audit what the system is doing before you trust it with your work.

Forkable and extensible

Plugins, scripts, prompts, skills, and policies are all meant to be adapted. If your workflow is different, NEXO is supposed to bend with it instead of hiding the real behavior.

No cloud lock-in

Your memories, workflows, vectors, and operational history stay on your machine. If you want to stop using NEXO, your data is still local and the system remains understandable.

Five demos that explain NEXO faster than a feature list

If someone asks "what does this actually feel like?", these are the shortest honest answers: install, shared brain across clients, decision-to-outcome loops, runtime discipline beyond memory, and the public Evolution loop.

One runtime, three decisions, much less fragmentation

NEXO is not just an MCP endpoint. The product flow is: install the runtime once, connect the clients you actually use, and let the same local brain drive both interactive sessions and background automation.

1

Install the runtime

npx nexo-brain sets up the local runtime, doctor/update commands, background schedules, embeddings, and the shared-brain MCP layer. This is the primary path.

2

Connect your clients

Claude Code, Codex, and Claude Desktop can all point to the same local NEXO brain. You choose which client nexo chat opens and which backend runs automation.

3

Keep memory and automation aligned

The value is not just persistence. It is one operator memory, one runtime truth, one recovery model, and one set of cognitive tools across your daily workflow.

Connect NEXO Brain the way your stack needs

The primary path is npx nexo-brain: it installs the runtime, configures the shared-brain MCP layer, and wires Claude Code, Codex, and Claude Desktop when available. OpenClaw and ClawHub remain supported alternate entry points built on the same cognitive engine.

Runtime + MCP Server

The main NEXO experience. Install with npx nexo-brain and get the local runtime, shared-brain MCP server, client sync for Claude Code / Codex / Claude Desktop, and the full automation layer.

Primary path

OpenClaw Memory System

Swap OpenClaw's default memory for NEXO's cognitive runtime while staying inside the OpenClaw lifecycle and tool registry. Best when OpenClaw is already your main operating surface.

OpenClaw

ClawHub Skill

Marketplace-friendly install path for people who already browse skills via ClawHub. It is the same NEXO engine underneath, just packaged for faster discovery and a one-click install workflow.

Marketplace

NEXO Desktop is the managed product layer

NEXO Brain stays libre and open-source. NEXO Desktop is the separate closed-source Mac product built on top of that runtime for operators who want the guided setup, managed updates, and product UX without living in Terminal.

Product UX, not a wrapper

Desktop is for teams who want onboarding, settings, email routing, automations, backups, and updates inside the app. It consumes the same Brain contracts, but presents them as a closed product surface instead of a repo workflow.

Managed experience

Same Brain, different contract

Desktop does not replace the open runtime. It packages and manages it for non-technical operators while the public Brain package remains installable on its own via npx nexo-brain.

Open core + closed product

Commercial access

If you want the Desktop product for a business deployment, contact info@wazion.com and ask for NEXO Desktop. Public downloads and source in this repo cover NEXO Brain only.

Commercial inquiry

Why Claude Code is still the default recommendation

NEXO is now multi-client by design, but the three clients are not identical surfaces. The honest recommendation today is still Claude Code first, Codex second, Claude Desktop for shared-brain access.

Codex

Now a real first-class option for terminal sessions and headless automation. It shares the same brain, model sync, and managed bootstrap, but still has fewer native hook surfaces than Claude Code.

Supported Terminal Backend

Claude Desktop

Useful as a shared-brain client through MCP, but not the main terminal/automation surface. Best understood as another entry point into the same memory system, not the primary runtime operator.

Shared brain MCP

Everything an AI agent needs to think

Not just storage — a complete cognitive architecture that learns, forgets naturally, detects conflicts, and prevents repeated mistakes.

Atkinson-Shiffrin Memory

Three-store model: Sensory Register captures raw input, STM holds working context with rehearsal, LTM consolidates with semantic vectors. Just like human cognition.
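For readers who want the mechanics, here is a minimal Python sketch of the three-store flow. The class, capacity, and consolidation threshold are illustrative only, not NEXO's real internals:

```python
# A toy Atkinson-Shiffrin pipeline: sensory -> STM (with rehearsal) -> LTM.
# Names and thresholds are invented for illustration.
class ThreeStoreMemory:
    def __init__(self, stm_capacity=7):
        self.capacity = stm_capacity
        self.sensory = []   # raw input, very short-lived
        self.stm = []       # working context with rehearsal counts
        self.ltm = []       # consolidated long-term store

    def perceive(self, content):
        self.sensory.append(content)

    def attend(self):
        # Attention promotes sensory input into bounded working memory.
        while self.sensory:
            self.stm.append({"content": self.sensory.pop(0), "rehearsals": 0})
        self.stm = self.stm[-self.capacity:]

    def rehearse(self, content):
        for item in self.stm:
            if item["content"] == content:
                item["rehearsals"] += 1

    def consolidate(self, min_rehearsals=2):
        # Sufficiently rehearsed items move into long-term memory.
        keep = []
        for item in self.stm:
            if item["rehearsals"] >= min_rehearsals:
                self.ltm.append(item["content"])
            else:
                keep.append(item)
        self.stm = keep

m = ThreeStoreMemory()
m.perceive("user prefers uv over pip")
m.attend()
m.rehearse("user prefers uv over pip")
m.rehearse("user prefers uv over pip")
m.consolidate()
```

Items that never get rehearsed simply stay in (and eventually fall out of) working memory, which is the whole point of the model.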

Semantic RAG

Vector search with fastembed (BAAI/bge-base-en-v1.5). Query across all memory stores with cosine similarity. Retrieve what matters, not just what matches.
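The core retrieval step reduces to cosine similarity over embedding vectors. A hedged sketch, using toy 3-dimensional vectors in place of the real 768-dim bge embeddings:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, top_k=2):
    # Rank stored memories by similarity to the query and keep top_k.
    scored = sorted(((cosine(query_vec, v), text) for text, v in store), reverse=True)
    return [text for _, text in scored[:top_k]]

# Invented memories with hand-made toy vectors.
store = [
    ("deploy uses blue-green", [0.9, 0.1, 0.0]),
    ("cat walked on keyboard", [0.0, 0.2, 0.9]),
    ("rollback via last tag",  [0.8, 0.3, 0.1]),
]
print(retrieve([1.0, 0.2, 0.0], store))
```

Conceptually related memories score high even without exact keyword overlap, which is what "retrieve what matters, not just what matches" means in practice.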

Metacognitive Guard

Pre-edit checks that inject known errors, real schemas, and blocking rules before your agent writes code. Prevents repeating past mistakes.

Ebbinghaus Decay

Memories naturally fade over time following the Ebbinghaus forgetting curve. Rehearsal strengthens important memories. No manual cleanup needed.
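The curve itself is simple: retention R = e^(-t/S), where stability S grows with rehearsal. A sketch with made-up stability values:

```python
import math

def retention(hours_elapsed, stability):
    # Ebbinghaus forgetting curve: R = e^(-t/S). Higher stability S
    # (earned through rehearsal) means slower forgetting.
    return math.exp(-hours_elapsed / stability)

weak = retention(24, stability=12)    # rarely accessed memory
strong = retention(24, stability=96)  # frequently rehearsed memory
assert strong > weak                  # same elapsed time, less forgetting
```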

Trust Score

0-100 alignment index that adjusts based on corrections, successes, and proactive actions. Controls internal rigor: low trust = more paranoid checks.
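A rough sketch of how such an index could move; the event deltas below are illustrative, not NEXO's actual weights:

```python
def adjust_trust(score, event):
    # Hypothetical deltas; NEXO's real weighting is internal.
    deltas = {"correction": -5, "success": +2, "proactive_win": +3}
    return max(0, min(100, score + deltas[event]))  # clamp to 0-100

score = adjust_trust(50, "correction")  # a correction lowers trust
paranoid_mode = score < 40              # low trust -> stricter checks
```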

Cognitive Dissonance

Detects when new instructions contradict existing strong memories. Surfaces the conflict and asks for resolution instead of silently overwriting.

Episodic Memory

Change logs, decision records with alternatives and reasoning, session diaries with mental state continuity. Full audit trail of what happened and why.

Plugin System

Hot-reload plugins at runtime. Add new tool categories without restarting the server. Ship your own extensions as Python files.

100% Local

All data stored in local SQLite databases. Vectors computed on-device with ONNX Runtime. Nothing ever leaves your machine. Zero cloud dependencies.

Personality Calibration

5-question onboarding that creates a unique agent personality. Your agent adopts a consistent voice, tone, and behavioral style from day one.

Operational Codex

23 non-negotiable principles every NEXO agent follows. From memory hygiene to error prevention, the codex defines what it means to be a reliable co-operator. See the wiki.

Docker Support

Run NEXO Brain in a container with two commands. Mount your data directory, and the cognitive engine runs isolated and portable across any environment.

Multi-Query Decomposition

Complex questions are automatically split into sub-queries. Each component is retrieved independently, then fused for a higher-quality answer. Improves recall on multi-faceted prompts.

Intelligent Chunking

Adaptive chunking strategy that respects sentence and paragraph boundaries. Produces semantically coherent chunks instead of arbitrary token splits, reducing retrieval noise.
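One way to implement boundary-respecting chunking, as a sketch (the real strategy is adaptive and more involved):

```python
import re

def chunk(text, max_chars=80):
    # Split on sentence boundaries, then pack whole sentences into
    # chunks so no sentence is ever cut in half.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

print(chunk("Alpha beta. Gamma delta epsilon. Zeta.", max_chars=20))
```

Each chunk ends on a sentence boundary, so every chunk stays a semantically coherent unit for embedding.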

Cross-Encoder Reranking

After initial vector retrieval, a cross-encoder model rescores candidates for precision. The top-k results are reordered by true semantic relevance before being returned to the agent.

Session Summaries

Automatic end-of-session summarization that distills key decisions, errors, and follow-ups into a compact diary entry. The next session starts with full context, not a cold slate.

Hybrid Search

Combined vector + BM25 keyword search via SQLite FTS5. Best of both worlds: semantic understanding for concept-level retrieval plus exact keyword matching for precise lookups.
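One standard way to fuse the two result lists is Reciprocal Rank Fusion. Whether NEXO uses RRF or another weighting is an implementation detail, so treat this as a sketch of the idea:

```python
def rrf(rankings, k=60):
    # Reciprocal Rank Fusion: each list contributes 1/(k + rank) per
    # document; documents ranked well in both lists rise to the top.
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["m1", "m2", "m3"]   # semantic (cosine) order
keyword_hits = ["m3", "m1", "m4"]  # BM25 / FTS5 order
print(rrf([vector_hits, keyword_hits]))
```

"m1" wins because it ranks well in both lists, even though neither list put it and "m3" in the same order.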

768-dim Embeddings

Upgraded from 384 to 768 dimensions (BAAI/bge-base-en-v1.5). Doubled semantic precision for richer memory representations — still CPU-only, no GPU required.

Adaptive Decay

Redundancy-aware Ebbinghaus forgetting curve. Unique memories decay 4x slower than duplicates — no information loss in sparse stores, automatic cleanup in dense ones.

Temporal Indexing

Automatic date extraction and temporal query boosting. "When" questions get smarter filtering — memories are ranked not just by relevance but by recency when context demands it.

Auto-Migration

Transparent 384→768 embedding upgrade on first startup. All existing memories are re-embedded automatically with zero user action required — no data loss, no manual steps.

Adaptive Learned Weights

Signal weights learn from real feedback via Ridge regression. 2-week shadow mode validates new weights before promoting. Weight momentum and automatic rollback keep the system stable.
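The ridge step can be sketched in one dimension; the real system learns multiple signal weights and validates them in shadow mode before promotion:

```python
def ridge_1d(xs, ys, lam=1.0):
    # Closed-form ridge for a single signal weight:
    #   w = sum(x*y) / (sum(x^2) + lambda)
    # The lambda term shrinks noisy weights toward zero, which is
    # what keeps updates from overreacting to sparse feedback.
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

# Hypothetical feedback pairs: (signal strength, observed outcome).
xs = [1.0, 2.0, 3.0]
ys = [1.1, 1.9, 3.2]
w = ridge_1d(xs, ys)  # slightly below the unregularized fit
```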

Somatic Markers

Pain memory per file and area. Guard warns on HIGH RISK (>0.5) and CRITICAL (>0.8). Validated recovery on clean checks — the system forgets pain when the problem is fixed.
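The HIGH RISK and CRITICAL thresholds come straight from the description above; the error increment and recovery decay below are hypothetical:

```python
def risk_label(pain):
    # Thresholds from the feature card: >0.5 HIGH RISK, >0.8 CRITICAL.
    if pain > 0.8:
        return "CRITICAL"
    if pain > 0.5:
        return "HIGH RISK"
    return "OK"

def record_error(pain, severity=0.2):
    # Hypothetical increment per repeated error in the same file/area.
    return min(1.0, pain + severity)

def validated_recovery(pain, decay=0.5):
    # A clean check lets the system forget part of the pain.
    return pain * decay

pain = 0.0
for _ in range(3):
    pain = record_error(pain)         # repeated failures in one file
assert risk_label(pain) == "HIGH RISK"
pain = validated_recovery(pain)       # clean check passes
assert risk_label(pain) == "OK"
```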

6-Signal Personality

Vibe, corrections, brevity, topic, tool errors, and git diff. Emergency bypass for urgent sessions. Severity-weighted decay keeps personality calibrated over time without manual resets.

Evolution System

Weekly self-improvement cycle. Analyzes patterns, proposes changes, validates via snapshot/rollback. Circuit breakers and budget caps for safety. See the full procedure.

Runtime CLI

nexo chat opens your configured terminal client, nexo update syncs the runtime, and nexo doctor audits or repairs it. One binary, full control from any terminal.

Personal Scripts

First-class managed scripts with lifecycle tracking, schedule associations, and recovery awareness. 9 MCP tools for create, reconcile, sync, classify, and schedule management.

Startup Preflight

Health checks before every interactive session. Safe migrations, dependency verification, and environment validation ensure the cognitive engine starts clean every time.

Recovery Contracts

Boot and wake catch-up for core and personal jobs. Explicit recovery contracts define what runs after sleep, restart, or missed schedules — no silent failures.

NEXO Brain Architecture Infographic


The runtime has moved fast without losing the plot

Instead of burying the home page in long release writeups, here is the short version: NEXO now behaves much more like a real shared-brain runtime across clients, with stronger memory surfaces, enforceable protocol discipline, durable execution, and public product surfaces that match what the runtime can actually do.

v7.1.7

Operator-facing email automation finally follows calibration language

v7.1.7 makes email-monitor carry the calibrated operator language through its prompt contract and localizes direct needs_interactive escalation emails, so Spanish operators stop receiving fallback English monitor mail.

v7.0.0

The runtime tree is physically split, not only conceptually

F0.6 makes the runtime structure honest: core, personal, and runtime live as separate buckets, fresh installs land there directly, and transition-aware path helpers let updates survive the migration without pretending the flat legacy tree is still the product.

v6.5.0

Operators can disable noisy automations without surgery

Personal automations gained first-class enable/disable/status controls. The cron wrapper respects the operator flag at every tick, so disabling a script is a reversible product action instead of a manual LaunchAgent intervention.

v6.4.0

Email became a real managed subsystem

Multiple inboxes now live in a proper email_accounts table with credential pointers, interactive and machine-safe setup flows, and a Desktop bridge that lets non-technical operators manage accounts without editing files.

v6.3.0–6.3.1

Guardian moved closer to a real product safety layer

Extended sentiment/entity schemas, labelled rule fixtures, telemetry loops, and the local classifier skeleton landed in 6.3.0; 6.3.1 immediately stripped operator-specific data back out of the public preset so privacy stayed aligned with the product claim.

v5.1.0

Evolution stopped being a slogan and became a loop

By v5.1.0, the Evolution, adaptive, cognitive, and skills subsystems were closing their loops under evidence. NEXO was no longer only a memory layer: it had become a runtime that could measure, learn, review, and reuse what actually worked.

Up and running in 60 seconds

One command to install. NEXO Brain sets up the runtime, downloads the embedding model, configures shared brain for detected clients, and lets you choose your terminal client and automation backend.

Terminal
# Install NEXO Brain
$ npx nexo-brain

# The installer can configure Claude Code, Codex, and Claude Desktop
# and lets you choose the terminal client + automation backend.

# Start your configured terminal client:
$ nexo chat

# Verify runtime health:
$ nexo doctor

# Or run with Docker:
$ docker build -t nexo-brain .
$ docker run -v ~/.nexo:/data nexo-brain

# Re-sync clients later if you install a new one:
$ nexo clients sync
Installer preview

What this preview shows

A condensed run of `npx nexo-brain`: language, install path, shared-brain clients, default terminal client, automation backend, and recommended model profiles.

Claude Code (recommended): Opus 4.6 with 1M context · Codex (optional): GPT-5.4 xhigh

Why it matters

People can see the setup experience before they install: NEXO does not just wire MCP. It configures the shared brain, lets you choose the client that `nexo chat` opens, and decides which backend runs background automation.

Current recommendation

Claude Code remains the primary recommended path because it has the deepest hook and headless automation surface. The recommended Claude profile is now Opus 4.6 with 1M context.

Five operator patterns NEXO already fits well

These are public use cases, not fabricated testimonials. They are the operating patterns the current product already serves well and the kinds of real case studies worth capturing next.

Solo coding operator

One developer bouncing between feature work, bugs, and local ops who wants the agent to remember prior decisions and stop repeating the same mistakes.

Claude Code + Codex stack

Teams or individuals using more than one terminal agent surface who want shared memory, one runtime truth, and fewer fractured setup paths.

Release and ops owner

People who need workflows, doctor diagnostics, followups, self-audit, and overnight synthesis to stay connected instead of living in separate scripts and notes.

Local-first agent workflows

Users who care about privacy, inspectability, and not sending operational memory to a hosted third-party memory layer.

Public contributor with review gates

Maintainers and opt-in contributors who want an improvement loop that proposes safely, pauses on open PRs, and routes review back through humans.

Install, browse, and learn

Public package and directory surfaces for the product, plus the main places to read and watch NEXO in public.

Evaluate / decide

Short paths for people deciding whether NEXO is the right runtime, not just browsing package registries.

Install / browse

Core package, codebase, and MCP directory surfaces. ClawHub remains supported as the marketplace install path in the integration section above.

Read / watch

Public explainers, walkthroughs, and launch content for people evaluating the product before they install it.

Everything you need to know

Common questions about NEXO Brain, how it works, and how to get started.

What is NEXO Brain?
NEXO Brain is an open-source cognitive runtime for AI agents. It gives your AI persistent memory across sessions using the Atkinson-Shiffrin memory model, adds a shared brain across Claude Code, Codex, Claude Desktop, and other MCP clients, and includes a runtime CLI, configurable automation backend, unified doctor diagnostics, personal scripts registry, self-healing background jobs, and 150+ MCP tools.
How does NEXO Brain work?
NEXO Brain implements the Atkinson-Shiffrin multi-store memory model from cognitive psychology. Information flows through three stores: Sensory Register (immediate context), Short-Term Memory (session-level working memory), and Long-Term Memory (persistent vector-indexed storage with Ebbinghaus decay). Memories are encoded as vectors, retrieved via RAG, and strengthened or forgotten naturally over time.
What is MCP (Model Context Protocol)?
MCP (Model Context Protocol) is an open standard that lets AI agents connect to external tools and data sources. NEXO Brain exposes its cognitive capabilities as MCP tools, so clients such as Claude Code, Codex, Claude Desktop, and other MCP-compatible tools can share the same local memory system.
How do I install NEXO Brain?
Run npx nexo-brain in your terminal. The installer sets up the local runtime, configures shared brain support for Claude Code, Codex, and Claude Desktop when available, lets you choose your terminal client and automation backend, downloads the embedding model, and creates the SQLite databases.
What is NEXO Evolution?
NEXO Evolution is the review-gated self-improvement loop behind NEXO. It can propose bounded changes from real-world usage, work in isolated public-core checkouts, open Draft PRs, and switch into peer-review mode when its own public proposal is already open. See the full procedure.
Is NEXO Brain free and open-source?
Yes. NEXO Brain is free/libre and open-source software under the AGPL-3.0 license. The complete source code is available on GitHub, so you can inspect it, modify it, fork it, and contribute back.
Does my data leave my machine?
No. NEXO Brain runs 100% locally on your machine. All data is stored in local SQLite databases, and the vector embedding model (ONNX Runtime) runs on your CPU. Zero data is sent to any external server, cloud, or API.
What LLMs and clients does it work with?
NEXO Brain works with MCP-compatible clients and currently has first-class shared brain flows for Claude Code, Codex, and Claude Desktop. Claude Code remains the recommended path because it has the deepest integration and hook support, while Codex is also supported as both terminal client and automation backend.
What vector embedding model does it use?
NEXO Brain uses BAAI/bge-base-en-v1.5, a 768-dimensional embedding model running locally via ONNX Runtime on CPU. No GPU required. The model is downloaded automatically during installation.
How is it different from just using context windows?
Context windows are ephemeral — they reset every session. NEXO Brain provides persistent memory that survives across sessions, with natural forgetting (Ebbinghaus decay), rehearsal-based strengthening, a metacognitive guard that prevents repeating known errors, and a knowledge graph for entity relationships.
What is the metacognitive guard?
The metacognitive guard (nexo_guard_check) is a pre-action safety system. Before your agent edits code or makes changes, it checks for known errors, blocking rules, and relevant learnings. It prevents the agent from repeating mistakes it has already encountered.
What is trust scoring?
Trust scoring is a 0-100 alignment index that reflects how well the agent aligns with the user's expectations. Corrections lower it, successful proactive actions raise it. When trust is low, the system becomes more cautious. When high, it operates more fluidly.
What is cognitive dissonance detection?
When the agent receives a new instruction that contradicts an existing strong memory, NEXO Brain detects the conflict automatically. It surfaces the contradiction so the user can decide whether it is a permanent change or a one-time exception, preventing silent overwrites of established knowledge.
Can I use NEXO Brain with Docker?
Yes. A Dockerfile is included in the repository for containerized deployments. You can run NEXO Brain in a Docker container for isolated environments, CI/CD pipelines, or server-side agent deployments.
What is the knowledge graph?
The knowledge graph stores typed relationships between entities (people, services, projects, files). It enables neighbor traversal, path finding, and contextual queries like "what is connected to this project?" — giving the agent structural understanding beyond flat vector search.
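A toy version of neighbor traversal and path finding over typed edges; the entity and relation names here are invented for illustration:

```python
from collections import deque

# Toy typed-edge graph: (source, relation, target).
edges = [
    ("alice", "maintains", "nexo-brain"),
    ("nexo-brain", "depends_on", "fastembed"),
    ("nexo-brain", "stores_in", "sqlite"),
]

def neighbors(entity):
    # Outgoing typed edges from one entity.
    return [(rel, dst) for src, rel, dst in edges if src == entity]

def find_path(start, goal):
    # Breadth-first search over typed edges, recording the relations
    # traversed so the agent can explain *how* things connect.
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"-{rel}->", nxt]))
    return None

print(find_path("alice", "sqlite"))
```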
How does Ebbinghaus decay work?
Based on Hermann Ebbinghaus's forgetting curve research, memories in NEXO Brain naturally decay over time if not accessed. Each retrieval (rehearsal) strengthens the memory and resets its decay timer. Frequently accessed memories become long-term; unused ones gradually fade — mimicking how human memory works.
What are somatic markers?
Inspired by Antonio Damasio's somatic marker hypothesis, these are "pain memories" associated with specific files or areas. When the agent encounters repeated errors in a file, the somatic marker increases, making the guard more cautious in that area. It is emotional memory for code.
How does the plugin system work?
NEXO Brain supports hot-reload plugins — Python files dropped into the plugins/ directory are automatically loaded at startup. You can add, remove, or update plugins at runtime without restarting the server. Each plugin can register new MCP tools.
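The shape of a plugin file might look like the sketch below. Note that the register()/add_tool() hook names are hypothetical stand-ins, not the actual NEXO plugin contract — check the repository for the real API:

```python
# plugins/word_count.py: the rough shape of a hot-reload plugin file.
# register() and add_tool() are HYPOTHETICAL names for illustration.

def count_words(text: str) -> int:
    """Count whitespace-separated words in a memory or note."""
    return len(text.split())

def register(server):
    # Called when the plugin is (re)loaded; exposes a new MCP tool.
    server.add_tool(name="word_count", handler=count_words)

# A tiny stand-in server shows the wiring end to end.
class FakeServer:
    def __init__(self):
        self.tools = {}

    def add_tool(self, name, handler):
        self.tools[name] = handler

srv = FakeServer()
register(srv)
```

Because loading is dynamic, editing the file and letting the runtime re-import it is enough to update the tool — no server restart.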
What is the LoCoMo benchmark score?
On the LoCoMo long-conversation memory benchmark, NEXO Brain achieves an F1 score of 0.588, which is 55% higher than GPT-4 Turbo's 128K context window (0.379). The benchmark page is now paired with a broader operator runtime pack: 13 checked-in scenarios where the latest matrix scores the NEXO full stack at 96.2% versus 42.3% for a static CLAUDE.md baseline.
Can I use NEXO Brain in production?
Yes. NEXO Brain is designed for continuous, production-grade operation. It runs 24/7 with 13 core recovery-aware jobs plus optional helpers, handles concurrent sessions, and includes backup and recovery tooling for runtime continuity.
What are LaunchAgent templates?
NEXO Brain installs 13 core recovery-aware jobs and can also manage optional helper processes such as prevent-sleep and permission helpers depending on platform and configuration. Everything is reconciled through the runtime and customizable via schedule.json.
What is Dashboard v2?
The NEXO Dashboard is an always-on FastAPI-powered web interface at localhost:6174 with 23 modules across multiple pages: overview, operations, calendar, inbox, and CRUD interfaces for managing memories. It includes sidebar navigation, trust score widget, and auto-starts on boot for continuous monitoring.
How do I migrate to v2.0?
Run npx nexo-brain and the installer handles everything automatically. Code and data are cleanly separated, the shared brain is configured once for supported clients, and your runtime keeps self-healing its managed schedules on updates.
What is the nervous system?
The nervous system is the set of 13 core recovery-aware jobs that keep NEXO healthy and learning in the background, plus optional helpers when needed. It covers watchdog, immune, synthesis, catch-up, decay, sleep, deep sleep, self-audit, postmortem, evolution, followup hygiene, auto-close-sessions, and related runtime recovery flows.
Is there a community?
Yes! Find us on GitHub for issues, discussions, and contributions. Follow @NEXOBRAIN on X/Twitter for updates, releases, and development insights. The project welcomes contributions and sponsorships.
Can NEXO contribute improvements back to its own codebase?
Yes. With opt-in contributor mode, NEXO can propose improvements to the public repo via Draft PRs on GitHub. Contributions run in an isolated checkout, never touch your personal data, and pause after one open PR. Enable with nexo contributor on. Read how Evolution works.
Can I create my own MCP tools that survive updates?
Yes. Use nexo_personal_plugin_create to scaffold a personal MCP plugin in NEXO_HOME/plugins/. Personal plugins persist across updates and never become part of the core product.

Why NEXO exists

Francisco Cerdà Puigserver, creator of NEXO Brain

NEXO Brain was born out of necessity. I was building WAzion — a WhatsApp marketing SaaS — as a solo founder, working with AI agents every day. But every time I closed a session, the agent forgot everything. The same corrections, the same mistakes, starting from zero every morning.

I needed an agent that remembered. That learned from its errors. That detected when it was about to repeat a mistake it had already made. That maintained continuity between sessions as if it were the same person.

What started as a personal memory layer for Claude Code evolved into a full cognitive runtime shared across Claude Code, Codex, Claude Desktop, and other MCP workflows — shaped by six months of daily production use, co-created between me and NEXO itself. Every feature was born from a real problem, not a roadmap.

Today NEXO Brain is open source and free for everyone. Because if one solo founder needed this, thousands of developers do too.

Francisco Cerdà Puigserver

Founder of WAzion · Creator of NEXO Brain

Give your agent a mind

Open source, AGPL-3.0 licensed, and built in the open. Install it, stress it, open issues, ship PRs, and help shape how agent memory should work.
