Based on the Atkinson-Shiffrin model from cognitive psychology, adapted for AI agents.
Every piece of information flows through three stores, each with its own retention rules and access patterns.
The full architecture spans a sensory register, short-term memory (STM), long-term memory (LTM), an input pipeline, retrieval, and proactive systems.
This is not just engineering; it is applied cognitive psychology. Every feature maps to a published research concept.
| Psychological Concept | How NEXO Brain Implements It |
|---|---|
| Atkinson-Shiffrin (1968) | Three memory stores: sensory register, STM, and LTM with distinct retention rules |
| Ebbinghaus Forgetting Curve (1885) | Exponential decay: strength = strength * e^(-lambda * time) |
| Rehearsal Effect | Accessing a memory resets its strength to 1.0 |
| Memory Consolidation | Nightly process promotes frequently-used STM to LTM |
| Prediction Error | Only surprising (novel) information gets stored — redundant input is gated |
| Spreading Activation (Collins & Loftus, 1975) | Retrieving a memory co-activates related memories through an associative graph |
| HyDE (Gao et al., 2022) | Hypothetical document embeddings improve semantic recall |
| Prospective Memory (Einstein & McDaniel, 1990) | Context-triggered intentions fire when cue conditions match |
| Metacognition | Guard system checks past errors before acting |
| Cognitive Dissonance (Festinger, 1957) | Detects and verbalizes conflicts between old and new knowledge |
| Theory of Mind | Models user behavior, preferences, and mood |
| Synaptic Pruning | Automated cleanup of weak, unused memories |
| Associative Memory | Semantic search finds related concepts, not just matching words |
| Memory Reconsolidation | Dreaming process discovers hidden connections during sleep |
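The forgetting-curve and rehearsal rows above compose naturally into one mechanism. A minimal sketch, assuming an illustrative decay rate and function names that do not mirror the real NEXO Brain API:

```python
import math

DECAY_LAMBDA = 0.05  # assumed decay rate per hour, purely illustrative

def decayed_strength(strength: float, hours_elapsed: float) -> float:
    """Ebbinghaus-style exponential decay: strength * e^(-lambda * t)."""
    return strength * math.exp(-DECAY_LAMBDA * hours_elapsed)

def rehearse(memory: dict) -> dict:
    """Rehearsal effect: accessing a memory resets its strength to 1.0."""
    memory["strength"] = 1.0
    return memory

# A memory untouched for 24 hours weakens...
m = {"text": "user prefers dark mode", "strength": 1.0}
m["strength"] = decayed_strength(m["strength"], hours_elapsed=24)
# ...and is restored to full strength the moment it is accessed.
m = rehearse(m)
```

Together these give the system a recency bias that self-corrects: anything the agent keeps using stays strong, and everything else fades on the Ebbinghaus curve until pruning removes it.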
The two largest files have been decomposed into focused, testable modules while maintaining full backwards compatibility.
One file now comprises `core`, `fts`, `schema`, `sessions`, `reminders`, `learnings`, `credentials`, `tasks`, `entities`, `episodic`, and `evolution`. All are re-exported via `__init__.py`, so existing imports remain unchanged.
The other comprises `core`, `search`, `ingest`, `decay`, `trust`, and `memory`, each independently testable. The search pipeline chains: embed → BM25 → temporal boost → KG boost → rerank.
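The staged pipeline can be sketched as a chain of scoring passes over a candidate list. The stage functions and their fixed boost factors below are hypothetical stand-ins, not the real modules:

```python
from typing import Callable

# Each stage takes and returns a list of (memory_id, score) candidates.
Stage = Callable[[list[tuple[str, float]]], list[tuple[str, float]]]

def chain(*stages: Stage) -> Stage:
    """Compose stages left to right into a single search function."""
    def run(candidates: list[tuple[str, float]]) -> list[tuple[str, float]]:
        for stage in stages:
            candidates = stage(candidates)
        return candidates
    return run

# Illustrative stages: constant multipliers standing in for real scoring.
def temporal_boost(cands):
    return [(mid, score * 1.1) for mid, score in cands]

def kg_boost(cands):
    return [(mid, score * 1.2) for mid, score in cands]

def rerank(cands):
    return sorted(cands, key=lambda c: c[1], reverse=True)

search = chain(temporal_boost, kg_boost, rerank)
results = search([("m1", 0.4), ("m2", 0.7)])
```

Because every stage shares one signature, stages can be reordered, removed, or A/B-tested without touching the rest of the pipeline, which is the point of decomposing search into separate modules.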
Memories with more Knowledge Graph connections rank higher. Logarithmic boost bridges semantic (vector) and structural (graph) retrieval.
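One plausible form of that logarithmic boost, with an assumed weight constant that is not NEXO's actual tuning value:

```python
import math

def kg_boosted_score(semantic_score: float, kg_degree: int,
                     weight: float = 0.1) -> float:
    """Boost a vector-similarity score by the memory's Knowledge Graph
    connectivity (number of edges). log1p gives diminishing returns,
    so heavily-connected hub nodes cannot drown out semantic relevance.
    The 0.1 weight is an assumption for illustration."""
    return semantic_score * (1.0 + weight * math.log1p(kg_degree))
```

A memory with zero graph connections keeps its raw semantic score; each additional connection helps less than the last, which is what lets the graph signal complement rather than dominate the vector signal.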
Optional hnswlib integration. Auto-activates at 10,000+ memories for sub-millisecond approximate nearest neighbor search. Graceful fallback to brute-force.
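The activation-with-fallback pattern can be sketched as follows. The threshold constant and function names are assumptions; only the `hnswlib` calls (`Index`, `init_index`, `add_items`, `knn_query`) are the library's real API:

```python
try:
    import hnswlib  # optional ANN backend
except ImportError:
    hnswlib = None

ANN_THRESHOLD = 10_000  # assumed activation point, per the text above

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def nearest(query: list[float], vectors: list[list[float]], k: int = 5):
    """Return indices of the k most similar vectors. Uses HNSW when the
    library is installed and the corpus is large enough; otherwise falls
    back to exact brute-force cosine search."""
    if hnswlib is not None and len(vectors) >= ANN_THRESHOLD:
        index = hnswlib.Index(space="cosine", dim=len(query))
        index.init_index(max_elements=len(vectors))
        index.add_items(vectors)
        labels, _ = index.knn_query([query], k=k)
        return list(labels[0])
    order = sorted(range(len(vectors)),
                   key=lambda i: cosine(query, vectors[i]),
                   reverse=True)
    return order[:k]

vectors = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
top = nearest([1.0, 0.0], vectors, k=2)
```

Gating on corpus size matters because HNSW index construction has real overhead: below a few thousand vectors, exact brute-force search is both faster and perfectly accurate.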
Blob memories are decomposed into atomic, verifiable claims. Each claim carries provenance, a confidence score, and a verification status, with contradiction detection across sources.
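A minimal sketch of the claim structure and cross-source contradiction check; the field names are illustrative, not the real NEXO schema:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """An atomic, verifiable claim extracted from a blob memory."""
    subject: str
    predicate: str
    value: str
    source: str            # provenance: where the claim came from
    confidence: float      # 0.0 to 1.0
    verified: bool = False

def find_contradictions(claims: list[Claim]) -> list[tuple[Claim, Claim]]:
    """Flag pairs that assert different values for the same
    subject/predicate pair."""
    conflicts = []
    for i, a in enumerate(claims):
        for b in claims[i + 1:]:
            if (a.subject, a.predicate) == (b.subject, b.predicate) \
                    and a.value != b.value:
                conflicts.append((a, b))
    return conflicts

claims = [
    Claim("user", "editor", "vim", source="chat", confidence=0.9),
    Claim("user", "editor", "emacs", source="config scan", confidence=0.6),
    Claim("user", "os", "linux", source="chat", confidence=0.8),
]
conflicts = find_contradictions(claims)  # the vim/emacs pair conflicts
```

Atomic claims are what make the cognitive-dissonance row in the table above tractable: two blob memories are hard to compare, but two claims with the same subject and predicate either agree or they don't.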
The test suite covers migrations, CRUD, cosine similarity, KG boost, graph traversal, and temporal boost. Each test uses an isolated temporary database, so there is zero interference with production data.
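The isolation pattern can be sketched with the standard library alone; the table schema here is a minimal stand-in, not NEXO's actual schema:

```python
import os
import sqlite3
import tempfile

def test_crud_isolated():
    """Each test gets its own throwaway SQLite file, so nothing ever
    touches a production database."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    try:
        conn = sqlite3.connect(path)
        conn.execute(
            "CREATE TABLE memories "
            "(id INTEGER PRIMARY KEY, text TEXT, strength REAL)"
        )
        conn.execute(
            "INSERT INTO memories (text, strength) VALUES (?, ?)",
            ("hello", 1.0),
        )
        conn.commit()
        rows = conn.execute("SELECT text, strength FROM memories").fetchall()
        assert rows == [("hello", 1.0)]
        conn.close()
    finally:
        os.remove(path)  # the database never outlives the test

test_crud_isolated()
```

Because every test creates and deletes its own file, tests can run in parallel and in any order without sharing state.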
Open source, AGPL-3.0 licensed, and built for builders who want their AI to actually remember.