SuperLocalMemory
OPEN RESEARCH • AGPL V3

Memory for AI Reliability Engineering

The memory substrate of the Qualixar AI Reliability Engineering platform, with mathematical foundations backed by three peer-reviewed papers. 74.8% on LoCoMo with data staying local. EU AI Act compliant. Open source under AGPL v3.

74.8%
Local-first retrieval
87.7%
Full power (Mode C)
60.4%
Pure zero-LLM
17+
IDE integrations
NEW RELEASE

V3.3: The Living Brain Evolves

Adaptive lifecycle. Smart compression. Cognitive consolidation. Memory that grows with you.

Memory That Adapts, Compresses, and Learns

Adaptive Memory Lifecycle

Memories strengthen when used and fade when neglected. The system self-organizes around what matters most to your workflow.

Smart Compression

Up to 32x storage savings. Precision adapts to importance — critical memories stay full-fidelity while cold ones compress automatically.

Cognitive Consolidation

Automatically extracts patterns from related memories and synthesizes higher-level insights. Your knowledge base refines itself over time.

6th Retrieval Channel

Partial queries complete themselves. Start typing a fragment and the system infers your full intent across six parallel retrieval channels.

Pattern Learning

Soft prompts injected into agent context automatically. The system learns your patterns and proactively surfaces relevant knowledge.

100x RAM Reduction

Dramatically lower memory footprint in Mode A and Mode B. Run on resource-constrained machines without sacrificing capability.

Process Health

Automatic orphan cleanup and self-healing. The system detects and resolves inconsistencies without manual intervention.

Open source, AGPL v3. A Qualixar Research Initiative.

The Problem

The Context Persistence Problem

01

Session Reset

No Persistence.

Current AI assistants lack persistent memory across sessions. Context accumulated during a session is discarded at termination.

02

Context Loss

Re-initialization Required.

Domain-specific patterns and decisions require re-initialization each session. Learned preferences do not transfer.

03

Architecture Trade-offs

External Dependencies.

Centralized memory introduces external data dependencies and privacy considerations for sensitive development contexts.

There's a better way ↓
The Architecture

9 Layers Deep

Each layer handles one responsibility. Together, they give your AI persistent, intelligent memory.

Capabilities

Neural Capabilities

Every feature designed to make your AI smarter, faster, and completely private.

Live Demo

See It Think

Three commands. That's all it takes to give your AI persistent memory.

superlocalmemory — demo
LoCoMo Benchmark Results

Measured Performance

Evaluated on the LoCoMo benchmark (Long Conversation Memory). Mode A Retrieval achieves 74.8% — the highest score reported without cloud dependency.

74.8%
Mode A — Local Retrieval
Data stays on your machine
87.7%
Mode C — Full Power
Cloud LLM at every layer
60.4%
Pure Zero-LLM
No LLM at any stage
0.0pp
Math Layer Gain
Avg improvement

LoCoMo Benchmark: Competitive Landscape

Full comparison →
EverMemOS (SOTA) 92.3%
MemMachine 91.7%
SLM V3 — Mode C (cloud LLM at every layer) 87.7%
SLM V3 — Mode A Retrieval (data stays local) 74.8%
SLM V3 — Mode A Raw (pure zero-LLM) 60.4%
Mem0 ($24M funded) ~58-66%

Mode A Retrieval (74.8%) is the highest score achieved without cloud dependency during retrieval.
Mode A Raw (60.4%) uses no LLM at any stage — a first in the field.
All other systems require cloud LLM for core operations.

Ecosystem

Everywhere You Code

One memory layer. Every IDE and AI tool you use.

Claude Code
Cursor
VS Code
Windsurf
Neovim
Vim
JetBrains
Zed
Continue
Cline
Roo Code
ChatGPT
Perplexity
Gemini CLI
OpenAI Codex
Copilot
Any MCP Client
COMMON QUESTIONS

Frequently Asked Questions

What is SuperLocalMemory?

SuperLocalMemory V3.3 is the first agent memory system with mathematical guarantees and adaptive lifecycle management. Memories strengthen when used and fade when neglected. Smart compression delivers up to 32x storage savings, and cognitive consolidation auto-extracts patterns from related memories. It scores 74.8% on LoCoMo with data staying local — the highest local-first score reported — and 87.7% in full power mode. Open source under AGPL v3.

Which AI tools does SuperLocalMemory work with?

SuperLocalMemory integrates with 17+ tools including Claude Code, Cursor, VS Code Copilot, Windsurf, ChatGPT Desktop, Perplexity, Continue.dev, Zed, and more via the Model Context Protocol (MCP).

Is it open source?

Yes. SuperLocalMemory is published under the GNU Affero General Public License v3.0 as part of our open research initiative. The source code, documentation, and research papers are publicly available.

How does the local-first approach differ?

Local-first architecture keeps all data on user infrastructure. This contrasts with cloud-hosted approaches that require external data transit. See our Research Landscape page for a detailed comparison of approaches.

How do I install it?

Install via npm (npm install -g superlocalmemory), then run slm setup. Choose your operating mode: Mode A (zero cloud), Mode B (local Ollama), or Mode C (cloud LLM). All Python dependencies install automatically.

Does SuperLocalMemory send data externally?

No. The architecture is fully local. All storage uses on-device SQLite databases with no external network calls or telemetry. Optional cloud backup (V3.4.10+) lets you sync to Google Drive or GitHub — you control when and where.

Does SuperLocalMemory work with CI/CD pipelines and agent frameworks?

Yes. SuperLocalMemory is the only AI memory system with both MCP (for IDE integration) and an agent-native CLI with structured JSON output. Every data-returning command supports --json for a consistent envelope with success status, data payload, and next_actions guidance. Works with GitHub Actions, n8n, Docker, shell scripts, OpenClaw, Codex, Goose, nanobot, and any tool that can call a CLI command.
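As a sketch, here is what wiring SuperLocalMemory into a GitHub Actions workflow might look like. The workflow name, trigger, step layout, and the recall query are illustrative assumptions — only the npm install command and the --json flag come from the documentation above:

```yaml
# Hypothetical CI sketch — workflow/step names and the query are illustrative.
name: recall-project-context
on: [workflow_dispatch]
jobs:
  recall:
    runs-on: ubuntu-latest
    steps:
      # Install the CLI globally on the runner
      - run: npm install -g superlocalmemory
      # Query memory and extract results from the JSON envelope with jq
      - run: slm recall "deployment checklist" --json | jq '.data.results'
```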

What is the --json flag?

The --json flag enables structured JSON output on all data-returning CLI commands (26 total including recall, remember, list, status, health, trace, forget, delete, update, mode, profile, connect, and more). The response follows a consistent envelope: {success, command, version, data, next_actions}. This makes SuperLocalMemory agent-native — AI agents can parse the output reliably without text scraping. Use with jq for powerful shell pipelines: slm recall 'auth' --json | jq '.data.results[0].content'
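Because the envelope is plain JSON, any language can consume it. Here is a minimal Python sketch parsing a response with the envelope fields named in the FAQ ({success, command, version, data, next_actions}); the contents of the data payload are a hypothetical sample, not actual slm output:

```python
import json

# Sample envelope following the documented shape; the "data" payload
# below is hypothetical example content, not real slm output.
raw = '''{
  "success": true,
  "command": "recall",
  "version": "3.3.0",
  "data": {"results": [{"content": "Auth uses JWT with 15-min expiry", "score": 0.91}]},
  "next_actions": ["slm trace"]
}'''

envelope = json.loads(raw)
if envelope["success"]:
    # Equivalent to the jq pipeline: .data.results[0].content
    top = envelope["data"]["results"][0]["content"]
    print(top)
```

An agent checks the success field before touching data, which is what makes the envelope reliable to parse compared with scraping free-form terminal text.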
Quick Start

Installation

getting started
# Install (one command — everything included)
$ npm install -g superlocalmemory
# Setup — choose your mode (A/B/C)
$ slm setup
# Store your first memory
$ slm remember "Alice works at Google as Staff Engineer"
# Recall it later
$ slm recall "What does Alice do?"
✓ V3.3 engine active. Mode A. 528 facts indexed. 6-channel retrieval. Math layers: Active.

AGPL v3 • Local-first architecture

Community

Community & Contributions

GitHub Repository
Open Source