SuperLocalMemory
Adaptive Skills

Skills That Learn From Experience

Your AI skills are static. SLM changes that. Track which skills work, which fail, and watch them improve — session by session, automatically.

The Pipeline

How Skill Evolution Works

Zero-LLM, zero-cloud. Pure local statistical analysis that gets smarter every session.

1. Observe

Enriched hook captures every tool call with input, output, session context, and project path. Secret scrubbing built-in. Zero cost.
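The built-in secret scrubbing can be pictured as a small redaction pass over the captured payload before anything is stored. The patterns and field names below are illustrative assumptions, not SLM's actual implementation:

```python
import re

# Illustrative patterns only -- SLM's real scrubber may use a different set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                   # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                   # GitHub personal access tokens
    re.compile(r"(?i)(password|secret|token)\s*[:=]\s*\S+"),
]

def scrub(text: str) -> str:
    """Replace anything that looks like a credential before storage."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

# Hypothetical captured event -- field names are assumptions.
event = {"tool": "Bash", "input": "export API_TOKEN=abc123 && deploy"}
event["input"] = scrub(event["input"])
```

Running the scrubber before persisting the event keeps credentials out of the local store entirely.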

2. Analyze

SkillPerformanceMiner builds execution traces, computes outcome heuristics, tracks per-skill metrics. Runs during consolidation — no latency impact.
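The per-skill aggregation step can be sketched as a pure fold over execution traces. The trace fields and metric names here are hypothetical, chosen only to show the shape of the computation:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TraceEvent:
    skill: str          # hypothetical field names -- SLM's trace schema may differ
    success: bool
    duration_ms: float

def skill_metrics(events: list[TraceEvent]) -> dict[str, dict[str, float]]:
    """Aggregate call count, success rate, and mean latency per skill."""
    buckets: dict[str, list[TraceEvent]] = defaultdict(list)
    for e in events:
        buckets[e.skill].append(e)
    return {
        skill: {
            "calls": len(es),
            "success_rate": sum(e.success for e in es) / len(es),
            "mean_ms": sum(e.duration_ms for e in es) / len(es),
        }
        for skill, es in buckets.items()
    }

traces = [
    TraceEvent("refactor", True, 120.0),
    TraceEvent("refactor", False, 300.0),
    TraceEvent("test-gen", True, 80.0),
]
metrics = skill_metrics(traces)
```

Because the aggregation is a single pass over already-captured events, it fits naturally into a consolidation phase rather than the hot path of a session.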

3. Evolve

Soft prompts route to high-performing skills. Behavioral assertions inform future sessions. Skill entities in Entity Explorer show the full picture.

Integrations

IDE Compatibility

The backend is IDE-agnostic. Any client can POST tool events. The shipped hook currently supports Claude Code.

IDE                   Status          Integration
Claude Code           Supported       Auto-registered via slm init
Any IDE               API Available   POST to /api/v3/tool-event
Cursor                Planned         Adapter in development
Windsurf              Planned         Adapter in development
VS Code / JetBrains   Planned         Extension adapter
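For any other client, integration amounts to assembling a tool event and POSTing it to the endpoint above. The payload field names and the local host/port are assumptions for illustration; check SLM's API reference for the documented schema:

```python
import json
import urllib.request

def build_tool_event(tool: str, input_text: str, output_text: str,
                     session_id: str, project_path: str) -> dict:
    """Assemble a tool-event payload (field names are assumptions,
    not SLM's documented schema)."""
    return {
        "tool": tool,
        "input": input_text,
        "output": output_text,
        "session_id": session_id,
        "project_path": project_path,
    }

event = build_tool_event("Bash", "ls -la", "total 48 ...", "sess-123", "/home/me/proj")

# Post to a locally running SLM backend (assumed default host/port):
# req = urllib.request.Request(
#     "http://localhost:8080/api/v3/tool-event",
#     data=json.dumps(event).encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```

Any editor plugin or shell hook that can make an HTTP request can feed the same pipeline the shipped Claude Code hook uses.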
Enhanced Observations

Works With Everything Claude Code

Everything Claude Code (ECC) is a popular Claude Code plugin that adds continuous learning, instinct-based pattern detection, and deep observation capabilities.

SLM integrates directly with ECC observations for richer skill tracking. One command imports all your ECC data into SLM's skill performance pipeline.

ECC is not required — SLM is fully self-sufficient. This integration is an optional enhancement for users who want both systems working together.

# Import ECC observations into SLM
$ slm ingest --source ecc
Ingested: 11,327 events from ecc
Files scanned: 15 projects

# Preview without writing
$ slm ingest --source ecc --dry-run
Would ingest: 11,327 events (dry run)

Foundations

Research-Driven Design

SLM's skill-evolution architecture draws on recent academic research into self-evolving agent systems.

EvoSkills

Co-evolutionary verification with information isolation. +30pp improvement from blind verification.

arXiv:2604.01687

OpenSpace

3-trigger self-evolving skill engine. Anti-loop guards. Version DAG model. MIT license.

github.com/HKUDS/OpenSpace

SkillsBench

86-task benchmark: self-generated skills provide zero benefit without verification. Focused skills outperform.

arXiv:2602.12670

SoK: Agent Skills

Four-axis taxonomy of agentic skills. Skills and MCP are orthogonal layers. 26.1% vulnerability rate.

arXiv:2602.12430

Start Tracking Skill Performance Today

Install SLM. Your skills start learning from the very first session.

Open source under AGPL v3 — A Qualixar Research Initiative