Installation Guide
Three ways to install. Same powerful memory system. Works on macOS, Windows, and Linux.
Choose Your Install Method
npm is recommended — one command installs everything including Python dependencies
npm Install
macOS, Windows, Linux — requires Node.js 14+ and Python 3.11+
Install globally via npm
```bash
npm install -g superlocalmemory
```

This auto-installs the Python dependencies (numpy, scipy, networkx, sentence-transformers, torch).
Run setup wizard
```bash
slm setup
```

Choose Mode A (zero-cloud), B (local Ollama), or C (cloud LLM). Mode A is the default.
Pre-download embedding model (optional)
```bash
slm warmup
```

Downloads nomic-embed-text-v1.5 (~500MB). If you skip this step, the model downloads on first use.
Verify installation
```bash
slm status
```

pip Install
Requires Python 3.11+
```bash
pip install superlocalmemory
```

Then run `slm setup` and `slm warmup` as above.
Git Clone
For development or air-gapped environments
```bash
git clone https://github.com/qualixar/superlocalmemory.git
cd superlocalmemory
pip install -e .
```

Then run `slm setup` and `slm warmup`.
Connect Your IDE
SuperLocalMemory works with 17+ AI tools via MCP
Auto-Configure (Recommended)
```bash
slm connect          # Configure all detected IDEs
slm connect --list   # See which IDEs are configured
```

Manual MCP Config
Add this to your IDE's MCP configuration file:
```json
{
  "mcpServers": {
    "superlocalmemory": {
      "command": "slm",
      "args": ["mcp"]
    }
  }
}
```

Supported IDEs: Claude Code, Cursor, VS Code Copilot, Windsurf, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and more. 35 MCP tools are available.
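If your IDE is not auto-detected, the configuration above can also be written programmatically. A minimal Python sketch, with a hypothetical config path (the real location depends on your IDE; check its MCP documentation):

```python
import json
from pathlib import Path

# Hypothetical path -- each IDE keeps its MCP config in its own location.
config_path = Path.home() / ".my-ide" / "mcp.json"

config = {
    "mcpServers": {
        "superlocalmemory": {
            "command": "slm",
            "args": ["mcp"],
        }
    }
}

# Create the parent directory if needed and write pretty-printed JSON.
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
```

If the file already contains other MCP servers, merge the `superlocalmemory` entry into the existing `mcpServers` object rather than overwriting the file.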
Upgrading from V2?
V3 is a complete architectural reinvention. Your data is preserved.
```bash
npm install -g superlocalmemory   # Installs V3
slm migrate                       # Migrate V2 data
slm setup                         # Configure V3 mode
slm warmup                        # Download embedding model
```

Before upgrading: V3 uses a new mathematical engine, retrieval pipeline, and storage schema. A backup of your V2 database is created automatically. You can roll back with `slm migrate --rollback`.
Full migration guide: Migration from V2
What Gets Installed
| Component | Size | When |
|---|---|---|
| Core libraries (numpy, scipy, networkx) | ~50MB | During install |
| Search engine (sentence-transformers, torch) | ~200MB | During install |
| Embedding model (nomic-embed-text-v1.5, 768d) | ~500MB | First use or slm warmup |
Resource usage: ~500-800MB RAM peak during model load, ~20-50MB steady state. CPU-only — no GPU required. Runs on 2 vCPUs + 4GB RAM.
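For context on what the embedding model produces: each memory is encoded as a 768-dimensional vector, and semantic search ranks candidates by vector similarity (typically cosine similarity). A toy sketch with made-up 4-dimensional vectors, not the library's actual code:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-d vectors standing in for real 768-d embeddings.
query = np.array([0.9, 0.1, 0.0, 0.2])
memory_a = np.array([0.8, 0.2, 0.1, 0.1])  # points roughly the same way as the query
memory_b = np.array([0.0, 0.1, 0.9, 0.0])  # points elsewhere

# memory_a scores higher than memory_b, so it would rank first.
print(cosine_similarity(query, memory_a) > cosine_similarity(query, memory_b))
```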
Troubleshooting
slm: command not found
Make sure the npm global bin directory (or, for pip installs, the Python scripts directory) is on your PATH. Run `npm bin -g` to find it; on npm 9+, where `npm bin` was removed, use `npm prefix -g` (the bin directory is `<prefix>/bin` on macOS/Linux and the prefix itself on Windows).
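A quick generic diagnostic (not part of SuperLocalMemory) is to ask the standard library whether the executable is visible on PATH:

```python
import shutil

# shutil.which returns the full path of an executable found on PATH, or None.
print(shutil.which("slm"))  # e.g. /usr/local/bin/slm, or None if PATH is missing the bin dir
print(shutil.which("npm"))  # confirms npm itself is reachable
```

If this prints None for `slm` but the install succeeded, the global bin directory simply is not on PATH yet.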
Embedding model fails to download
Check your internet connection, then run `slm warmup` manually. If you are behind a proxy, set the HTTP_PROXY and HTTPS_PROXY environment variables before retrying.
Python dependency errors
The installer prints exact fix commands. BM25 keyword search works even without embeddings — you're never fully blocked.
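The reason you are never fully blocked is that BM25 ranks documents purely from term statistics, so it needs no downloaded model. An illustrative, simplified implementation of the BM25 ranking function (not SuperLocalMemory's actual code):

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document against a query with the BM25 ranking function."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avg_len = sum(len(d) for d in tokenized) / n
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for d in tokenized if term in d)  # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)  # rare terms weigh more
            denom = tf[term] + k1 * (1 - b + b * len(doc) / avg_len)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = [
    "install the embedding model with slm warmup",
    "bm25 keyword search needs no embedding model",
    "configure your ide via mcp",
]
# Only the second document contains the query terms, so only it scores above zero.
print(bm25_scores("keyword search", docs))
```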