Setup
Install, point at your content, compile. Under 20 seconds for 1,000 documents.
1. Install
The install script downloads the binary, embedding model, and auto-installs the Claude Code plugin. Works on any folder of markdown files — ADRs, docs, Obsidian vaults, Readwise exports. For PDFs, use docling to convert first.
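For the PDF case, a hypothetical pre-processing sketch (assuming docling's CLI and its --to/--output options; papers/ and notes-md/ are placeholder paths, not anything Enzyme requires):

```shell
# Hypothetical pre-processing for PDFs (Enzyme itself reads markdown):
# convert each PDF to markdown with docling, then compile the output
# folder. Assumes docling's CLI with --to/--output; papers/ and
# notes-md/ are placeholder paths.
mkdir -p notes-md
if command -v docling >/dev/null 2>&1; then
  for f in papers/*.pdf; do
    docling "$f" --to md --output notes-md
  done
else
  echo "docling not installed; skipping conversion"
fi
```

Once converted, point enzyme init at the notes-md folder like any other markdown directory.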
2. Compile your concept graph
Point Enzyme at your content. It scans structure — tags, links, folders — and compiles a concept graph your agent can query in ~8ms.
$ enzyme init
Steer with a guide
A guide is a short description of how your content is organized — folder purposes, key tags, domain context. It shapes which catalysts Enzyme surfaces from each entity. Not required, but it makes the results noticeably better.
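There is no fixed schema for a guide; as a purely hypothetical sketch (every folder name, tag, and domain detail below is a placeholder for your own), one might read:

```markdown
# Guide
- adr/: architecture decision records; one decision per file
- journal/: daily notes; headings mark topics, not dates
- Key tags: #pattern, #open-question, #tension
- Domain: backend infrastructure. Favor cross-cutting themes over
  single-file summaries.
```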
How the compile step works
1. Structure reading — Scans tags, links, and folders. Incomplete structure is treated as signal, not noise.
2. Graph extraction — Entities and connections are mapped. Long notes become clusters; scattered captures reveal threads.
3. Catalysts surface — Cross-cutting themes emerge as pre-computed questions your agent can search through.
Compiling sends excerpts to an AI model for analysis. Your raw files stay local. See privacy. For a deeper look, see what catalysts are.
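As a rough illustration of what the structure-reading pass looks at (not Enzyme's actual implementation), you can grep a vault for the same signals yourself; vault/ and the sample note below are placeholders:

```shell
# Illustrative only (not Enzyme's implementation): grep a vault for the
# structural signals the compile step reads, i.e. #tags and [[wikilinks]].
mkdir -p vault
printf '# note\n#systems #design\nSee [[adr-001]].\n' > vault/sample.md
echo "tags:"
grep -rhoE --include='*.md' '#[A-Za-z][A-Za-z0-9/_-]*' vault | sort | uniq -c | sort -rn
echo "links:"
grep -rhoE --include='*.md' '\[\[[^]]+\]\]' vault | sort | uniq -c | sort -rn
```

Even this crude count hints at why incomplete structure is still signal: a tag used three times is a thread, not noise.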
3. Connect your agent
Two ways to connect Enzyme to your AI client. Pick one.
The plugin gives Claude a skill for querying your concept graph.
Invoke with /enzyme inside Claude Code.
If you used the install script, the plugin is already installed — skip this.
Try asking:
- > what's the recurring tension in how we approach system design across our ADRs?
- > I keep returning to the same three ideas across my highlights and journals — what are they?
- > apply my vault's lens to this repo — where does my thinking already overlap?
Questions
Do I need to organize first?
No. Enzyme works with whatever structure exists — half-used tags, links that go nowhere, inconsistent folders. Incomplete organization is still signal. Even 20-30 documents is enough to compile a useful graph.
What gets sent to the cloud?
During enzyme init, excerpts are sent to an LLM for catalyst generation. Your raw files stay on your device. Nothing is stored on Enzyme's servers.
If you bring your own API key, excerpts go directly to the provider you configure. If you use the free tier, excerpts are sent to OpenRouter (a US-based model provider) using a shared API key that Enzyme provides. In both cases, Enzyme does not proxy, store, or log any of your content. See privacy for details.
Using local or self-hosted models
Set all three environment variables to use a custom provider. If only some are set, Enzyme falls back to the free tier.
export OPENAI_API_KEY="sk-..."
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_MODEL="gpt-4o"
enzyme init
For local servers (LM Studio, Ollama, vLLM) that don't require authentication, use a placeholder key:
export OPENAI_API_KEY="not-needed"
export OPENAI_BASE_URL="http://localhost:1234/v1"
export OPENAI_MODEL="qwen/qwen3-8b"
enzyme init
# Ollama
export OPENAI_API_KEY="not-needed"
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_MODEL="qwen3:8b"
enzyme init
When your config is active, enzyme init prints the resolved model, base URL, and a masked key so you can confirm it's using your provider.
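You can also check the three variables yourself before running enzyme init; a plain-shell sketch (not an Enzyme command):

```shell
# Plain-shell sanity check (not an Enzyme command): a partial config
# silently falls back to the free tier, so confirm all three variables
# are set before running enzyme init.
for var in OPENAI_API_KEY OPENAI_BASE_URL OPENAI_MODEL; do
  if [ -z "$(printenv "$var")" ]; then
    echo "missing: $var (Enzyme will fall back to the free tier)"
  fi
done
```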
Go deeper
- enzyme petri | jq '.petri_entities[] | .entity': see what entities Enzyme found, check activity levels, explore catalysts
- Catalysts: pre-computed questions that cut across your content, connecting documents that keyword and vector search miss
- Guides: templates and scripts to describe your content structure before compiling