An LSP for your notes
The context stack you already have
When you open Claude Code in a project, a stack of markdown files gets concatenated into the agent’s context. Your personal ~/.claude/CLAUDE.md, your project-level CLAUDE.md, any .claude/rules/*.md files, skill descriptions from installed plugins. If you have an agents.md (the cross-agent standard that works with Cursor, Codex, Copilot, etc.), that goes in too.
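The layering amounts to a simple concatenation. A rough mental model in Python — the file set and precedence here are illustrative, not Claude Code's documented load order:

```python
from pathlib import Path

# Illustrative only: the real file set and precedence belong to
# Claude Code and are not specified here.
layers = [
    Path.home() / ".claude" / "CLAUDE.md",        # personal preferences
    Path("CLAUDE.md"),                            # project conventions
    *sorted(Path(".claude/rules").glob("*.md")),  # narrow, specific rules
    Path("agents.md"),                            # cross-agent standard
]

# Missing layers are simply skipped.
context = "\n\n".join(p.read_text() for p in layers if p.exists())
```

Whatever survives that join is what the agent sees at session start.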
Each layer adds something. The personal CLAUDE.md carries your preferences across projects. The project CLAUDE.md describes the codebase: build instructions, conventions, where things live. Rules files handle specifics. Skills load on demand with richer context. Most people who use Claude Code seriously have put real time into tuning this stack.
All of these layers share one property: they’re session-scoped. The agent reads them at session start, follows the instructions, and forgets them when the session ends. They tell the agent how to behave. They don’t tell the agent what’s in the files.
The grep problem
I have an Obsidian vault with about 7,000 notes. A few stable folders that don’t change much, and a rotating set of fifteen or so tags that I use to bookmark ideas as I capture them — #storytelling, #taste, #hospitality, whatever thread I’m pulling on that week. My CLAUDE.md is 200 lines. When I ask the agent to find something in the vault, it has two options: grep for a keyword I provide, or read files one at a time. If I ask for something conceptual — “what have I written about risk tolerance” — the agent has to guess at grep terms. Maybe it tries risk, then tolerance, then decision, then starts reading files hoping to stumble into the right neighborhood. It’s a smart agent guessing blind.
The CLAUDE.md doesn’t help here. It tells the agent how to search (which commands to call, which directories to look in). It can’t tell the agent what 7,000 notes are about. The agent has no map of the territory. Every session starts from the same place: the agent knows the rules, but has to rediscover the content.
This is fine for codebases. Code has structure — function names, imports, type signatures. You can grep a codebase and land in roughly the right place. An LSP gives the agent even more: symbols, definitions, references, a live map of what exists where. The agent doesn’t have to guess at grep terms for code because the LSP already knows the shape of the project.
Personal notes don’t have that. No function signatures, no type system, no imports. A note about a hiking trip and a note about quitting a job might be about the same underlying tension, but nothing in the file structure or the text makes that connection greppable. The agent would have to read both notes and make the connection itself, and it would have to know to read those two notes in the first place.
What a conceptual index does
Enzyme generates something like an LSP for a knowledge vault. For each tag, link, and folder in the vault, it produces catalysts — thematic questions derived from the actual note content. Each catalyst is embedded as a vector, with cosine similarities precomputed against every chunk in the index. The catalysts update on a temporal decay curve: what you’ve been writing this week weighs more than what you wrote six months ago.
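The decay curve itself isn't specified in detail, but the idea can be sketched with an exponential half-life. A minimal sketch, assuming a 30-day half-life (that number is an illustration, not Enzyme's actual setting):

```python
from datetime import datetime, timedelta, timezone

def decay_weight(note_modified: datetime, half_life_days: float = 30.0) -> float:
    """Weight a note's contribution to catalyst generation by its age:
    a note written one half-life ago counts half as much as one
    written today. half_life_days is an assumed parameter."""
    age_days = (datetime.now(timezone.utc) - note_modified).total_seconds() / 86400
    return 0.5 ** (age_days / half_life_days)
```

Under this scheme a six-month-old note still contributes, just at a small fraction of the weight of this week's writing — it never drops to zero, it just stops dominating.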
When the agent searches, it searches against these catalyst vectors. It doesn’t have to guess at grep terms because the index already holds a conceptual map of what’s in the vault and how the pieces relate. The map shifts as you write more, because the catalysts regenerate weighted toward recent material.
The practical difference: I opened a session with a vague question about product philosophy. The agent searched and pulled back five notes from months apart. One was about watching a friend evaluate gas station coffee on a road trip in Iceland. Another was about design voice in my own product. No grep term connects those. The index connected them because both notes sit near catalysts about accumulated discernment and how taste shows up in decisions — and those catalysts existed because the notes under my #taste and #craft tags had been generating questions in that space.
A separate session: I asked about risk tolerance in early-stage decisions. The engine surfaced a meeting note from nine months ago where I’d described someone quitting her job. I hadn’t tagged it with anything related to risk. But the language in that note was thematically close to what I was working through. The catalysts had picked up on the resonance because my recent writing was tilting the index toward that territory.
The CLAUDE.md told the agent how to call the search tool. The conceptual index is what the search tool actually knows.
The guide shapes the lens
Enzyme has a config called a guide. It looks like a CLAUDE.md — prose in a markdown file describing what you care about, which tags matter, what threads are active. But instead of entering the session context, it enters catalyst generation. It shapes the questions the index produces, which changes what search considers relevant.
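A guide is just prose. An invented example — Enzyme's actual guide format isn't documented here, so treat the structure as a guess:

```markdown
# Guide

Right now I'm thinking about how taste shows up in product
decisions, and about risk tolerance in early-stage choices.

Tags that carry weight: #taste, #craft, #storytelling.
The Daily/ folder is journaling; Projects/ is active work.
```

Nothing in this file reaches the agent's session context; it only biases which thematic questions the engine generates from the notes.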
The guide isn’t the point. The point is that the vault has a live conceptual map the agent can query, and that map updates as you write more. The guide is one input into how the map gets generated. Your notes are the primary input. The temporal weighting is another. The guide adds a lens — it tells the engine what you’re paying attention to right now, which changes how it reads the notes that are already there.
Without a guide, the catalysts are still useful. They surface connections from the note content itself. The guide sharpens them toward your current thinking. A vault with two years of journaling and zero tags still produces a conceptual map that the agent can search. The map gets better with a guide, but it exists without one.
A vault with nothing to grep
The tagging post described a vault with two years of daily journaling, zero tags, zero wikilinks, just dated files. No LSP for code would help here. grep would require knowing exactly what you’re looking for. The CLAUDE.md for this vault would be almost empty.
Enzyme still produced a conceptual map. A journal entry from a hiking trip used language about navigating unfamiliar terrain that rhymed with how the writer had been thinking about early business decisions. The catalysts caught that because the temporal weighting made recent entries the lens for the whole vault, and the recent entries were about entrepreneurship. The hiking note from a year and a half ago got pulled into a thematic neighborhood it was never filed in.
That’s what a static file can’t do. A CLAUDE.md describes the vault as it was when you wrote the description. A conceptual index regenerates from the vault as it is now, weighted toward what you’ve been writing lately. The map is alive in a way the instruction file isn’t.
Try it
Enzyme indexes Obsidian vaults using temporal decay and catalyst-mediated semantic search. The engine is free — unlimited vaults, all commands, no account. If you’ve been relying on your CLAUDE.md to make search work and finding that the agent still guesses at grep terms, the setup page has what you need.
Enzyme ships with a default guide that works out of the box. But the guide that knows your threads, your tags, the tensions you keep returning to — that one takes sitting with your vault. We offer guide development as a service: we read your archive, identify which entities carry weight, and configure the lens so the conceptual map reflects how you actually think about your material. Two people with the same vault and different guides get different catalysts. The tuning is where the craft lives.