For product teams
You built the capture.
Your users can't find what's in it.
Your users save, capture, and bookmark. They never see what it adds up to. Search handles queries users already know to make; the thematic structure of someone's accumulated material stays invisible. Enzyme turns any content corpus into a profile of what someone's been thinking about.
Spotify Wrapped, but continuous and for any content type. Built by the ML lead behind Spotify's AI Playlists, Smart Shuffle, and DJ — the same recommendation-system thinking, applied to how products understand their users' accumulated content.
Runs on your infra.
Engine
11MB binary + 23MB embedding model (int4 ONNX). Runs on 4 CPU threads — no GPU required. Each user's index is a SQLite file + embeddings binary, a few MB per user.
Queries
100% local after init. Semantic search in ~50ms. Trending entities in ~1ms (SQLite). No API calls at query time. No per-query cost.
Indexing
~15 seconds for 1,000 documents. One LLM call per entity for catalyst generation — $0.01–0.10 per user per refresh. BYOK or bulk key.
Privacy
User data stays on your servers. No data leaves your infrastructure at query time. Your users hear "your data never touches a third-party server" — because it's true.
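A hypothetical sketch of the local query path described above. The table name, columns, and query here are illustrative only, not Enzyme's actual schema, but they show why a trending-entities lookup against a per-user SQLite file costs about a millisecond and zero API calls:

```python
import sqlite3

# Illustrative only: NOT Enzyme's real schema, just a sketch of what a
# per-user local index can look like as a single SQLite file.
db = sqlite3.connect(":memory:")  # in production, one small .db file per user
db.execute("CREATE TABLE mentions (entity TEXT, seen_at TEXT)")
db.executemany(
    "INSERT INTO mentions VALUES (?, ?)",
    [("rust", "2024-05-01"), ("rust", "2024-05-03"),
     ("sqlite", "2024-05-02"), ("rust", "2024-05-04")],
)

# "Trending entities" reduces to a local aggregate query: no network,
# no per-query cost, latency dominated by SQLite itself.
rows = db.execute(
    "SELECT entity, COUNT(*) AS n FROM mentions "
    "GROUP BY entity ORDER BY n DESC LIMIT 5"
).fetchall()
print(rows)  # [('rust', 3), ('sqlite', 1)]
```

Because the whole index lives in one file next to your code, the query path never leaves the process.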
Most memory infrastructure is cloud-hosted — your users' data on someone else's servers, per-query pricing that compresses your margins at scale. Enzyme ships as a library. Queries are 100% local. The only cloud cost is a lightweight LLM call during indexing — cents per user, through your own API key.
Hosted service or embedded library.
Typical memory infrastructure is a cloud service you call over the network. Enzyme is an embedded library, the same pattern that made SQLite the most deployed database in the world. No server, no container, no managed service. It runs where your code runs.
| | Cloud memory service | Enzyme |
|---|---|---|
| Footprint | Hosted infrastructure | 34MB total (11MB binary + 23MB model) |
| Query cost | Per-query pricing | $0 — queries are local |
| Data residency | Third-party servers | Your servers, your users' devices |
| Latency | Network-dependent | ~50ms semantic search, ~1ms entity lookup |
| Setup | API integration + managed service | Embed the library, run on your infra |
SQLite isn't less capable than Postgres — it's a different architecture for a different deployment model. Enzyme applies that same principle to semantic indexing. Embedded, zero-config, runs everywhere.
Beyond keyword matching
A user with 3.5 years of plain-text journal entries — no tags, no links, no structure — asked about a direction they'd been considering. Enzyme surfaced a hiking trip entry from two and a half years earlier as the origin point:
"I tucked it away for two and a half years. It just came back to me."
A video editor and animator with 10,000+ notes spanning nearly two decades — journals, projects, research into mythology and depth psychology:
"Enzyme is catalyzing wondrous reunions with lost or forgotten threads, making incredible sweeping connections across all walks of my life. It's my ally in animating this way of living."
Your team can build basic semantic search in a week. Embeddings, cosine similarity, SQLite. What takes months is what comes after: the preprocessing pipeline that builds a thematic profile over the full corpus before anyone asks a question, so your product can show users connections they didn't know to search for.
That's what Enzyme is. Entity extraction, thematic clustering, precomputed similarity, and a domain configuration layer that tunes profiling to your product's content. 2–6 months of engineering and ongoing maintenance, or a library you integrate in a week.
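The week-one version described above really is small. Here is a self-contained sketch, with hard-coded placeholder vectors standing in for a real embedding model (in practice you'd embed with a model like the int4 ONNX one mentioned earlier, and persist vectors next to a SQLite index):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Placeholder vectors: a real system would produce these with an
# embedding model at index time.
docs = {
    "hiking trip notes":   [0.9, 0.1, 0.0],
    "quarterly planning":  [0.1, 0.8, 0.2],
    "trail gear research": [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of "outdoor trips"

# Rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # "hiking trip notes"
```

This is the easy part the paragraph concedes. The months go into the offline profiling pipeline that runs before any query exists.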
Your users save things. Enzyme tells them what it all means.
Enzyme reads the shape of your content and configures profiling automatically. Configuration happens at the domain level — one setup covers all your users.
Voice · Meeting apps
"Here's what your conversations have been about this quarter."
Reading · Highlight apps
"Here's the thread running through your highlights this month."
Bookmark · Save apps
"Here's what your saves say you've been thinking about."
Companion · Chat apps
"Here's how your conversations have evolved over time."
Journal · Notes apps
"Here's a connection between something you wrote today and something from eight months ago."
Each one is the same engine: ingest corpus, extract entities, generate thematic catalysts, build the map. You built the capture. We built the layer that makes capture meaningful.
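A toy sketch of that four-step pipeline. None of this is Enzyme's implementation: entity extraction here is a naive capitalized-word heuristic, and the catalyst step is a stub where the per-entity LLM call mentioned earlier would go.

```python
import re
from collections import Counter

def extract_entities(doc):
    # Toy stand-in for real entity extraction: capitalized words only.
    return re.findall(r"\b[A-Z][a-z]+\b", doc)

def generate_catalyst(entity, count):
    # Stub for the one-LLM-call-per-entity step described above.
    return f"You've mentioned {entity} {count} times; what's pulling you back?"

def build_map(corpus):
    # Ingest corpus -> extract entities -> generate catalysts -> build the map.
    counts = Counter(e for doc in corpus for e in extract_entities(doc))
    return {e: generate_catalyst(e, n) for e, n in counts.most_common()}

corpus = [
    "Sketched the Iceland trip again today.",
    "Iceland keeps coming up in my planning notes.",
]
profile = build_map(corpus)
print(list(profile)[0])  # most-mentioned entity first
```

The domain examples above differ only in what the corpus is and how the catalysts are phrased; the pipeline shape stays constant.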
How it ships
Enzyme integrates as an SDK or an installable skill, depending on your product's architecture. You run init and refresh on your infra, serve queries from your infra. Works as a library your team embeds, or as a standalone tool your users install themselves.
What you get
- ✓ Full engine — semantic search, trending entities, project lens onto external content
- ✓ Domain configuration for your content shape — auto-generated, refinable per use case
- ✓ Multiple catalyst profiles — dialectical, synthesis, relational, operational, strategic, reflective
- ✓ Ongoing engine updates and new catalyst architectures
For teams evaluating memory infrastructure
If you're comparing cloud memory services, here's what's different about Enzyme: your users' data never touches a third-party server — because there is no third-party server. Query costs don't scale with usage — because queries are local computation, not API calls. And your margins don't compress at scale — at 10K users doing 50 queries/day, the difference between $0/query and $0.001/query is $15K/month.
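Working the numbers behind that claim (assuming a 30-day month and flat per-query pricing):

```python
users = 10_000
queries_per_user_per_day = 50
price_per_query = 0.001  # dollars; typical per-query API pricing
days = 30                # assumed month length

# Monthly spend at per-query pricing; local queries cost $0 instead.
monthly_cost = users * queries_per_user_per_day * price_per_query * days
print(f"${monthly_cost:,.0f}/month")  # $15,000/month
```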
Every AI product starts with hosted memory. The ones that scale bring it in-house.
Try it on your content
Send us a sample of your users' content. We'll show you what Enzyme sees in it: the thematic profile, the unexpected connections, the mirror your users don't have yet. Works out of the box with text; if your content is in another format, we'll help you get set up. Free to evaluate. If it's sharper than what you have, we talk integration.
Free for individuals. GitHub