Sinapt and the Queryable Company

275+ SaaS apps per enterprise. 40-60% of RAG never ships. Your knowledge is everywhere and queryable nowhere. Every AI agent rebuilds context from scratch. Companies that win the AI era will stop paying the knowledge tax. Sinapt is the bet on how.


The average large enterprise runs 275+ SaaS applications. Between forty and sixty percent of RAG implementations never reach production. Seventy percent of the ones that do still lack systematic evaluation. Your knowledge is everywhere and queryable nowhere. Every AI agent your team uses rebuilds context from scratch, every session, every prompt, every time.

Call it the knowledge tax. You pay it in re-explaining yourself to ChatGPT for the third time today. You pay it in the engineer who can't remember which Slack channel had the production decision. You pay it in the agent that hallucinates because the actual answer was in a Granola transcript nobody indexed. You pay it in every onboarding that takes three months because most of the company isn't written down anywhere a human, let alone a model, can find.

Companies that win the AI era will be the ones who stop paying it.

The thesis: in the next five years, the dividing line between companies that pull ahead and companies that fall behind will be whether their entire institutional knowledge — meetings, code, chat, docs, tickets, files, email — is queryable by AI as a single layer. The losers will keep re-explaining context to every model, every session, every teammate. The winners will operate at the speed of the agent that already knows.

This is the bet behind Sinapt — the knowledge base I'm building, agent-first, for the AI-native era.

This is the new "indexed by Google" moment

In 2002, the question for every company was "are you indexed by Google?" The companies that were became findable. The companies that weren't disappeared from the new economy. There was no middle ground.

In 2026, the equivalent question is "is your company queryable by AI?" Not your public company — that's already trivially true if you have a website. Your internal company. The institutional context that lives in 30 different SaaS apps, 10 years of Slack history, 4 different doc systems, every Zoom recording nobody tagged, every code review thread, every ticket comment, every email chain.

If your team's AI agents can pull from that layer in one query, you have a company. If they can't, you have a federation of disconnected silos that happens to send each other email.

Why Notion / Confluence / Glean won't get you there

Look closely at what Notion, Confluence, Glean, and the dozen "wikis-with-AI-bolted-on" tools actually optimize for. The answer: humans browsing pages. The page is the unit. The browser is the surface. The user is a human eyeball. Everything else — search, AI features, integrations — is layered on top of a substrate that was designed before AI agents existed.

When an agent tries to use one of these tools, it's a second-class citizen reaching into a UI that wasn't built for it. It reaches in either through a screen-scraping integration that breaks every time the vendor ships a redesign, or through an "AI feature" the vendor optimized for marketing demos rather than for production agent loops.

This is the wrong end of the architecture. Agent-first knowledge infrastructure starts from the opposite assumption: the agent is the primary user. The web UI exists for occasional human admin — billing, permissions, curation — not for the daily flow. The daily flow is your agents calling Sinapt directly via MCP, REST, or CLI. The web is a supporting surface, not the product.
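To make "the agent is the primary user" concrete, here is a minimal sketch of what a single agent query against a layer like Sinapt might look like. The endpoint payload shape, field names, and collection ids are assumptions for illustration, not Sinapt's published API.

```python
import json

# Hypothetical sketch: the payload an agent might POST to an agent-first
# knowledge layer. Field names and collection ids are assumptions, not
# Sinapt's actual API surface.

def build_query(question: str, collections: list[str], limit: int = 5) -> dict:
    """Assemble one query against the unified layer, scoped to the
    collections this agent's permissions allow."""
    return {
        "query": question,
        "collections": collections,  # per-collection permission scope
        "mode": "hybrid",            # vector + keyword + structure
        "limit": limit,
    }

payload = build_query(
    "Which Slack thread holds the production rollback decision?",
    collections=["slack", "meetings", "tickets"],
)
print(json.dumps(payload, indent=2))
```

The point of the sketch is the shape, not the names: one call, one scope, one substrate, no UI in the loop.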

| Dimension | Notion / Confluence / Glean / Wikis-with-AI | Sinapt |
| --- | --- | --- |
| Primary user | Humans browsing pages | Agents querying directly (MCP, REST, CLI) |
| Web UI role | The product | Occasional admin surface — not the daily flow |
| Source format | Proprietary blocks/databases; vendor lock-in by design | Plain markdown; export everything as `.md`; open-source MIT CLI |
| What it indexes | Pages users wrote in the tool itself | Where knowledge lives: meetings, code, chat, docs, tickets, files, email (connector framework) |
| Agent attach | Reach into HTTP UIs through scrapers or bolted-on "AI features" | First-class MCP server (`claude --mcp sinapt`); persistent across sessions and machines |
| Training data policy | Varies; often opaque | Never — customer KB content not used for model training, by us or upstream providers |
| Exit story | Vendor-bound | Open: plain markdown source + open-source CLI = independent of whether Sinapt exists tomorrow |

This inversion is not cosmetic. It changes everything downstream: the data model, the API surface, the export story, the auth model, the integration framework, the way queries are scoped. A tool built human-first cannot be retrofitted agent-first by adding an MCP endpoint. The substrate has to be different from the start.

What "queryable by AI" actually means

Most people read "queryable" and picture a search box. That's not what this means. A queryable company has all five of these properties at once:

  1. Knowledge indexed where it lives. Not migrated to a new tool. Not "imported into the wiki when someone has time." Indexed in place — meetings (Granola, Otter, Read.ai, Loom), code (GitHub, GitLab, local repos), chat (Slack, Discord), docs (Notion, Drive, Confluence, Obsidian), tickets (Linear, Jira), files (S3, local FS), email (Gmail). The connector framework is the product, not a "list of integrations" tab.
  2. One query layer over all of it. Your agent doesn't ask seven different APIs and try to merge the results. It asks one substrate that already merged them. Hybrid search across vector + keyword + structure. Per-collection permissions, so the recommendation engine and the support chatbot see different scopes of the same backend.
  3. Persistent, not session-bound. Context survives across sessions, machines, teammates. Stop re-explaining yourself when you switch from your laptop to your phone, from this morning's session to this afternoon's, from your IDE to your colleague's IDE. The knowledge is not something you carry — it's something the layer carries for you.
  4. Agent-native interfaces. MCP server for AI agents. REST API for tools. CLI for developers. Web UI for humans doing occasional admin. Three primary interfaces, one substrate, agents as the assumed default caller.
  5. Yours. Plain markdown source format. Open-source CLI under MIT license. Exportable any time. Never used as model training data — not by us, not by upstream model providers. The exit story is honest: your data and tooling are independent of whether the vendor exists tomorrow.

That's the spec. There's no shortcut. Any tool that gives you fewer than five of these things is selling you a feature, not infrastructure.
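Point 2 — one query layer that already merged the sources — is the load-bearing one, so here is a toy sketch of one standard way to fuse vector and keyword result lists: reciprocal rank fusion. The document ids and the fusion constant are illustrative; this is a generic technique, not a claim about Sinapt's internal ranking.

```python
# Toy reciprocal rank fusion (RRF): fuse several ranked result lists into
# one, scoring each doc by sum(1 / (k + rank)) over the lists it appears in.
# Doc ids below are made up for illustration.

def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Return doc ids ordered by their combined RRF score."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["meeting:2024-03-04", "slack:C123/99", "ticket:LIN-88"]
keyword_hits = ["slack:C123/99", "doc:runbook", "meeting:2024-03-04"]
merged = rrf_merge([vector_hits, keyword_hits])
print(merged)  # the doc ranked well by both lists wins
```

A doc that appears high in both lists ("slack:C123/99" here) outranks one that tops only a single list — which is exactly the behavior you want from a layer that merges sources before the agent sees them.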

The knowledge tax compounds

Let me get specific about the cost of the alternative.

If you're a 50-person engineering team and every developer spends 30 minutes a day re-explaining context to an AI assistant — pasting the relevant files, summarizing the architectural decision, restating constraints, repeating the team conventions — that's 25 person-hours per day burned on context reconstruction. Roughly 6,250 hours per year. At $100/hour fully loaded, around $625K annually that you're paying just to re-explain yourselves to your own tools.
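The back-of-envelope math, assuming ~250 working days per year:

```python
# Knowledge-tax estimate from the figures above. The 250 working days
# is an assumption; everything else is the scenario's own inputs.

developers = 50
minutes_per_dev_per_day = 30
working_days = 250
hourly_rate = 100  # USD, fully loaded

hours_per_day = developers * minutes_per_dev_per_day / 60  # 25 person-hours
hours_per_year = hours_per_day * working_days              # 6,250 hours
annual_cost = hours_per_year * hourly_rate                 # $625,000

print(f"{hours_per_day:.0f} h/day, {hours_per_year:,.0f} h/yr, ${annual_cost:,.0f}/yr")
```

Swap in your own headcount and rate; the shape of the result — six figures a year spent on pure context reconstruction — survives most reasonable inputs.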

That's the team that's trying to use AI seriously. The team that isn't trying gets the inverse problem: their agents make confidently wrong decisions because they can't reach the actual context, and someone has to clean up after every one. That cost is harder to quantify but easy to recognize.

Both teams are paying the knowledge tax. One in time, the other in correctness. The Gartner number is 30 to 70 percent productivity gain in knowledge-heavy workflows after proper agentic RAG deployment. That's the gap. That's what's at stake. That's the difference between teams that compound and teams that don't.

Why I'm building Sinapt myself

I've spent 1,282 documented hours in Claude Code across ten-plus active repos since early 2025. I've felt the knowledge tax personally, in every session where I had to re-explain my own architecture to my own AI before any work could happen. I've watched the agentic-knowledge-base market grow up around incumbents whose stack was wrong from the substrate.

Glean is at $7.2B. Notion is bolting AI onto a wiki paradigm. Atlas, Confluence, Document360, the dozen others — all of them are running the same playbook: human-first surface, AI-second integration, vendor-locked source format. The whole category is sitting at the wrong end of the architecture.

Sinapt is the bet that there's room for a clean stack — built agent-first, vendor-neutral by design, two decades of engineering judgment applied to a problem the AI-native era created. The manifesto is here, the landing page is here, and the architectural decisions are being published on this blog as I make them.

The roadmap is honest: Phase 1 (architecture lock-in — infrastructure, search engine, contract surfaces), Phase 2 (proof of concept), Phase 3 (MVP with team support, integrations, billing), Phase 4 (public launch). No deadlines. Each phase ships when it's ready. The Sinapt Cockpit — the unified driving surface for Claude Code, Codex, and your existing CLIs — comes after the knowledge base has paying customers.

Backed by Petreski LLC. US legal entity. Wyoming. EIN 32-0778812. Real recourse, real entity, two decades behind every decision. Self-hostable enterprise tier on the roadmap. Open-source CLI under MIT license. Plain markdown substrate. Even the exit story is honest.

The deeper signal

Watch the signals. Gartner declared in mid-2025 that "context engineering is in, prompt engineering is out", predicting the discipline will appear in 80% of AI tools by 2028. LinkedIn published their internal numbers: 20% increase in AI coding adoption and 70% drop in issue triage time after deploying an agentic knowledge base. The MCP protocol went from "Anthropic feature" to "de facto industry standard" in less than twelve months. Anthropic just shipped Workload Identity Federation, putting AI auth on the same tier as the rest of your cloud.

Every one of these is a separate signal pointing at the same future: AI agents are becoming first-class infrastructure consumers, and the layer they consume from is going to be a category-defining business. The companies that build their internal knowledge as agent-queryable infrastructure starting now will look, in retrospect, like they saw the future. The companies that defer it will look like they were trying to build a 2026 business on a 2018 information architecture.

The teams that wait will not catch up. The compounding effect is the whole point: an agent that can query the company gets smarter every day, because every meeting, commit, and Slack thread becomes new context. An agent that can't stays exactly as helpful as it was on day one, forever.

Where to go from here

If any of this resonates — the knowledge tax you're paying, the gap between what your agents could do and what they actually do, the suspicion that the wiki paradigm isn't going to scale into the next era — go read the rest at sinapt.ai. The full thesis is there. The architecture, the integrations, the security model, the roadmap. Plus the alpha-drop list — single email when there's something to try, no marketing, no newsletter, just the signal.

If you want the longer-form thinking: the Sinapt manifesto walks through the two-product structure (knowledge base + cockpit). The Cockpit explains the unified-driving-surface pattern. The Knowledge Equation is the architectural prior on why context architecture matters more than prompt cleverness. The Knowledge Base That Builds Itself is the operational pattern Sinapt makes shippable for everyone.

If you're a team — engineering, product, ops, anyone whose context is fragmented across more than three SaaS tools — the question to sit with is not "do we need a knowledge base." You already have eight of them. The question is: "is our institutional context queryable as one layer, or are we paying the knowledge tax forever?"

Sinapt is the bet that the answer can be yes.

→ sinapt.ai


Related Reading

💬 Working with a team that wants to adopt AI-native workflows at scale? I help engineering teams build this capability — workflow design, knowledge architecture, team training, and embedded engineering. → AI-Native Engineering Consulting