Manifesto

Agentic Understanding
Should Be Open, Not Opaque

We're building toward a future where humans and AI share understanding. Not just information.

The Problem

AI is generating more knowledge than ever. And we're losing it faster than ever.

Every conversation with an AI produces insights, decisions, and context. But when the chat ends, it's gone. Locked in proprietary “memory” systems you can't inspect. Scattered across a dozen tools. Impossible to share with your team or your other agents.

The irony is stark: we have the most powerful knowledge tools in human history, and we're still copying and pasting between them. Still re-explaining the same context in every new session. Still watching valuable insights evaporate into the void.

Most AI memory is a black box. You put things in and hope the right things come out. You can't see what's there, edit what's wrong, or share what's valuable. That's not memory. That's a lottery.

What We Believe

  • Context should be open and inspectable. If an AI knows something about you or your project, you should be able to see it, edit it, and decide who else gets access.
  • Structure outlasts models. The AI landscape shifts monthly. Your context shouldn't be locked to any single model, platform, or vendor. It should be portable, durable, and yours.
  • Humans stay in control. As AI gets more autonomous, oversight becomes more critical. Not less. You should see what agents are doing, approve what they change, and reverse what they get wrong.
  • Shared understanding beats isolated intelligence. The best work happens when teams and their tools operate from the same context. Real-time, multiplayer, in sync.

If this resonates, we'd love to build with you.
If this resonates, we'd love to build with you.