The Real Lock-In Isn't the Model. It's the Memory.

Everyone's arguing about which AI model is best. The more interesting question is which one remembers you.

I ran OpenClaw myself last year, and the coverage kept missing why it blew people away. It wasn't the model. It was two moves most vendors overlooked. First, it took the scary terminal-window version of an AI agent and tucked it inside Slack and Signal, the tools people already had open. Users stopped going to the AI. It was just there. Second, it layered in real memory and scheduled tasks. It remembered what you told it last week. It checked in on its own. It ran your Monday-morning routine without you asking.

After six months of that, switching costs aren't about data export. The agent has your vocabulary, your clients, your running jokes, the shape of how you actually work. That's the lock-in nobody puts a price on during a demo.

Here's what makes this week different. Anthropic accidentally pushed half a million lines of Claude Code to a public registry, and buried in the leak is an internal project called Conway: an always-on agent with its own extension format, browser control, and connections to your other tools. It's not on any roadmap. It's the same OpenClaw pattern, built deliberately by the vendor with the biggest checkbook. (Nous Research shipped Hermes, an MIT-licensed version of the same idea, this week too — both ends of the market arriving at the same conclusion: the memory is the product.)

If you're evaluating an AI tool this quarter, the question isn't "how smart is it?" It's "what does it remember about us, where does that memory live, and what happens to it if we leave?" If the salesperson can't answer cleanly, you're not buying a tool. You're renting a relationship.