Nishant Soni from NonBioS has seen roughly 1,000 OpenClaw deployments pass through his infrastructure. His conclusion after all of that? Zero legitimate production use cases. The AI agent framework that Jensen Huang called "the operating system for personal AI" works technically. It installs, runs, connects to WhatsApp and Discord, talks to Claude and GPT, executes shell commands. But its memory is unreliable, and worse, you never know when it will break.
The problem won't get patched, because it isn't a single bug. The agent runs as a persistent, always-on assistant, but its context fills up and things get forgotten. Sometimes the important things. Soni gives a concrete example: you ask OpenClaw to send an email update about a birthday party. Three people confirmed, one declined. OpenClaw sends the update but loses track of who declined, so wrong information goes out to everyone. And you don't catch it, because the whole point of an autonomous agent is that you aren't checking every output.

Soni has spent a year working on this exact problem at NonBioS. He calls their approach "Strategic Forgetting": actively choosing what the agent should drop, and when, rather than trying to hold onto everything. He says flatly that keeping an AI agent coherent over long task horizons is the hardest engineering problem in the entire space, and that a file-based memory map won't solve it. Community feedback backs up the diagnosis, and adds a separate concern: a critical vulnerability in the framework's pairing permissions.
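NonBioS hasn't published how Strategic Forgetting works, so the following is only a minimal illustrative sketch of the general idea, not their implementation. Everything here is hypothetical: the `AgentMemory` and `MemoryItem` names, the token budget, and the importance scores. The point it shows is the difference between naive truncation (drop the oldest thing when the context is full) and deliberate forgetting (drop the least important thing):

```python
from dataclasses import dataclass


@dataclass
class MemoryItem:
    text: str
    tokens: int       # rough size of this item in the context window
    importance: float  # higher = more worth keeping (hypothetical score)


class AgentMemory:
    """Toy context buffer with a hard token budget."""

    def __init__(self, budget: int):
        self.budget = budget
        self.items: list[MemoryItem] = []

    def add(self, item: MemoryItem) -> None:
        self.items.append(item)
        self._evict()

    def _evict(self) -> None:
        # "Strategic forgetting": when over budget, drop the *least
        # important* item, not the oldest one (naive FIFO truncation
        # is what loses the "one person declined" fact).
        while sum(i.tokens for i in self.items) > self.budget:
            victim = min(self.items, key=lambda i: i.importance)
            self.items.remove(victim)

    def contents(self) -> list[str]:
        return [i.text for i in self.items]


mem = AgentMemory(budget=30)
mem.add(MemoryItem("Alice confirmed", 10, importance=0.5))
mem.add(MemoryItem("Bob confirmed", 10, importance=0.5))
mem.add(MemoryItem("Dana declined", 10, importance=0.9))  # the fact that matters
mem.add(MemoryItem("Small talk about the weather", 10, importance=0.1))

# The low-importance chatter gets evicted; "Dana declined" survives.
print(mem.contents())
```

With FIFO eviction, "Alice confirmed" would have been dropped instead and the chatter kept; score-based eviction keeps the declined RSVP in context. The hard part, of course, is the piece this sketch hand-waves: assigning those importance scores correctly, which is exactly where real agents fail.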
The only use case Soni found that genuinely works is daily news summaries. That's it. A personalized morning briefing sent to WhatsApp. Nice, but Zapier plus any LLM API already handles this. So does ChatGPT's scheduled tasks feature. You don't need a 250,000-star GitHub project with root access to your environment for a news digest. OpenClaw is a fascinating experiment if you want to learn about AI agents and why context management matters. Just don't expect it to run your business.