Pranab Sarkar got frustrated with vector databases that just store stuff without managing it. So he built YantrikDB, a Rust-based memory engine that actually forgets things, merges similar memories, and flags contradictions. It's a shift from the dump-everything-and-hope approach that most AI agents use today. Version 0.5.11 just dropped after a 42-task hardening sprint focused on lock safety and failover resilience.

Real memory isn't a search engine. That's the core insight. YantrikDB applies temporal decay so less relevant information fades over time based on a configurable half-life. Semantic consolidation collapses redundant fragments into canonical entries, so "CEO is Jane Smith" and "company head: J. Smith" merge into one fact. Contradiction detection catches when an agent has stored conflicting information, like recording two different CEOs for the same company.

Benchmarks shared by Sarkar show 99.9% token savings at 5,000 memories compared to file-based approaches like CLAUDE.md, while precision actually improves as the database grows.

You can run it three ways: embedded as a library in Python or Rust, as a networked server with clustering support, or as an MCP server that plugs directly into tools like Claude Code, Cursor, and Windsurf. MCP integration means agents auto-recall context and auto-detect contradictions without extra prompting. Production metrics from a 2-core cluster show recall p99 at 190ms, dropping to around 5ms with pre-computed embeddings.
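The 190ms-to-5ms gap suggests most recall latency goes to computing embeddings at query time, and that pre-computing amortizes that cost. That's an inference from the numbers, not a documented architecture detail, but the general pattern is a simple cache in front of the embedding call. A minimal sketch, with a hash-based stand-in for the real (slow) model call:

```python
import hashlib

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call, normally the slow step.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

class EmbeddingCache:
    """Compute each embedding once; subsequent recalls skip the model call."""
    def __init__(self) -> None:
        self._cache: dict[str, list[float]] = {}

    def get(self, text: str) -> list[float]:
        if text not in self._cache:
            self._cache[text] = embed(text)  # cache miss: pay full cost once
        return self._cache[text]
```

After warm-up, recall is a dictionary lookup plus a vector comparison, which is where sub-10ms latencies become plausible.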

It works. But not everyone's sold.

Commenters on Hacker News questioned whether extracting rigid facts from prose loses important nuance and context. Sarkar's position is firm: at scale, you need transactional logic to manage state and resolve conflicts, and file-based memory falls apart past a few hundred entries. YantrikDB is betting that structured memory hygiene beats context stuffing, even if it means sacrificing some conversational richness.