Andrej Karpathy dropped a new pattern for building personal knowledge bases, and it's different from how most RAG systems work. Instead of uploading documents and having an LLM rediscover answers from scratch every time you ask a question, Karpathy's approach has the LLM build and maintain a persistent wiki of markdown files. When you add a new source, the model extracts key information, updates existing pages, flags contradictions, and strengthens the synthesis. The knowledge compiles once and stays current, rather than being re-derived on every query.

The setup is straightforward: three layers work together. Raw sources stay immutable as your source of truth. The wiki layer holds LLM-generated markdown files that the model owns entirely. A schema document tells the LLM how to structure everything and which workflows to follow. In practice, Karpathy keeps Obsidian open on one side and an LLM agent on the other: Obsidian is the IDE, the LLM is the programmer, and the wiki is the codebase. You handle curation and questions; the model handles the grunt work of summarizing, cross-referencing, and filing.
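The three-layer split can be sketched in a few lines of Python. This is not Karpathy's actual tooling; the directory names, the `SCHEMA.md` file, and the `llm` call are all hypothetical stand-ins (the model call is stubbed so the sketch runs). The key property it illustrates: sources are written once and never edited, while the wiki layer is rewritten on every pass.

```python
from pathlib import Path

# Hypothetical layout for the three layers; names are illustrative,
# not from Karpathy's setup.
SOURCES = Path("sources")   # layer 1: immutable raw material
WIKI = Path("wiki")         # layer 2: LLM-owned markdown pages
SCHEMA = Path("SCHEMA.md")  # layer 3: structure + workflow instructions

def llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an agent CLI)."""
    # A real implementation would send `prompt` to an LLM agent;
    # here we return a stub page so the sketch is runnable.
    return "# Stub page\n\nExtracted notes would go here.\n"

def ingest(source_name: str, text: str) -> Path:
    """Add a raw source, then let the model update the wiki from it."""
    SOURCES.mkdir(exist_ok=True)
    WIKI.mkdir(exist_ok=True)

    # Raw sources are append-only: written once, never rewritten.
    src = SOURCES / f"{source_name}.txt"
    src.write_text(text)

    # The schema document rides along with every prompt, telling the
    # model how to structure pages and what workflow to follow.
    schema = SCHEMA.read_text() if SCHEMA.exists() else ""
    prompt = (
        f"{schema}\n\nNew source:\n{text}\n\n"
        "Extract key information, update related pages, "
        "and flag contradictions with existing notes."
    )

    # Only the wiki layer gets (re)written by the model.
    page = WIKI / f"{source_name}.md"
    page.write_text(llm(prompt))
    return page
```

A fuller version would have the model rewrite multiple existing pages per ingest, not just create one; the point here is only the separation of the immutable source layer from the model-owned wiki layer.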

Implementations have already appeared. Meowary applies the pattern to developer work journals using the PARA methodology. Owletto offers structured memory for AI agents with hybrid search and external connectors. But the Hacker News crowd raised valid concerns. One commenter likened manual organization to "shower thoughts": insights often surface during the grunt work itself, and offloading that to an LLM might cost you the serendipitous thinking that happens when you file and cross-reference by hand. Another worry is semantic drift. Each time the LLM rewrites the wiki, it works from its own previous outputs, not the raw sources, so errors can compound like a game of telephone, with hallucinations hardening into accepted fact.