Almanac wants your AI agent to help build a wiki. The platform, backed by Founders Inc. (co-founded by Sam Parr and Jaan Tallinn), lets you contribute articles directly from Claude, ChatGPT, Cursor, and Codex. Its new MCP extension turns Claude Code into a deep research agent that can draft and submit entries, and a CLI tool handles terminal-based contributions with a single command. Every article gets sourced and signed, then opened for community edits.

The numbers right now: 47 contributors, 271 articles, 169 topics, and 862 stubs waiting for development. Wikis cover Founders Inc. (82 articles), venture capital (57 articles), TikTok's ad stack (32 articles), and niche subjects like STEM OPT rules. There's even a Demon Slayer fan wiki. The pitch is straightforward: Wikipedia handles the big stuff. Almanac grabs the long tail of knowledge that lives in Discord threads, Slack messages, and private conversations, and turns it into a persistent wiki.

Articles on Almanac get attributed to individual contributors who sign their work. Community edits follow. What's less clear is how rigorous that verification layer actually is. How many people review a typical article before it's considered reliable? What happens when contributors disagree on facts? The MCP integration itself is mostly a workflow convenience. It lets Claude Code search and submit to Almanac without leaving the terminal, instead of shuttling text between a chat and a browser.
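Mechanically, that convenience is just an MCP server registration. The server name and launch command below are assumptions for illustration (the article doesn't name Almanac's package), but the file shape is the standard project-scoped `.mcp.json` that Claude Code reads:

```json
{
  "mcpServers": {
    "almanac": {
      "command": "npx",
      "args": ["-y", "almanac-mcp"]
    }
  }
}
```

Once a server like this is registered, the agent sees its search and submit tools alongside the built-in ones, which is what makes the draft-in-terminal workflow possible.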

Real questions came up on Hacker News. Commenters pointed out that AI coding agents still struggle with summarization and with JavaScript-heavy pages. If Almanac relies on raw HTML scraping rather than a browser automation tool like Playwright, modern web apps could pose a real problem. The platform hasn't publicly addressed how it handles that challenge yet.
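The concern is concrete: fetching a JavaScript-heavy page returns a near-empty HTML shell, and only a rendering engine (what Playwright drives) produces the actual content. A minimal sketch of that failure mode, using a toy heuristic of my own (not anything Almanac has described) that flags shell pages by how little visible text the raw HTML contains:

```python
from html.parser import HTMLParser


class VisibleTextCounter(HTMLParser):
    """Counts visible text characters, skipping <script>/<style>/<noscript> bodies."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.visible_chars = 0
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0:
            self.visible_chars += len(data.strip())


def looks_like_js_shell(html: str, threshold: int = 50) -> bool:
    """Heuristic (an assumption, not Almanac's method): raw HTML with almost
    no visible text probably renders client-side and needs a real browser."""
    parser = VisibleTextCounter()
    parser.feed(html)
    return parser.visible_chars < threshold


# A typical single-page-app shell: a raw scrape sees nothing useful.
spa_shell = (
    "<html><body><div id='root'></div>"
    "<script src='app.js'></script></body></html>"
)

# A server-rendered article: the text is right there in the HTML.
static_page = (
    "<html><body><article>"
    + "Server-rendered prose. " * 10
    + "</article></body></html>"
)

print(looks_like_js_shell(spa_shell))    # True: hand off to a browser engine
print(looks_like_js_shell(static_page))  # False: a plain fetch suffices
```

A real pipeline would use a check like this only to decide when to escalate from a cheap HTTP fetch to a full browser render, since spinning up Playwright for every page is expensive.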