Poolside just dropped two models from its Laguna family. Laguna M.1 is the big one: 225 billion total parameters with 23 billion activated, trained from scratch on 30 trillion tokens across 6,144 NVIDIA Hopper GPUs. It scores 46.9% on SWE-bench Pro, a benchmark that tests whether AI can fix real GitHub issues, and 40.7% on Terminal-Bench 2.0.

Laguna XS.2 is smaller: 33 billion total parameters with 3 billion activated. But it holds its own, hitting 44.5% on SWE-bench Pro, competitive with models carrying far more activated parameters.

XS.2 ships under Apache 2.0, Poolside's first open-weight release. Until now the company has mostly served government and public-sector clients with air-gapped deployment requirements. Both models are free to use for a limited time through Poolside's API and on OpenRouter.

Poolside's stance on agents is blunt: code is the universal interface. Tool calling through structured APIs, they argue, is a transitional pattern. An agent that writes and runs code can compose actions, parallelize work, and build ad-hoc systems on the fly. They're also releasing their agent framework as an Agent Client Protocol server, the same one they use internally for reinforcement learning training and evaluation.
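To make the distinction concrete, here is a minimal sketch of the two patterns. Everything in it is hypothetical: `grep_repo` and `run_tests` are stand-in helpers invented for illustration, not anything from Poolside's framework or the Agent Client Protocol.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical helpers standing in for tools an agent can reach;
# these names are illustrative, not from any real framework.
def grep_repo(pattern: str) -> list[str]:
    files = {"a.py": "def parse(): ...", "b.py": "def render(): ..."}
    return [name for name, text in files.items() if pattern in text]

def run_tests(path: str) -> bool:
    # Stand-in for actually running a test suite against one file.
    return path.endswith(".py")

# Tool-calling style: each action is a separate structured call,
# serialized through the model loop one round-trip at a time.
def via_tool_calls() -> dict[str, bool]:
    hits = grep_repo("def")                  # call 1
    return {f: run_tests(f) for f in hits}   # calls 2..n, sequentially

# Code-as-action style: the agent emits one script that composes the
# same helpers and parallelizes the fan-out itself in a single step.
def via_generated_code() -> dict[str, bool]:
    hits = grep_repo("def")
    with ThreadPoolExecutor() as pool:
        return dict(zip(hits, pool.map(run_tests, hits)))
```

Both paths produce the same result; the difference is that the second collapses n round-trips into one emitted program, which is the composition-and-parallelism argument in miniature.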

The company raised a $126 million Series A led by Bain Capital Ventures in June 2024, with Felicis Ventures participating, at roughly a $500 million valuation. Serious capital for roughly 60 researchers. The bet: purpose-built coding models beat general-purpose LLMs at actual software engineering work [using coding agents](/news/2026-04-08-zechner-pi-earendil), and code becomes the lingua franca for how AI agents get things done.