Most AI agent projects never make it to production. MIT research found that 95% of AI pilots fail to deliver ROI. Cyrus Radfar, who has shipped AI products for over a decade, argues we've been looking in the wrong place: blame the codebases, not the models. Agents keep failing because they work in code with hidden dependencies, mutable state, and side effects that aren't declared in function signatures. An agent sees a function that takes a list and returns a list. It writes tests, the tests pass, and then everything breaks in production because the function secretly depends on a global config and a database singleton. The agent had no way to know.
Radfar's solution is functional programming, formalized into two frameworks called SUPER and SPIRALS. SUPER consists of five principles: side effects at the edge, uncoupled logic, pure functions, explicit data flow, and replaceable by value. The goal is code where an agent can modify any function by reading only that function and its type signature: no hidden state to trace, no global config to discover. His article includes refactoring examples in Python, TypeScript, Go, and Rust showing how to transform problematic code into something an agent can work with safely.
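A sketch of what such a refactor might look like, in the spirit of SUPER (the names and structure here are illustrative, not taken from the article): the config becomes an explicit, immutable parameter, the core logic is a pure function, and the database write is pushed out to a thin shell at the edge.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PricingConfig:
    """Immutable config passed explicitly instead of read from a global."""
    discount: float

def apply_discounts(prices: list[float], config: PricingConfig) -> list[float]:
    """Pure core: the output depends only on the arguments."""
    return [p * (1 - config.discount) for p in prices]

def handle_request(prices: list[float], config: PricingConfig, db) -> list[float]:
    """Impure shell at the edge: calls the pure core, then does the side effect."""
    result = apply_discounts(prices, config)
    db.save(result)  # the only place state is touched, and it's visible here
    return result
```

Now an agent reading only `apply_discounts` and its signature sees everything the function depends on, which is exactly the property the SUPER principles aim for.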
The Hacker News discussion raised some useful extensions. One commenter noted that Test-Driven Development provides similar constraints for keeping LLM behavior deterministic. Another pointed out that Clojure programmers have been writing this way for years. Someone asked about formal specifications, suggesting tools like QuickCheck or clojure.spec could generate tests to validate agent outputs. The pattern Radfar describes is familiar to anyone who has worked on agent projects: impressive demo, promising pilot, gradual degradation, debugging nightmare, abandoned project. Making codebases agent-friendly might be what separates agents that stay in the lab from agents that actually ship.

Radfar's article: "The Invisible Blast Radius Breaking Your AI Agents"
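The QuickCheck-style idea from the thread can be sketched with only the standard library (dedicated tools like QuickCheck, clojure.spec, or Python's Hypothesis do this far more rigorously; the function and properties below are hypothetical examples): generate random inputs and assert invariants that any correct implementation, agent-written or not, must satisfy.

```python
import random

def apply_discounts(prices, rate):
    """Hypothetical agent-written pure function to validate."""
    return [p * (1 - rate) for p in prices]

def check_properties(trials=200):
    """Randomized property checks in the spirit of QuickCheck."""
    rng = random.Random(42)  # seeded so failures are reproducible
    for _ in range(trials):
        prices = [rng.uniform(0, 1000) for _ in range(rng.randint(0, 20))]
        rate = rng.uniform(0, 1)
        out = apply_discounts(prices, rate)
        assert len(out) == len(prices)                    # length preserved
        assert all(o <= p for o, p in zip(out, prices))   # never raises a price
        assert apply_discounts(prices, rate) == out       # deterministic
    return True
```

Because the function under test is pure, these properties are checkable at all; the same invariants could not be stated for the global-state version, which is one concrete way agent-friendliness pays off.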