Cyrus Radfar has shipped AI products for over a decade, and he keeps seeing the same pattern. Impressive demo, promising pilot, gradual degradation, debugging nightmare, project abandoned. MIT research found that 95% of AI pilots fail to deliver ROI. Radfar argues the culprit isn't model limitations. It's the codebases. When an agent writes code into a tangle of mutable state and hidden dependencies, it produces output that breaks in ways nobody can predict or debug.

The core issue is that agents start fresh every session. They don't have the mental model a human developer builds over months of working in a codebase. A function might look simple, taking a list and returning a list, yet fail in production because it depends on a global config object or a database singleton that isn't declared anywhere in its signature. Radfar calls this an "invisible blast radius." His fix is functional programming, formalized into two frameworks called SUPER (five coding principles) and SPIRALS (a seven-step process loop). Pure functions, explicit data flow, side effects at the boundaries. The kind of thing functional programmers have advocated since the 1980s.
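A minimal sketch of the hidden-dependency problem, with hypothetical names (this is an illustration of the pattern, not code from Radfar's frameworks): the first function's signature promises list in, list out, but its behavior secretly depends on module-level state that no call site reveals.

```python
# Hidden global, mutated somewhere far away at runtime.
CONFIG = {"min_score": 0}

def filter_scores_hidden(scores):
    # Looks pure, but silently reads CONFIG -- the "invisible blast radius":
    # a distant mutation changes what this call returns.
    return [s for s in scores if s >= CONFIG["min_score"]]

def filter_scores(scores, min_score):
    # Pure alternative: every dependency is in the signature, so the
    # scope of any change is bounded and visible to an agent or reviewer.
    return [s for s in scores if s >= min_score]

print(filter_scores_hidden([1, 5, 10]))        # [1, 5, 10] -- for now
CONFIG["min_score"] = 6                        # a mutation in unrelated code...
print(filter_scores_hidden([1, 5, 10]))        # [10] -- same call, new result
print(filter_scores([1, 5, 10], min_score=6))  # [10], deterministically
```

The pure version is also trivially testable, which is exactly the property that makes agent-written changes checkable.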

The Hacker News crowd had mixed reactions. One commenter pointed out that the SUPER principles basically describe how you'd naturally write Clojure. Another suggested Test-Driven Development offers similar benefits; as Truss CTO Ken Kantzer put it, any constraint that pushes code toward a functional style helps make LLM output more deterministic. The argument has merit, though Radfar is promoting his own frameworks here. Still, the core insight lands: if you want agents to write reliable code, give them a codebase where the scope of breakage is bounded and dependencies are visible.
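The "side effects at the boundaries" idea can be sketched as the familiar functional-core, imperative-shell split. The names and the checkout domain below are hypothetical; the point is that effects are injected rather than hidden, so an agent editing the core logic cannot accidentally reach the database.

```python
def apply_discount(order, rate):
    # Pure core: no I/O, no globals -- same inputs always yield same output.
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {**order, "total": round(total * (1 - rate), 2)}

def checkout(order, rate, save, notify):
    # Imperative shell: the only place side effects happen, and they are
    # passed in explicitly, so tests (and agents) can substitute stubs.
    priced = apply_discount(order, rate)
    save(priced)      # e.g. a database write
    notify(priced)    # e.g. an email send
    return priced

# Usage with stub effects, the way a test would drive it:
saved = []
order = {"items": [{"price": 10.0, "qty": 3}]}
result = checkout(order, 0.1, save=saved.append, notify=lambda o: None)
print(result["total"])  # 27.0
```

Because all effects funnel through the shell, the blast radius of a change to `apply_discount` is exactly its return value and nothing else.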