The independence axiom is the load-bearing assumption in expected utility theory — the constraint that forces agent preferences to be linear in probabilities and locks virtually every mainstream AI reward model into expected utility maximization. A LessWrong post by Ihor Kendiukhov, curated by the site's editors this month, argues it should be treated as optional.

Kendiukhov's framing is deliberate. The independence axiom, he contends, occupies the same structural role as Euclid's parallel postulate in geometry. In the 19th century, János Bolyai and Nikolai Lobachevsky showed that dropping the parallel postulate produces not contradiction but a different, equally coherent geometry — one that ultimately proved more physically accurate than Euclid's flat-space model. The same move, Kendiukhov argues, is available in decision theory. Drop independence and the three remaining von Neumann-Morgenstern axioms (completeness, transitivity, and continuity) stay intact. By Debreu's representation theorem, you still get a well-defined preference functional and a consistent choice ordering. Nothing breaks.
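
The post stays abstract at this point, but it is easy to exhibit such a functional. The sketch below is not from Kendiukhov's post: the rank-dependent form and the Prelec weighting function are standard illustrative choices from the non-expected-utility literature, and the payoffs are invented. It evaluates lotteries by weighting decumulative probabilities nonlinearly, yielding a preference that is complete, transitive, and continuous yet produces a classic Allais-style reversal, which is exactly an independence violation.

```python
import math

def prelec(p, alpha=0.65):
    """Prelec probability-weighting function: w(p) = exp(-(-ln p)^alpha).
    It fixes w(0) = 0 and w(1) = 1 but is nonlinear in between, which is
    what breaks linearity in probabilities."""
    return 0.0 if p == 0.0 else math.exp(-((-math.log(p)) ** alpha))

def rdu(lottery, u=lambda x: x, w=prelec):
    """Rank-dependent utility of a lottery given as {outcome: probability}.
    Outcomes are sorted best-first and decision weights are taken from the
    decumulative distribution. With w(p) = p this collapses back to
    expected utility."""
    value, cum = 0.0, 0.0
    for x in sorted(lottery, reverse=True):
        prev, cum = cum, cum + lottery[x]
        value += (w(cum) - w(prev)) * u(x)
    return value

A = {3000: 1.0}                  # 3000 for sure
B = {4000: 0.8, 0: 0.2}          # 4000 with probability 0.8
A_mix = {3000: 0.25, 0: 0.75}    # same lotteries mixed 1:3 with "nothing"
B_mix = {4000: 0.20, 0: 0.80}

print(rdu(A) > rdu(B))           # True:  A preferred to B
print(rdu(A_mix) > rdu(B_mix))   # False: the mixture flips the ranking
```

Independence would require the two comparisons to agree, since both mixtures dilute A and B with the same 75% chance of nothing. The agent above disagrees, and it is still a perfectly well-defined chooser.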

What changes is the space of viable agent designs. Independence is the axiom that mandates taking expectations: once it goes, agents are no longer required to evaluate prospects by averaging utilities weighted by their probabilities. Kendiukhov points to ergodicity economics, developed by physicist Ole Peters at the London Mathematical Laboratory, as one principled replacement. Peters's framework derives an agent's evaluation function from the actual dynamics of the stochastic process it inhabits rather than postulating a utility function and taking its expectation. In non-ergodic environments, where time averages diverge from ensemble averages, expected utility maximization can systematically mislead. Real-world sequential decision-making is frequently non-ergodic.
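
The canonical demonstration from the ergodicity-economics literature makes the gap concrete: Peters's repeated multiplicative coin flip, in which each round multiplies wealth by 1.5 or 0.6 with equal probability. The simulation below is a sketch of that example, not code from the post. The expected value grows 5% per round, while almost every individual history shrinks.

```python
import math
import random

def time_average_growth(rounds=100_000, seed=0):
    """Realized per-round growth factor for one gambler who repeatedly
    plays the multiplicative coin flip: wealth is multiplied by 1.5 on
    heads and 0.6 on tails."""
    rng = random.Random(seed)
    log_wealth = 0.0
    for _ in range(rounds):
        log_wealth += math.log(1.5) if rng.random() < 0.5 else math.log(0.6)
    return math.exp(log_wealth / rounds)

# Ensemble average: the expected wealth multiplier per round.
print(0.5 * 1.5 + 0.5 * 0.6)   # 1.05 -> +5% per round "on average"
# Time average: what almost every individual trajectory actually does.
print(time_average_growth())   # ~0.949, i.e. sqrt(1.5 * 0.6): steady decay
```

A linear expected-value maximizer accepts this gamble every round and almost surely goes broke; an agent whose evaluation tracks the time-average growth rate declines it. That divergence is what Peters's framework is built around.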

The post also ties into LessWrong's own updateless decision theory (UDT) research, arguing that UDT reaches the same conclusion from a different direction: the most reflectively stable long-run agents may be precisely those that violate independence. The technical obstacle is a 1988 result by economist Peter Hammond showing that dynamically consistent planning in sequential decision problems essentially forces the independence axiom. Kendiukhov treats that result not as a settled barrier but as a specific, answerable challenge — the one alignment researchers need to engage with if they want coherent non-expected-utility agent designs. The piece draws on Mas-Colell, Whinston, and Green's Microeconomic Theory and David Kreps's Notes on the Theory of Choice throughout.
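
Hammond's obstacle can be seen directly in the rank-dependent evaluator sketched earlier (again an illustrative construction, not Kendiukhov's): put the Allais pair inside a two-stage tree, and the agent's ex-ante plan contradicts its choice at the node.

```python
# Continuing the rank-dependent example above: nature first imposes a
# 75% chance of ending with nothing; if the agent survives that stage,
# it chooses between A (3000 for sure) and B (4000 with probability 0.8).
def ex_ante_value(plan):
    """Value, seen from the root of the tree, of committing to `plan`."""
    chosen = A if plan == "A" else B
    mixed = {x: 0.25 * p for x, p in chosen.items()}
    mixed[0] = mixed.get(0, 0.0) + 0.75
    return rdu(mixed)

print(ex_ante_value("B") > ex_ante_value("A"))  # True:  the plan says B
print(rdu(A) > rdu(B))                          # True:  at the node, A wins
```

An agent that re-evaluates at the node abandons its own plan. Hammond's theorem says, roughly, that only independence-satisfying agents are immune to this, which is why the result reads as a barrier to non-expected-utility designs.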

Hammond's consistency result is where the real debate will land. Kendiukhov's post lays out resolute choice (commit to the ex-ante plan and execute it at every later node) and sophisticated choice (backward-induct, predicting reversals like the one in the sketch above and planning around them) as candidate solutions. If either can satisfy Hammond's constraint in the sequential case, the argument for treating independence as mandatory in AI agent design loses its firmest technical footing.