Lance Fortnow, writing on Computational Complexity, makes a point that sounds wrong at first but clicks when you think about it. Machine learning works because it doesn't have to be right. He borrows the idea from networking. Jim Kurose once said "The Internet works so well because it doesn't have to." The IP layer makes zero promises about delivery. Complete failure still satisfies the protocol. TCP just retries when IP fails, and even TCP can give up and tell the layer above that it couldn't deliver.
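To make the layering concrete, here's a minimal toy sketch of that arrangement. The function names (`unreliable_send`, `reliable_send`) are hypothetical stand-ins for illustration, not real networking code: the lower layer is allowed to drop things silently, and the layer on top retries and reports failure upward when it runs out of patience.

```python
import random

def unreliable_send(packet, loss_rate=0.3):
    """Stand-in for the IP layer: best-effort delivery.
    Dropping the packet entirely still satisfies the contract."""
    return None if random.random() < loss_rate else packet

def reliable_send(packet, max_retries=5):
    """Stand-in for TCP: retry over the unreliable layer, and if every
    attempt fails, give up and tell the layer above."""
    for attempt in range(1, max_retries + 1):
        if unreliable_send(packet) is not None:
            return attempt  # delivered on this attempt
    raise TimeoutError("could not deliver; the layer above must decide what to do")

print(reliable_send(b"hello"))
```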

The same principle applies to neural networks. The softmax function turns raw outputs into probabilities and never assigns zero probability to anything; it always leaves a tiny chance for every possibility. When a problem is genuinely hard, the model spreads probability across several options instead of forcing one answer. Fortnow argues this is a feature. By letting models be wrong sometimes, you give them flexibility to be right more often on problems that matter.
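A quick sketch of that property, using NumPy (the specific logits are made up; the printed values are approximate):

```python
import numpy as np

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution.
    Subtracting the max is the usual numerical-stability trick."""
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

# Even a score that trails the leader by a lot gets a nonzero probability.
probs = softmax(np.array([5.0, 2.0, -3.0]))
print(probs)               # roughly [0.952, 0.047, 0.0003]
print(probs.sum())         # 1.0
print((probs > 0).all())   # True: nothing is ever assigned exactly zero
```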

The comments pushed back in useful ways. One commenter pointed out that current AI systems burn through massive datacenter compute to solve math problems a decent undergraduate could handle directly. They suggested tracking the number of failed paths a system explores as an intelligence metric: fewer dead ends mean more efficient reasoning. Fortnow also noted he doesn't think explainability is worth the capability trade-offs in most cases.

But flexibility has limits when you're driving a car or diagnosing a patient. Internet packets can be retransmitted. A misclassified stop sign can't. Standards like ISO 26262 for automotive and DO-178C for aerospace demand deterministic guarantees that probabilistic models can't provide. Approaches like conformal prediction try to build uncertainty bounds around model outputs, but the tension between flexibility and safety isn't going away. The real question is where "good enough" stops being good enough.
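For readers unfamiliar with conformal prediction, here is a minimal sketch of the split conformal recipe for a classifier. The calibration numbers are invented for illustration and the helper names are hypothetical; real use needs a much larger calibration set, and this is one common formulation rather than the only one.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal prediction, classification form: score each held-out
    calibration example by 1 - p(true class), then pick a quantile so that
    future prediction sets contain the truth about (1 - alpha) of the time."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    scores_sorted = np.sort(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))) - 1, n - 1)
    return scores_sorted[k]

def prediction_set(probs, q):
    """Return every class whose score clears the calibrated threshold.
    Hard inputs naturally get larger sets -- uncertainty made explicit."""
    return np.where(1.0 - probs <= q)[0]

# Toy calibration data: 3 classes, 5 examples (far too few in practice).
cal_probs = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.3, 0.3, 0.4],
                      [0.6, 0.3, 0.1],
                      [0.2, 0.2, 0.6]])
cal_labels = np.array([0, 1, 2, 0, 2])

q = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(np.array([0.5, 0.3, 0.2]), q))  # set of plausible classes
```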