Researchers at the Wharton School, Steven Shaw and Gideon Nave, ran a study that should make anyone who uses AI assistants nervous. They gave 1,372 participants a test with access to a chatbot, then deliberately had the AI give wrong answers. Participants accepted those wrong answers 80% of the time. Worse, they were 11.7% more confident in their responses than people who worked without AI, even though the AI was feeding them bad information.
The researchers call this "cognitive surrender," and they argue it marks the emergence of "System 3" thinking. Daniel Kahneman's famous framework describes System 1 as fast, intuitive processing and System 2 as slow, deliberative reasoning. System 3 is what happens when we offload cognition to AI entirely, treating it as an external processing resource we integrate into decisions with, as the authors write, "minimal friction or skepticism." The efficiency gain is real. So is the vulnerability.
This isn't entirely new territory. Aviation researchers have studied "automation bias" for decades, documenting how pilots grow complacent with autopilot systems. The 2009 crash of Air France Flight 447 happened in part because the crew responded incorrectly when the autopilot disengaged. Doctors show similar patterns with clinical decision support systems. The difference now is scale. When a pilot surrenders cognition to automation, one plane is at risk. When millions of knowledge workers surrender cognition to chatbots, the scope of potential failure expands dramatically. We're already seeing professionals delegate meaningful decisions to AI tools while learning nothing in the process.