College instructor Scott K. Johnson has taught Earth science for years. He used to love it. Now he calls the job "mostly miserable." A College Board survey found that 84 percent of high school students have used generative AI for schoolwork. But sheer volume isn't the issue; the nature of cheating itself has changed. Instructors spend huge chunks of time acting as detectives, sorting through what Johnson describes as 256 shades of gray to figure out whether a student actually learned something or just laundered an LLM's output. Even his engaged students might not be what they seem.
The core issue is friction. Learning requires struggle: working through a problem, failing, and trying again is where understanding happens. As Hank Green yelled at OpenAI CEO Sam Altman in a recent video: "The friction matters, Sam!" Using an LLM to write your essay is like driving a forklift into a weight room. The weights get lifted, but nothing is accomplished. Johnson's own data backs this up. A critical-thinking question he has used since 2019 saw success rates jump from one in three to over half after ChatGPT appeared, and ChatGPT's characteristic phrasing started showing up in student answers. The tool, not the students, is what changed.
Detection tools like Turnitin and GPTZero try to separate human writing from machine output using statistical metrics such as perplexity (how predictable the text is to a language model) and burstiness (how much sentence structure varies). They're unreliable. Studies show they disproportionately flag writing by non-native English speakers. Meanwhile, a whole market of evasion tools (StealthGPT, Undetectable.ai, HideGPT) exists to help students beat the detectors. OpenAI killed its own AI text classifier in July 2023, citing its low rate of accuracy. The company that makes ChatGPT can't reliably detect its own output. Instructors don't stand a chance.
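To make the intuition behind these metrics concrete, here is a toy sketch, not how Turnitin or GPTZero actually work: burstiness approximated as the spread of sentence lengths, and perplexity stood in for by word surprisal under a simple unigram model with add-one smoothing (real detectors use an LLM's token probabilities). The function names and thresholds here are illustrative inventions.

```python
import math
import re
from collections import Counter

def sentences(text):
    # Naive sentence split on ., !, or ? followed by whitespace.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def burstiness(text):
    # Standard deviation of sentence lengths (in words). Human prose
    # tends to mix short and long sentences; LLM output is often more
    # uniform, which yields a lower score under this crude measure.
    lengths = [len(s.split()) for s in sentences(text)]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(var)

def unigram_perplexity(text, corpus):
    # Crude stand-in for model perplexity: how surprising the text's
    # words are under a unigram model fit on a reference corpus, with
    # add-one smoothing so unseen words don't zero out the probability.
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 for the unseen-word bucket
    words = text.lower().split()
    log_prob = sum(
        math.log((counts.get(w, 0) + 1) / (total + vocab)) for w in words
    )
    # Perplexity is the exponentiated average negative log probability.
    return math.exp(-log_prob / max(len(words), 1))
```

Even this toy version hints at the fairness problem: a non-native speaker writing in simpler, more regular sentences scores "more AI-like" on both measures, with no cheating involved.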
The stakes are specific and concrete. Asynchronous online classes serve students with physical disabilities, rural students, and working parents. Those courses can't easily switch to oral exams or supervised handwritten tests. Even for in-person classes, anti-cheating concessions often reduce pedagogical quality. One Hacker News commenter argued the real problem is the expectation that education could be scaled through online methods at all. For genuinely curious students, LLMs might actually help by making tutoring more accessible. But for everyone else, the system is breaking, and no fix has emerged yet.