A blog post by purplesyringa asks an uncomfortable question: what if LLMs make programming less accessible, not more? The piece, triggered by reports of Mythos, a private LLM that can allegedly find zero-day vulnerabilities, argues we're sleepwalking back toward computing's mainframe era. Back then, a compiler like Watcom C/C++ cost $1,000 (roughly $2,500 today). Only big companies and universities could afford serious tools. Hobbyists were shut out.
The author knows what they're talking about. They learned to code on discarded hardware, starting with QBasic on an actual DOS machine in the early 2010s. No English skills, no documentation, just trial and error. Later came PHP tutorials, C++ from blog posts, web development from free Russian-language sites. The whole education cost exactly $0 and ran on hand-me-down hardware like a Pentium II with Windows XP, kept in service long past its intended lifetime. GCC, Apache, GitLab CI/CD, Heroku's free tier. The free software ecosystem made it possible for a kid with no money and no connections to become a professional developer.
LLMs don't work like that. You can't run a useful coding agent on ancient hardware and simply wait longer for the results, the way you once could with a slow compiler. Without a decent GPU and enough RAM, local models are unusable, period. Cloud APIs cost money every month. The author points out that for someone in their childhood situation, the difference between $0 and $1 was the difference between possible and impossible. A middle schooler can learn Python on a family iPad. They can't learn vibecoding.
The counterargument exists. Open-weight models like Llama 3 (8B) and Mistral 7B can run on consumer gaming GPUs with 8 GB of VRAM thanks to quantization techniques like GPTQ and AWQ. Tools like Ollama and LM Studio make local deployment almost painless. Apple Silicon machines with unified memory can handle even larger models. But that still requires hardware many people don't have. The barrier is lower than in the mainframe era, sure. It's just not zero.
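To make "almost painless" concrete, here is a minimal sketch of what the local route looks like in practice: a few lines of Python talking to Ollama's HTTP API on the default port. It assumes Ollama is already installed and running, and that a quantized model such as "llama3:8b" has been pulled; the model name and prompt are illustrative, not prescriptive.

```python
# Minimal sketch: query a locally hosted model through Ollama's HTTP API.
# Assumes the Ollama server is running on its default port (11434) and that
# a quantized model like "llama3:8b" has already been pulled.
import json
import urllib.request


def ask_local_model(prompt: str, model: str = "llama3:8b") -> str:
    """Send a single prompt to the local Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request one complete response instead of a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]


if __name__ == "__main__":
    print(ask_local_model("Explain what a pointer is, in two sentences."))
```

The code itself is trivial; the catch is everything it presupposes. Before the first line runs, you need a machine with enough VRAM or unified memory to hold the model at all, which is exactly the hardware floor the post is worried about.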