Orhun Parmaksız tried letting OpenAI's Codex run the show while building cargo-tree-tui, a Rust terminal interface. It didn't go well. He felt lost and confused, staring at code he didn't write and couldn't fully explain. So he shifted to reviewing every AI-generated commit line by line. That worked better, until it didn't: constant code review is tedious. Now he uses AI for the boring tasks and writes the fun parts himself. It's a practical middle ground that more developers are settling into as the initial hype around AI-assisted development meets reality.

AI-assisted projects are flooding into open source. Parmaksız admits he can't keep up with new tools anymore, and he suspects the lowered barrier to entry is why. Some of these projects work fine. Others feel shaky, built by people who may not understand what they're shipping. He doesn't blame developers for using AI, but he believes they need to own the quality of what they release. A Hacker News commenter put it plainly: writing code is rarely the bottleneck. The thinking and design matter more than speed.

Then there's the licensing mess. The U.S. Copyright Office says purely AI-generated code can't be copyrighted, which could dump it straight into the public domain. Meanwhile, the training data behind these tools is under legal fire. Class-action lawsuits like Doe v. GitHub allege that training on GPL-licensed repos and reproducing their code without attribution amounts to software piracy. If an AI model trained on GPL code spits out substantially similar code and you put it in a proprietary project, you might be legally forced to open source the whole thing. The Software Freedom Conservancy has criticized GitHub Copilot specifically for stripping license attributions. And the developer who accepts that code is the one left holding the bag for compliance.