Zerx Lab dropped OpenWarp, a community fork of the Warp terminal that does something Warp itself won't yet do: let you plug in whatever AI provider you want. DeepSeek, Ollama, OpenAI, Anthropic, or local models running through LM Studio.
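OpenWarp's internals aren't documented in this piece, but "plug in any provider" usually comes down to a small abstraction each backend implements, with the active one picked by name from config. Here's a rough, hypothetical sketch in Rust (Warp's implementation language); every identifier below is illustrative, not the project's actual API.

```rust
// Hypothetical sketch only: OpenWarp's real provider abstraction isn't
// shown in the article. All names here are illustrative.
use std::collections::HashMap;

struct CompletionRequest {
    system_prompt: String,
    user_message: String,
}

// One implementation per backend (OpenAI, Anthropic, DeepSeek, Ollama,
// LM Studio...), so the active provider is a config choice, not a code change.
trait CompletionProvider {
    fn name(&self) -> &'static str;
    fn complete(&self, req: &CompletionRequest) -> Result<String, String>;
}

// Stand-in backend that just echoes, in place of a real HTTP client.
struct EchoProvider;

impl CompletionProvider for EchoProvider {
    fn name(&self) -> &'static str {
        "echo"
    }
    fn complete(&self, req: &CompletionRequest) -> Result<String, String> {
        Ok(format!("[{}] {}", req.system_prompt, req.user_message))
    }
}

// Registry keyed by provider name; a config file would pick the default.
struct ProviderRegistry {
    providers: HashMap<&'static str, Box<dyn CompletionProvider>>,
}

impl ProviderRegistry {
    fn new() -> Self {
        Self { providers: HashMap::new() }
    }
    fn register(&mut self, p: Box<dyn CompletionProvider>) {
        self.providers.insert(p.name(), p);
    }
    fn complete_with(&self, name: &str, req: &CompletionRequest) -> Result<String, String> {
        self.providers
            .get(name)
            .ok_or_else(|| format!("unknown provider: {name}"))?
            .complete(req)
    }
}

fn main() {
    let mut registry = ProviderRegistry::new();
    registry.register(Box::new(EchoProvider));

    let req = CompletionRequest {
        system_prompt: "You are a terminal assistant.".into(),
        user_message: "explain this error".into(),
    };
    println!("{}", registry.complete_with("echo", &req).unwrap());
}
```

In a real build each implementation would wrap a vendor HTTP API or a local server endpoint; the point is that adding another backend shouldn't require touching anything above the trait.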
The fork uses minijinja templates for system prompts, so context variables like the working directory and language get injected on the fly. That's a genuine improvement over hardcoded prompts.
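The article doesn't reproduce OpenWarp's templates, but minijinja is a well-known Rust implementation of Jinja2-style templating, so the mechanism is easy to illustrate. A minimal sketch (requires the `minijinja` crate); the variable names `cwd` and `language` are assumptions, not the fork's actual context keys.

```rust
use minijinja::{context, Environment};

fn main() {
    let mut env = Environment::new();

    // The system prompt is a template, not a hardcoded string.
    // Variable names (cwd, language) are illustrative, not OpenWarp's.
    env.add_template(
        "system_prompt",
        "You are a terminal assistant.\n\
         Current working directory: {{ cwd }}\n\
         Reply in {{ language }}.",
    )
    .expect("template should parse");

    // Fill in the context at request time, so the prompt tracks the session.
    let prompt = env
        .get_template("system_prompt")
        .unwrap()
        .render(context! {
            cwd => "/home/user/project",
            language => "English",
        })
        .expect("render should succeed");

    println!("{prompt}");
}
```

The prompt text lives in data rather than code, so changing what gets injected, or the whole template, doesn't require a rebuild.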
The project keeps Warp's full UX intact, including blocks, workflows, and keybindings. It adds native multi-language support for Chinese, English, Japanese, and Spanish. Licensed AGPL-3.0/MIT, matching Warp's upstream.
Warp founder Zach Lloyd responded to the Hacker News thread, saying the official team is actively exploring bring-your-own-model support. That's the real story here. A community fork forced the conversation. Some commenters griped about the trademark implications of keeping "Warp" in the name, and fair enough, that's a legitimate concern. But the demand is clearly real. People want local model support and they want it now, not on a product roadmap.
OpenWarp is still early, with no formal release yet; you have to clone and build from source. For developers already running local models through Ollama or paying for DeepSeek's API, terminal-native access that doesn't route through Warp's cloud is the whole point.