The math keeps producing the same tool. A $20 ChatGPT Plus subscription or a ~$200 Pro plan delivers effectively unlimited frontier model access; the same compute billed through OpenAI's API, at $1.75 per million input tokens and $14.00 per million output for GPT-5.2, can cost an order of magnitude more. EvanZhouDev's openai-oauth is the latest attempt to bridge that gap: an unofficial CLI tool and Vercel AI SDK provider that routes OpenAI API calls through the same OAuth tokens used by OpenAI's own Codex CLI.
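The arithmetic is easy to sketch. Using the per-token prices above, a few lines show where API billing overtakes a flat subscription; the monthly workload figures here are hypothetical, chosen only to illustrate the gap:

```typescript
// GPT-5.2 API list prices cited above, in USD per million tokens.
const INPUT_PER_M = 1.75;
const OUTPUT_PER_M = 14.0;

// Cost of a workload billed through the API.
function apiCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * INPUT_PER_M + (outputTokens / 1e6) * OUTPUT_PER_M;
}

// Hypothetical heavy agent month: 40M input tokens, 5M output tokens.
// 40 * 1.75 + 5 * 14.00 = 140 dollars, versus $20 Plus or $200 Pro flat.
const monthly = apiCostUSD(40e6, 5e6);
console.log(`API billing: $${monthly.toFixed(2)} for the same compute`);
```

At even modest agentic volumes the API bill clears the Plus price in days, which is the gap every one of these proxy tools exists to arbitrage.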
Running via `npx openai-oauth` spins up an OpenAI-compatible REST proxy at localhost that tunnels requests to `chatgpt.com/backend-api/codex/responses`, pre-authenticated using credentials stored by the official Codex CLI at `~/.codex/auth.json`. The result is a drop-in replacement for a paid API key, operating within a ChatGPT account's Codex rate limits and supporting chat completions, streaming, tool calls, and reasoning traces. The project ships as a monorepo with a companion `openai-oauth-provider` package for Vercel AI SDK users; TypeScript developers can swap in `createOpenAIOAuth()` in place of the standard OpenAI provider with minimal code changes.
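In practice, "OpenAI-compatible" means any client that can change its base URL can target the proxy. A minimal sketch of what such a request looks like; the port and the `/v1/chat/completions` route are the conventional OpenAI-compatible defaults, not values documented by this project:

```typescript
// Builds an OpenAI-style chat completion request against a local proxy.
// The port (8080) is an assumption for illustration, not the tool's default.
function buildChatRequest(baseUrl: string, model: string, prompt: string) {
  return {
    url: `${baseUrl}/v1/chat/completions`, // conventional OpenAI-compatible route
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
        stream: true, // the proxy supports streaming, per the project docs
      }),
    },
  };
}

// Usage, with the proxy running locally:
// const { url, init } = buildChatRequest("http://localhost:8080", "gpt-5.2", "hello");
// const res = await fetch(url, init);
```

Vercel AI SDK users skip the raw HTTP entirely and import `createOpenAIOAuth()` from the companion `openai-oauth-provider` package, as the README describes.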
This has happened before, twice at scale. acheong08/ChatGPT, distributed as the PyPI package revChatGPT, accumulated over 10,900 GitHub stars before going inactive. Then came gpt4free, which reverse-engineered third-party sites proxying GPT-4. The economic incentive behind both is identical to the one behind openai-oauth: OpenAI prices frontier compute through its API high enough, relative to consumer subscriptions, that developers keep finding the engineering effort worthwhile. <a href="/news/2026-03-15-readingisfun-epub-reader-multi-agent-auth">The same dynamic has emerged with ReadingIsFun</a>, which reuses OAuth tokens from Copilot, Gemini, and Codex subscriptions to avoid API costs. Codex usage limits sharpen the frustration further; community forum posts document even Pro-tier subscribers exhausting their weekly Codex allowance in a single afternoon.
The pricing gap looks even wider against alternatives. xAI's Grok 4.1 runs $0.20 per million input tokens and $0.50 per million output — roughly one-ninth and one-twenty-eighth of GPT-5.2 pricing respectively. Groq offers a free tier at 14,400 requests per day. OpenRouter aggregates over 100 providers, including 30-plus free models, under a single endpoint. Ollama and LM Studio eliminate per-token costs entirely. In that context, openai-oauth reads less as a novel exploit than as a pressure gauge on how developers rate OpenAI's pricing fairness.
Longevity is the central question. Hacker News commenters responding to the Show HN post were broadly skeptical: the dominant view is that OpenAI will detect traffic patterns on the Codex endpoint that don't match the behavioral fingerprint of the official CLI, then revoke or restrict the OAuth client ID the proxy depends on. The project's own documentation acknowledges it is unofficial, unsupported, and carries clear Terms of Service risk, cautioning users against running it as a hosted service or pooling tokens. Configuration options — including overrides for the OAuth client ID, token URL, and upstream base URL — suggest EvanZhouDev anticipated the need to adapt quickly as OpenAI adjusts its backend. The window is widely expected to be short; the only open question is whether it closes in days or months.
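Those override hooks imply a configuration surface roughly like the following. This is a sketch only: the option names (`clientId`, `tokenUrl`, `baseURL`) are illustrative guesses at the overrides the documentation describes, not the package's actual API, and only the upstream default comes from the article itself:

```typescript
// Hypothetical option shape; consult the project README for the real names.
interface OpenAIOAuthConfig {
  clientId?: string; // override the OAuth client ID the proxy presents
  tokenUrl?: string; // override the token-refresh endpoint
  baseURL?: string;  // override the upstream Codex backend URL
}

// The upstream endpoint the article names; other defaults are not public here.
const DEFAULTS: OpenAIOAuthConfig = {
  baseURL: "https://chatgpt.com/backend-api/codex/responses",
};

// Merge user overrides onto defaults, overrides winning.
function resolveConfig(overrides: OpenAIOAuthConfig = {}): OpenAIOAuthConfig {
  return { ...DEFAULTS, ...overrides };
}
```

The design point is the interesting part: every value OpenAI is likely to change on its side is externalized, so the proxy can chase backend changes with a config edit rather than a release.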