OpenAI has moved Codex to token-based pricing for ChatGPT Business and new Enterprise customers, ditching the old per-message model. The switch took effect April 2, 2026. Credits now map to what you actually consume: input tokens, cached input tokens, and output tokens. Pricing varies by model: GPT-5.4 runs 62.5 credits per million input tokens and 375 per million output tokens, while GPT-5.1-Codex-mini comes in at 6.25 and 50, respectively.
The token-based approach gives you clearer visibility into where your credits go. Cached input tokens cost roughly 10% of the standard input rate, which rewards efficient prompting and reuse. Fast mode doubles your credit consumption. OpenAI estimates the average developer burns $100-200 monthly, though that swings wildly based on which models you pick and how heavily you lean on automation.
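To make the arithmetic concrete, here's a minimal sketch of what a per-task credit estimate could look like under the new rates. The rate table mirrors the figures above; the `estimate_credits` helper, the model keys, and the example token counts are hypothetical illustrations, not an official OpenAI calculator.

```python
# Rough sketch of per-task credit costs under token-based pricing.
# Rates are credits per million tokens, taken from the figures above.
# Cached input is assumed to be ~10% of the standard input rate.
RATES = {
    "gpt-5.4":            {"input": 62.5, "cached": 6.25,  "output": 375.0},
    "gpt-5.1-codex-mini": {"input": 6.25, "cached": 0.625, "output": 50.0},
}

def estimate_credits(model: str, input_toks: int, cached_toks: int,
                     output_toks: int, fast_mode: bool = False) -> float:
    """Estimate credit cost for a single task under token-based pricing."""
    r = RATES[model]
    credits = (input_toks * r["input"]
               + cached_toks * r["cached"]
               + output_toks * r["output"]) / 1_000_000
    # Fast mode doubles credit consumption.
    return credits * 2 if fast_mode else credits

# Example: 40k fresh input tokens, 100k cached tokens, 8k output tokens.
print(estimate_credits("gpt-5.4", 40_000, 100_000, 8_000))             # 6.125 credits
print(estimate_credits("gpt-5.1-codex-mini", 40_000, 100_000, 8_000))  # ~0.71 credits
```

The same workload costs roughly an order of magnitude less on the mini model, which is where the "swings wildly based on which models you pick" caveat bites.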
Plus and Pro users, along with existing Enterprise and Edu customers, stay on the legacy per-message rate card for now; OpenAI will email Enterprise admins with migration timelines in the coming weeks. The old model charged roughly 7 credits per message for local tasks on GPT-5.4 and 34 credits for cloud tasks.
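For a rough sense of how the old flat rate compares, here's a hedged example reusing the `estimate_credits` sketch above. The task sizes are invented for illustration; real break-even points depend on your actual token volumes.

```python
# Hypothetical comparison against the legacy flat rate of ~7 credits per
# local GPT-5.4 task. Token counts are made-up illustrations, not measurements.
LEGACY_LOCAL_TASK = 7.0  # credits per message under the old rate card

small_task = estimate_credits("gpt-5.4", 20_000, 0, 2_000)    # 2.0 credits
large_task = estimate_credits("gpt-5.4", 150_000, 0, 10_000)  # ~13.1 credits

print(small_task < LEGACY_LOCAL_TASK)  # True:  cheaper than the old flat rate
print(large_task < LEGACY_LOCAL_TASK)  # False: pricier than the old flat rate
```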
The shift exposes what code generation actually costs at the token level. GitHub Copilot charges a flat monthly fee for unlimited use, which means Microsoft's either subsidizing heavy users or absorbing margin pressure as usage scales. OpenAI's token-based model passes those costs through directly. Whether that's better for you depends entirely on your usage pattern.