TL;DR
- Pricing Change: OpenAI replaced fixed per-seat Codex licenses with pay-as-you-go token billing for ChatGPT Business and Enterprise plans on April 3, 2026.
- Lower Base Cost: The standard ChatGPT Business seat price dropped from $25 to $20 per month, with dedicated Codex-only seats billed by token consumption.
- Rapid Growth: Codex usage has grown sixfold since January 2026, with more than two million developers using the tool weekly, according to OpenAI.
- Competitive Context: OpenAI is targeting GitHub Copilot and Claude Code, with Codex generating just over $1 billion in annualized revenue by January 2026.
- Enterprise Incentive: Eligible workspaces receive up to $500 in promotional credits per Codex-only member, with up to five members per workspace.
OpenAI is replacing fixed per-seat Codex licenses with pay-as-you-go pricing across its ChatGPT Business and Enterprise plans on April 3, while cutting the base Business subscription from $25 to $20 per month. That move runs counter to competitors such as GitHub Copilot and Cursor, which still charge per seat.
More than two million developers now use Codex weekly, according to OpenAI, and over nine million paying business users rely on ChatGPT for work. Codex Business and Enterprise usage has grown sixfold since January 2026, and token-based billing now positions OpenAI directly against Copilot and Cursor.
How the New Pricing Works
Under the restructured model, Business and Enterprise customers choose between two seat types. Standard ChatGPT Business seats, now at the reduced monthly rate, continue to include Codex with a usage cap. Teams requiring heavier Codex access can add dedicated Codex-only seats, which bill based on token consumption without rate limits.
Additionally, admins can enable Codex access across their workspace without purchasing additional licenses, paying only for actual usage. “This model gives organizations a simpler way to support that motion inside a managed workspace,” OpenAI stated.
For teams that exceed the included cap on standard seats, OpenAI offers pay-per-token access beyond the bundled allowance. Standard Business seats retain their Codex usage limits for organizations that need broad ChatGPT access without heavy coding workloads.
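As a rough illustration of how the two seat types compare, the arithmetic can be sketched as follows. The token price, included allowance, and overage rate below are hypothetical placeholders for illustration only; OpenAI has not published these figures here, and only the $20 seat price comes from the article.

```python
# Hypothetical cost model for the two seat types described above.
# All per-token rates and the included allowance are illustrative
# placeholders, NOT OpenAI's published pricing.

STANDARD_SEAT_PRICE = 20.00      # $/month per standard Business seat (from the article)
INCLUDED_TOKENS = 5_000_000      # hypothetical monthly Codex allowance per standard seat
OVERAGE_PER_MILLION = 4.00       # hypothetical $ per million tokens beyond the allowance
CODEX_ONLY_PER_MILLION = 4.00    # hypothetical $ per million tokens on a Codex-only seat

def standard_seat_cost(tokens_used: int) -> float:
    """Flat seat price plus pay-per-token overage beyond the included cap."""
    overage = max(0, tokens_used - INCLUDED_TOKENS)
    return STANDARD_SEAT_PRICE + overage / 1_000_000 * OVERAGE_PER_MILLION

def codex_only_seat_cost(tokens_used: int) -> float:
    """Pure usage-based billing: no base fee and no cap."""
    return tokens_used / 1_000_000 * CODEX_ONLY_PER_MILLION

# Compare the two seat types at light, moderate, and heavy usage.
for tokens in (1_000_000, 5_000_000, 20_000_000):
    print(tokens, standard_seat_cost(tokens), codex_only_seat_cost(tokens))
```

Under these assumed rates, a light user is cheaper on a standard seat, while the break-even point for a Codex-only seat depends entirely on where OpenAI sets the allowance and per-token price.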
As a result, enterprises gain granular visibility into how individual teams and projects use the coding tool. OpenAI said the structure provides clearer cost tracking across budgets, workflows, and teams. That level of detail is simply unavailable under flat per-seat models.
To accelerate adoption, OpenAI is offering eligible Business workspaces up to $500 in promotional credits per Codex-only member, for up to five members per workspace, for a limited time. That gives engineering teams a low-risk entry point before committing to ongoing token-based billing.
Moreover, Enterprise customers receive additional flexibility through credit pools that can be allocated across departments. Rather than budgeting per developer, IT administrators can set spending limits at the team or project level, distributing Codex access based on actual demand rather than anticipated headcount. For organizations already running ChatGPT Enterprise, adding Codex-only seats requires no contract changes: just an admin toggle and a billing adjustment.
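The credit-pool idea described above can be sketched as a simple allocator. The team names, demand figures, and caps below are invented for illustration; OpenAI's actual admin controls may expose this differently.

```python
# Hypothetical sketch of distributing a shared Codex credit pool across
# teams, honoring admin-set per-team spending limits. All names and
# numbers are invented; this is not OpenAI's admin API.

def allocate_pool(pool: float, demands: dict[str, float],
                  caps: dict[str, float]) -> dict[str, float]:
    """Grant each team the lesser of its demand and its admin-set cap,
    in order, until the shared pool runs out."""
    grants = {}
    remaining = pool
    for team, demand in demands.items():
        grant = min(demand, caps.get(team, demand), remaining)
        grants[team] = grant
        remaining -= grant
    return grants

demands = {"platform": 600.0, "mobile": 300.0, "data": 400.0}
caps = {"platform": 500.0, "mobile": 500.0, "data": 500.0}
print(allocate_pool(1000.0, demands, caps))
# platform is capped at 500, mobile gets its full 300,
# and data gets the remaining 200 once the pool is exhausted.
```

The point of the sketch is the shape of the control: spending limits live at the team level, so access follows actual demand rather than anticipated headcount.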
Competitive Positioning and Growth
OpenAI views Anthropic’s Claude Code as its biggest competitor in AI coding tools. In a March 2026 interview with WIRED, CEO Sam Altman described the market opportunity as enormous, calling it “one of these rare multitrillion-dollar markets.”
He also acknowledged competitive pressure: “First to market is worth a lot. We had that with ChatGPT.”
Despite Codex’s rapid growth, OpenAI is playing catch-up on revenue. WIRED reported that Claude Code accounts for nearly a fifth of Anthropic’s business, generating more than $2.5 billion in annualized revenue. Codex starts at $200/month compared to roughly half that for Claude Code’s Max plan, making the pricing shift a strategic tool to close that revenue gap.
Meanwhile, according to GitHub, Copilot reached 20 million cumulative users by July 2025, with 90% of Fortune 500 companies adopting the tool. Copilot’s deep integration with Visual Studio Code and GitHub’s pull request workflow gives it an entrenched position that pricing alone may not dislodge. OpenAI’s bet is that removing the per-seat barrier will attract enterprises currently running small Copilot pilots but reluctant to expand their spend.
Building on this, sixfold usage growth since January suggests that demand for Codex within existing Business accounts is outpacing what fixed seat allocations can accommodate. Switching to token-based billing removes the ceiling on that growth while generating revenue from every additional task enterprises route through the tool.
In contrast to per-seat offerings like Copilot’s plans and Cursor’s Ultra tier, which lock customers into fixed pricing, OpenAI is letting enterprises scale Codex usage without proportional seat costs. Early adopters including Notion and Ramp are already using Codex in their engineering workflows, according to OpenAI.
Notably, Notion co-founder Simon Last offered one reason for choosing Codex over alternatives. He told WIRED that Claude Code “just lies to me” and claims to be working when it is not.
Pricing flexibility also reflects a broader enterprise pivot at OpenAI. The company recently shut down its Sora video generator to redirect compute resources toward enterprise and coding tools. Enterprise revenue now accounts for over 40% of OpenAI’s business, and monthly revenue has crossed $2 billion.
OpenAI recently raised a record $122 billion, valuing the company at $852 billion post-money. That substantial new capital gives the company room to absorb short-term margin compression from usage-based pricing in exchange for faster enterprise adoption.
Enterprise Outlook
OpenAI first introduced a model called Codex in 2021 to power GitHub Copilot. Since then, it has evolved into an autonomous desktop coding agent with applications for macOS and Windows, plugins, and automations. OpenAI released Codex CLI under the Apache 2.0 open-source license in April 2025, then launched a plugin marketplace for Codex with enterprise controls in March 2026.
That progression, from an open-source CLI to a commercial enterprise service with Slack integration and a developer SDK, set the stage for today’s pricing restructure. According to a person with direct knowledge cited by WIRED, Codex was bringing in just over $1 billion in annualized revenue by the end of January 2026. Usage also grew 50% in a single week during the February 2026 desktop launch, underscoring the velocity of developer interest.
Moreover, real-world adoption is already demonstrating the tool’s value for cross-team engineering. Cisco Meraki tech lead Tres Wong-Godfrey described how Codex handled a complex handoff:
“I needed to update a codebase owned by another team for a feature launch. […] With Codex I was able to hand off the refactoring and test generation work and instead focus on other priorities. This produced fully tested, high-quality code that I could quickly hand off and ensure the feature launch stayed on schedule without adding new risk.”
Tres Wong-Godfrey, Tech Lead at Cisco Meraki (via OpenAI)
Building on that real-world evidence, OpenAI says the new pay-as-you-go pricing is aimed at enterprise adoption. By decoupling Codex from fixed seat costs, the company is targeting organizations that want to experiment with AI coding tools without committing to company-wide licenses. IT teams evaluating Codex can now provision access for individual projects, measure token consumption against productivity gains, and scale up only when the tool proves its value in production workflows.
As a result, for engineering leaders weighing Codex against Copilot or Claude Code, the calculus now shifts from per-developer licensing negotiations to a simpler question: how much coding work does the team routinely offload to AI? Usage-based billing rewards organizations that integrate Codex deeply into their development pipelines while penalizing those that provision seats and leave them idle.
However, whether the usage-based approach can close the gap with Claude Code and Copilot will depend on whether billing flexibility translates to long-term platform stickiness. Codex’s autonomous capabilities, including the ability to run multi-file refactors and generate test suites without developer supervision, give it a differentiated pitch for enterprise buyers who need more than autocomplete.
For now, OpenAI is wagering that enterprises will choose a model where costs scale with actual use rather than headcount, and the promotional credits provide a window to prove that bet before teams lock in their tooling choices for the year ahead.

