
OpenAI Launches GPT-5 Turbo — 3x Faster, Half the Cost

Michael Ouroumis · 2 min read

OpenAI has released GPT-5 Turbo, a leaner version of its flagship model that trades a small amount of peak capability for dramatically better speed and pricing. The model is aimed squarely at production developers who need frontier-quality responses without the latency and cost of the full GPT-5.

Speed and Pricing

The numbers are straightforward. GPT-5 Turbo delivers responses three times faster than GPT-5, with a time-to-first-token of under 200 milliseconds. API pricing drops to $2.50 per million input tokens and $10 per million output tokens — half the cost of standard GPT-5.
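At those rates, per-request cost is simple arithmetic. A minimal sketch of the math (the example token counts are illustrative assumptions, not figures from the announcement):

```python
# Quoted GPT-5 Turbo API rates, in USD per million tokens.
INPUT_RATE = 2.50
OUTPUT_RATE = 10.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single API call at GPT-5 Turbo pricing."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 2,000-token prompt that produces a 500-token completion.
print(f"${request_cost(2_000, 500):.4f}")  # $0.0100
```

Doubling both rates gives the standard GPT-5 price, so any workload costed this way is exactly halved by switching models.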

OpenAI achieved this through model distillation, a technique where a smaller model is trained to replicate the behavior of the larger one. The company says GPT-5 Turbo uses roughly 40% fewer parameters than GPT-5 while retaining most of its capabilities.

"The full GPT-5 is our research flagship. GPT-5 Turbo is what you ship to production," said Sam Altman during the announcement livestream.

Benchmark Performance

On standard benchmarks, GPT-5 Turbo scores within 2-3% of the full GPT-5 across reasoning, coding, math, and general knowledge tasks. On MMLU-Pro, it scores 89.1% compared to GPT-5's 91.4%. On HumanEval coding benchmarks, it achieves 93.2% versus 95.1%.

Where the gap widens is on the most demanding multi-step reasoning problems. On complex mathematical proofs and extended agentic coding tasks, the full GPT-5 maintains a more noticeable edge. For the vast majority of production use cases — customer support, content generation, data extraction, code completion — the difference is negligible.

Context Window

GPT-5 Turbo ships with a 256,000-token context window, matching the full GPT-5. OpenAI says there is no degradation in long-context retrieval accuracy, which was a common complaint with earlier Turbo variants.

Developer Reaction

The developer community has responded positively. Many teams had been using GPT-5 in development but switching to cheaper models for production due to cost constraints. GPT-5 Turbo eliminates that trade-off.

"We were spending $40K a month on GPT-5 API calls," said a startup CTO on X. "GPT-5 Turbo cuts that in half with no visible quality drop. This is what we were waiting for."

Competitive Pressure

The release puts pressure on Anthropic and Google, both of which charge premium rates for their flagship models. Anthropic's Claude Opus is priced at $15/$75 per million tokens, while Google's Gemini 3.1 Ultra sits at $12/$60. GPT-5 Turbo undercuts both significantly while claiming comparable performance.
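The gap is easiest to see side by side. A quick comparison using the published rates above (the 10M-input / 2M-output monthly workload is an illustrative assumption):

```python
# Published input/output rates, in USD per million tokens.
rates = {
    "GPT-5 Turbo": (2.50, 10.00),
    "Claude Opus": (15.00, 75.00),
    "Gemini 3.1 Ultra": (12.00, 60.00),
}

def workload_cost(model: str, m_in: float, m_out: float) -> float:
    """Cost in USD for m_in million input tokens and m_out million output tokens."""
    in_rate, out_rate = rates[model]
    return m_in * in_rate + m_out * out_rate

# Example workload: 10M input tokens and 2M output tokens per month.
for model in rates:
    print(f"{model}: ${workload_cost(model, 10, 2):,.2f}")
```

With that split, the workload comes to $45 on GPT-5 Turbo versus $300 on Claude Opus and $240 on Gemini 3.1 Ultra, a factor of five to nearly seven.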

OpenAI also announced that GPT-5 Turbo will replace GPT-4o as the default model in ChatGPT Free within the next two weeks, giving hundreds of millions of users access to near-frontier performance at no cost.

More in Models

Microsoft Releases Phi-4-Reasoning-Vision-15B: A Small Model That Knows When to Think
Microsoft open-sources Phi-4-reasoning-vision-15B, a compact 15B-parameter multimodal model that selectively activates chain-of-thought reasoning and rivals models many times its size.
8 hours ago · 2 min read

Anthropic Releases Claude Opus 4.6 — Its Most Capable Agentic Coding Model
Anthropic launches Claude Opus 4.6, a frontier model purpose-built for autonomous coding agents that can plan, execute, and debug multi-file projects with minimal human oversight.
1 day ago · 2 min read

Meta Releases Llama 4 Maverick With 400B Parameters Under Open Weights
Meta releases Llama 4 Maverick, a 400-billion parameter mixture-of-experts model under its open weights license, matching GPT-5 on key benchmarks and reigniting the open-source AI debate.
1 day ago · 2 min read