GPT-5.3-Codex-Spark Now 30% Faster — Over 1,200 Tokens/Second
OpenAI engineer Thibault Sottiaux announced that GPT-5.3-Codex-Spark has been made approximately 30% faster and now serves at over 1,200 tokens per second.
The speedup matters for code generation: faster inference directly improves responsiveness for code completion and real-time assistance. It is the latest step in OpenAI's ongoing optimization of inference efficiency for the Codex series.
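As a rough illustration of what the quoted figures mean for latency, the sketch below estimates how long a completion takes to stream at the new rate versus the implied previous rate. The completion length of 500 tokens is an illustrative assumption, not from the announcement, and real latency also includes time-to-first-token, which throughput alone does not capture.

```python
def generation_time(tokens: int, tokens_per_second: float) -> float:
    """Estimate wall-clock time to stream `tokens` at a given throughput."""
    return tokens / tokens_per_second

# Reported: ~1,200 tok/s now, roughly 30% faster than before.
new_rate = 1200.0
old_rate = new_rate / 1.3  # implied previous rate, ~923 tok/s

completion_tokens = 500  # illustrative completion length (assumption)
t_old = generation_time(completion_tokens, old_rate)
t_new = generation_time(completion_tokens, new_rate)
print(f"old: {t_old:.2f}s, new: {t_new:.2f}s")  # old: 0.54s, new: 0.42s
```

At these rates a medium-length completion drops from roughly half a second to just over 0.4 seconds, which is why throughput gains are felt most in interactive, keystroke-level assistance.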
A quote from Thibault Sottiaux, via Simon Willison's Weblog:
We’ve made GPT-5.3-Codex-Spark about 30% faster. It is now serving at over 1200 tokens per second.
— Thibault Sottiaux, OpenAI
Posted 21st February 2026 at 1:30 am
This is a quotation collected by Simon Willison, posted on 21st February 2026.
In-Depth Analysis and Industry Outlook
Viewed more broadly, the improvement reflects the accelerating movement of AI technology from research labs into industrial applications, and industry analysts widely expect 2026 to be a pivotal year for AI commercialization. On the technical front, large-model inference efficiency keeps improving while deployment costs decline, putting advanced AI capabilities within reach of more small and medium-sized enterprises. On the market front, enterprise expectations for AI investment returns are shifting from long-term strategic value toward short-term, quantifiable gains.
However, the rapid proliferation of AI also brings new challenges: increasing complexity of data privacy protection, growing demands for AI decision transparency, and difficulties in cross-border AI governance coordination. Regulatory authorities across multiple countries are closely monitoring these developments, attempting to balance innovation promotion with risk prevention. For investors, identifying AI companies with truly sustainable competitive advantages has become increasingly critical as the market transitions from hype to value validation.
From a supply chain perspective, the upstream infrastructure layer is experiencing consolidation and restructuring, with leading companies expanding competitive barriers through vertical integration. The midstream platform layer sees a flourishing open-source ecosystem that lowers barriers to AI application development. The downstream application layer shows accelerating AI penetration across traditional industries including finance, healthcare, education, and manufacturing.
Additionally, talent competition has become a critical bottleneck for AI industry development. The global war for top AI researchers is intensifying, with governments worldwide introducing policies to attract AI talent. Industry-academia collaborative innovation models are being promoted globally, with the potential to accelerate the industrialization of AI technology.