A new version of OpenAI’s Codex is powered by a dedicated chip

OpenAI has launched GPT-5.3-Codex-Spark, a lightweight coding model that runs on Cerebras' dedicated hardware for faster inference and real-time collaboration. The release marks a significant partnership with Cerebras and emphasizes low latency for a better user experience in rapid prototyping tasks.
Key Points
- OpenAI released a new version of its Codex tool, GPT-5.3-Codex-Spark.
- This new model is lighter and designed for faster inference than previous versions.
- The model is powered by Cerebras' Wafer Scale Engine 3, a chip with 4 trillion transistors.
- OpenAI's partnership with Cerebras involves a multi-year agreement worth over $10 billion.
- The aim is to improve real-time collaboration and rapid iteration on coding tasks.
- Codex-Spark targets low-latency workflows, making it well suited to interactive, productivity-focused coding.
- A research preview is currently available to ChatGPT Pro users in the Codex app.
Relevance
- The collaboration between OpenAI and Cerebras reflects a broader industry trend toward specialized hardware for machine learning.
- Cerebras recently raised $1 billion, signaling continued heavy investment in dedicated AI hardware.
- AI efficiency and real-time processing remain key trends in IT through 2025, with dedicated hardware becoming more prevalent.
The launch of GPT-5.3-Codex-Spark represents a notable step in AI development, pairing specialized hardware with optimized software to improve speed and user experience in coding tasks.
