
OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips

Source: Ars Technica


TL;DR

AI Generated

OpenAI has introduced GPT-5.3-Codex-Spark, a coding model that runs on Cerebras chips instead of Nvidia hardware and generates over 1,000 tokens per second, significantly faster than its predecessor. The model is optimized for speed, tailored specifically to coding tasks, and outperforms older versions on software engineering benchmarks. Codex-Spark is available to ChatGPT Pro subscribers and handles text-only coding tasks. OpenAI is also expanding API access to select partners, building on the full GPT-5.3-Codex model launched earlier.
