
Articles tagged with "AI, Accelerators, Processors"

SambaNova introduces new AI accelerator, partners with Intel to deploy Xeon CPUs for inferencing and agentic workloads — SambaNova claims SN50 chip is three times more efficient than Nvidia B200


SambaNova has introduced its SN50 AI processor for agentic inference and partnered with Intel to deploy Xeon CPUs for inferencing workloads. The company claims the SN50 is three times more efficient than Nvidia's B200, emphasizing low latency and low power consumption for AI inference. The collaboration with Intel aims to offer Xeon-based AI systems to enterprises and governments, targeting large-scale AI inference infrastructure. SambaNova also secured $350 million in Series E funding to expand its manufacturing and cloud capacity.

Tom's Hardware
MIT’s Survey On Accelerators and Processors for Inference, With Peak Performance And Power Comparisons


MIT Lincoln Laboratory Supercomputing Center published a technical paper titled "Lincoln AI Computing Survey (LAICS) and Trends," focusing on AI accelerators and processors for inference. The paper updates a survey of AI accelerators and processors over the past seven years, highlighting commercial accelerators with peak performance and power consumption data. The paper analyzes trends, market segments, and new computing architectures, presenting the information in scatter graphs. It also includes descriptions of new accelerators added to the survey and offers insights into the evolving landscape of AI computing technologies.
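The survey's core comparison, peak performance versus power consumption, reduces to a performance-per-watt metric. A minimal sketch of that ranking, using made-up placeholder numbers rather than any figures from the LAICS paper:

```python
# Illustrative only: ranking accelerators by peak performance per watt,
# the comparison at the heart of the LAICS scatter plots.
# The chip names and numbers below are hypothetical placeholders,
# not data from the survey.

accelerators = {
    "Chip A": {"peak_tops": 2000, "power_w": 1000},
    "Chip B": {"peak_tops": 600, "power_w": 150},
    "Chip C": {"peak_tops": 100, "power_w": 10},
}

def efficiency(spec):
    """Peak TOPS divided by power draw in watts."""
    return spec["peak_tops"] / spec["power_w"]

# Sort from most to least efficient, as a scatter plot's
# top-left-to-bottom-right diagonal would suggest visually.
ranked = sorted(accelerators.items(),
                key=lambda kv: efficiency(kv[1]),
                reverse=True)

for name, spec in ranked:
    print(f"{name}: {efficiency(spec):.1f} TOPS/W")
```

Plotting each chip as a (power, peak-performance) point on log-log axes, as the survey does, makes constant-efficiency lines appear as parallel diagonals, which is why the scatter-graph presentation works well for this data.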

SemiEngineering
Inside the AI accelerator arms race: AMD, Nvidia, and hyperscalers commit to annual releases through the decade


The AI industry is seeing an accelerating release cadence, with AMD, Nvidia, and major hyperscalers committing to annual launches of new AI accelerators through the late 2020s. Amazon, Google, Meta, Intel, Microsoft, and Nvidia are all working on next-generation processors for AI and HPC workloads. AMD's upcoming Instinct MI400-series processors, set to debut in the second half of 2026, will target AI and supercomputing applications, featuring technologies such as HBM4 memory and UALink connectivity. Meanwhile, Nvidia is preparing to introduce its Rubin GPUs for AI in late 2026, with subsequent releases like Rubin CPX and Rubin Ultra planned for 2027. OpenAI is also venturing into custom AI processors, with a reported $10 billion investment in new hardware.

Tom's Hardware

