AMD announces MI350P PCIe AI accelerator card with 144GB of HBM3E — roughly 40% faster in FP16 and FP8 theoretical compute compared to Nvidia's H200 NVL competitor
TL;DR
AMD has introduced the MI350P, a PCIe AI accelerator card with 144GB of HBM3E memory that it claims offers a significant performance boost over Nvidia's competing H200 NVL. The card features 128 CUs and a fanless, passively cooled design within a 600W power envelope, which can be dialed down to 450W for compatibility with a wider range of server configurations. It is based on AMD's CDNA4 architecture, built on TSMC's 3nm and 6nm FinFET processes, and supports the MXFP6 and MXFP4 formats for lower-precision operations. Designed for AI workloads, the MI350P delivers an estimated 2,299 TFLOPS, with a peak of 4,600 TFLOPS using MXFP4, positioning it as a strong competitor in the PCIe AI accelerator market.
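The headline numbers can be sanity-checked with a little arithmetic. This is a minimal sketch, assuming the 2,299 TFLOPS figure is the dense FP8 rate (the article does not say so explicitly) and using an H200 NVL dense FP8 figure of roughly 1,671 TFLOPS taken from Nvidia's public specs, not from this article:

```python
# Figures from the article; the H200 NVL number is an assumption drawn from
# Nvidia's published H200 NVL specs (dense FP8, no sparsity).
MI350P_DENSE = 2_299       # TFLOPS, per the article (assumed dense FP8/FP-equivalent rate)
MI350P_MXFP4_PEAK = 4_600  # TFLOPS, per the article
H200_NVL_FP8_DENSE = 1_671 # TFLOPS, assumed from Nvidia's H200 NVL datasheet

# Relative advantage: lines up with the headline's "roughly 40%" claim.
advantage = MI350P_DENSE / H200_NVL_FP8_DENSE - 1
print(f"dense advantage over H200 NVL: {advantage:.0%}")

# The peak figure is almost exactly double the dense one, consistent with the
# 2x structured-sparsity multiplier vendors typically quote for peak rates.
print(f"peak / dense ratio: {MI350P_MXFP4_PEAK / MI350P_DENSE:.2f}")
```

Under those assumptions the advantage works out to about 38%, which matches the "roughly 40%" wording, and the 4,600 peak is almost exactly 2x the dense figure.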