MFMIS FeTFETs For Energy-Efficient, Scalable CIM Hardware Accelerators (Seoul National University)

Source: SemiEngineering

TL;DR (AI Generated)

Researchers at Seoul National University have published a technical paper on the impact of random phase distribution on ferroelectric tunnel field-effect transistors (FeTFETs) for compute-in-memory (CIM) applications. Randomness in the ferroelectric phase distribution causes device-to-device variation in FeTFETs, which can be mitigated by a metal–ferroelectric–metal–insulator–semiconductor (MFMIS) structure: the internal metal layer helps equalize the channel potential and ensure uniform electrical characteristics. System-level simulations show that MFMIS FeTFETs achieve binary neural network accuracy comparable to dual-FeFET designs while offering superior energy and area efficiency, making them an energy-efficient and scalable option for CIM hardware.
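As context for the binary-neural-network workload mentioned above, the core operation a CIM crossbar evaluates is a binary multiply-accumulate, usually realized as XNOR-plus-popcount: each cell XNORs a stored weight bit with an input bit, and the bitline sums the results. The sketch below is a generic software illustration of that operation, not code from the paper; the function name and data layout are assumptions for clarity.

```python
# Illustrative sketch (not from the paper): the XNOR-popcount
# multiply-accumulate that binary-neural-network CIM arrays evaluate.
def bnn_dot(weights, activations):
    """Binary dot product with weights and activations in {-1, +1}.

    In a CIM array, each cell computes the XNOR of a stored weight
    bit and an input bit; the bitline current sums the results.
    """
    assert len(weights) == len(activations)
    # XNOR on sign bits: product is +1 when signs match, else -1.
    matches = sum(1 for w, a in zip(weights, activations) if w == a)
    # Popcount converted to a signed sum: matches minus mismatches.
    return 2 * matches - len(weights)

# Example: two matching and two mismatching positions cancel out.
w = [+1, -1, +1, +1]
x = [+1, +1, +1, -1]
print(bnn_dot(w, x))  # prints 0, same as sum(wi * xi)
```

Because the sum accumulates in the analog domain along a shared bitline, device-to-device variation shifts each cell's contribution, which is why the uniformity provided by the MFMIS structure matters for accuracy.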


Similar Articles

SemiEngineering

Panel-Level Packaging’s Second Wave Meets Engineering Reality

Panel-level packaging is gaining traction due to economic pressures and the growing size of AI accelerators and HPC packages. Glass substrates are being explored to address warpage and dimensional stability, but they introduce new failure modes that demand materials solutions. The obstacles in panel-level processing are materials and process-integration problems, not merely packaging problems, so solving them requires a holistic approach.

SemiEngineering

Inside the AI Accelerator: Essential IP Design Solutions: eBook

The eBook delves into how advanced IP, high-speed interconnects, memory interfaces, and multi-die architectures are utilized in next-gen AI accelerators to surpass single-chip limitations. It highlights the role of optical links in enhancing bandwidth and security IP in safeguarding AI data without compromising performance. The eBook also covers how technologies like UALink, PCIe, CXL, and Ultra Ethernet support scaling AI architectures, integrating compute, memory, and accelerators, and enhancing bandwidth density through optical I/O. The focus is on unlocking AI performance at scale and ensuring data security across accelerators.

SemiEngineering

Nvidia updates data center roadmap with Rosa CPU and stacked Feynman GPUs — optical NVLink, Groq LPUs with NVFP4, and NVLink also on deck

Nvidia unveiled updates to its data center roadmap at the GPU Technology Conference, introducing the Rosa CPU and stacked Feynman GPUs. The new GPUs will use die stacking and custom HBM memory, and the Rosa CPUs had not previously appeared on the roadmap. This year's plans include processors such as the Groq LP30 and BlueField-4, with further updates in 2027 and 2028 introducing advanced AI accelerators, LPUs, and optical NVLink switches. Nvidia aims to enhance performance and scalability with these new architectures and components, positioning itself competitively in the data center market.

Tom's Hardware
Sambanova introduces new AI accelerator, partners with Intel to deploy Xeon CPUs for inferencing and agentic workloads — Sambanova claims SN50 chip is three times more efficient than Nvidia B200

SambaNova has introduced its SN50 AI processor for agentic inference, emphasizing low latency and low power consumption, and claims the chip is three times more efficient than Nvidia's B200. Through a partnership with Intel, the company will pair the SN50 with Xeon CPUs to offer AI inference systems to enterprises and governments, targeting large-scale inference infrastructure. SambaNova also secured $350 million in Series E funding to expand its manufacturing and cloud capacity.

Tom's Hardware
