DeepSeek tests “sparse attention” to slash AI processing costs
Source: Ars Technica
TL;DR
DeepSeek, a Chinese AI company operating under export restrictions on advanced AI chips, has developed "DeepSeek Sparse Attention" (DSA) to improve processing efficiency in its latest language model, DeepSeek-V3.2-Exp. The technique, similar in spirit to the sparse transformers explored by OpenAI and Google Research, aims to reduce the computational cost of attention. DeepSeek claims its implementation achieves "fine-grained sparse attention" and has cut its API prices by 50%. The company's focus on squeezing performance out of limited hardware reflects an industry-wide push to improve AI models while keeping processing costs in check.
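To make the cost argument concrete, here is a minimal sketch of the general sparse-attention idea: instead of each token attending to every previous token (quadratic cost), each token attends to only a small subset, such as a local window of recent tokens. This is an illustrative toy in NumPy, not DeepSeek's actual DSA algorithm; the function name and window strategy are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_sparse_attention(q, k, v, window=4):
    """Toy local-window sparse attention (not DeepSeek's DSA).

    Each query attends only to the `window` most recent keys,
    so per-token work is O(window * d) instead of O(n * d).
    """
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo = max(0, i - window + 1)          # start of local window
        scores = q[i] @ k[lo:i + 1].T / np.sqrt(d)
        out[i] = softmax(scores) @ v[lo:i + 1]
    return out

# Demo: 8 tokens with 16-dimensional heads.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
out = local_sparse_attention(q, k, v, window=4)
```

With `window` set to the full sequence length, this reduces to ordinary causal attention; the savings come from keeping the window small relative to the context.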