Ant Group announces a major AI breakthrough: its large model can be trained efficiently on lower-performance devices using domestic GPUs
2025-03-24 14:34:41

Recently, the Ling team at Ant Group published a technical paper presenting two Mixture-of-Experts (MoE) large language models of different sizes: Ling-Lite and Ling-Plus. Ling-Lite has 16.8 billion total parameters (2.75 billion activated per token), while the Ling-Plus base model scales up to 290 billion total parameters (28.8 billion activated). Both models achieve industry-leading performance. Beyond the self-developed models themselves, the paper's biggest contribution is a series of innovative methods for improving the efficiency and accessibility of AI development in resource-constrained environments. Experiments show that the 300-billion-parameter-scale MoE model can be trained efficiently on lower-performance devices using domestic GPUs, with performance comparable to dense models of similar scale and to MoE models trained entirely on NVIDIA chips.
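The key to this efficiency is the MoE architecture itself: a router activates only a small subset of "expert" sub-networks per token, so the compute per token is a fraction of the total parameter count (e.g. 28.8B activated out of 290B for Ling-Plus). The following is a minimal illustrative sketch of a generic top-k MoE layer in PyTorch, not Ant Group's Ling implementation; the class name, sizes, and routing scheme are hypothetical, chosen only to make the total-versus-activated ratio concrete.

```python
# Minimal sketch of a top-k Mixture-of-Experts (MoE) layer.
# NOT the Ling models' actual code; all names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int, k: int):
        super().__init__()
        self.k = k
        # Router that scores each expert for each token.
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Only the top-k experts per token run;
        # the other experts' weights are never touched for that token.
        scores = self.gate(x)                       # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # pick k experts/token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = TopKMoE(d_model=512, d_ff=2048, n_experts=16, k=2)
total = sum(p.numel() for p in moe.parameters())
# Per token, only the router plus k experts' weights are exercised.
active = (sum(p.numel() for p in moe.gate.parameters())
          + 2 * sum(p.numel() for p in moe.experts[0].parameters()))
print(f"total params: {total:,}  activated per token: {active:,}")
```

Running the sketch shows the activated count is roughly one-eighth of the total here (2 of 16 experts), mirroring the roughly one-tenth activation ratio the paper reports for the Ling models.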