Gradient releases Echo-2 distributed RL framework, claiming a more than 10x improvement in AI research efficiency.

PANews reported on February 12 that Gradient, a distributed AI lab, today released the Echo-2 distributed reinforcement learning framework. Echo-2 reduces the cost of training 30B+ models to approximately $425 per roughly 9.5-hour session by decoupling the Learner and the Actor and using asynchronous RL with bounded staleness. Its three-plane architecture supports plug-and-play components, and Lattica can distribute 60GB+ of weights in minutes. The paper claims that using Parallax to schedule distributed RTX 5090s to train Qwen3-8B models is 36% cheaper than centralized A100s, without training divergence.
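The core idea reported here, asynchronous RL with bounded staleness between a decoupled Learner and Actor, can be sketched in a few lines. This is a minimal illustration of the general technique, not Echo-2's actual implementation; the class names, the `MAX_STALENESS` bound, and the version-counter scheme are all assumptions for the sketch.

```python
from collections import deque

MAX_STALENESS = 2  # hypothetical bound: accept rollouts at most 2 policy versions old

class Learner:
    """Holds the current policy version and accepts rollouts within the staleness bound."""
    def __init__(self):
        self.version = 0       # current policy version
        self.buffer = deque()  # accepted rollouts awaiting a gradient step

    def submit(self, rollout_version, rollout):
        # Bounded staleness: reject rollouts generated by a policy that is too old.
        if self.version - rollout_version > MAX_STALENESS:
            return False
        self.buffer.append((rollout_version, rollout))
        return True

    def step(self):
        # Consume buffered rollouts (a real learner would compute gradients here)
        # and publish a new policy version for actors to pick up.
        self.buffer.clear()
        self.version += 1

class Actor:
    """Generates rollouts tagged with the (possibly stale) policy version it used."""
    def __init__(self, learner):
        self.learner = learner

    def rollout(self):
        # A real actor would run environments with a cached copy of the weights.
        v = self.learner.version
        return v, {"policy_version": v}

learner = Learner()
actor = Actor(learner)

v, data = actor.rollout()           # generated under version 0
for _ in range(3):
    learner.step()                  # learner advances to version 3
stale_accepted = learner.submit(v, data)   # gap 3 > 2, so rejected
fresh_accepted = learner.submit(*actor.rollout())  # gap 0, accepted
print(stale_accepted, fresh_accepted)  # False True
```

The point of the bound is that actors never block on the learner (unlike synchronous RL), while the learner still refuses trajectories too far off-policy, which is what keeps training from diverging.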


Author: PA一线

This content is for market information only and is not investment advice.
