Gradient, a distributed AI lab, today released Echo-2, a distributed reinforcement learning framework (arxiv.org/pdf/2602.02192) aimed at removing the cost barriers to efficient AI research. By fully decoupling the Learner and Actor at the architectural level, Echo-2 cuts the post-training cost of a 30B model from $4,500 to $425, translating to more than 10x the research throughput on the same budget.
The framework uses in-memory compute separation for asynchronous training (Async RL), offloading the heavy sampling workload to unreliable GPU instances and heterogeneous GPUs via Parallax. Combined with bounded staleness, fault-tolerant instance scheduling, and Gradient's in-house Lattica communication protocol, it significantly improves training efficiency while preserving model accuracy. Alongside the framework, Gradient will soon launch Logits, an RLaaS platform, moving AI research from a paradigm of "capital accumulation" to one of "efficiency iteration." Logits is now open for reservations by students and researchers worldwide (logits.dev).
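To make the decoupled Learner/Actor and bounded-staleness ideas concrete, here is a minimal sketch using plain Python threads and queues. It is illustrative only: the names (actor_loop, learner_loop, MAX_STALENESS, etc.) are assumptions for this example and are not Echo-2's actual API; the point is that actors keep sampling with a slightly stale policy snapshot while the learner rejects rollouts that drift past the staleness bound.

```python
# Sketch of a decoupled Learner/Actor loop with bounded staleness.
# All names and constants here are illustrative assumptions, not Echo-2's API.
import queue
import random
import threading
import time

MAX_STALENESS = 4          # how many learner steps an actor's policy may lag
NUM_ACTORS = 3             # stand-ins for remote, possibly preemptible GPU workers
TOTAL_LEARNER_STEPS = 20

rollout_queue = queue.Queue(maxsize=64)   # actors push trajectories here
policy_version = 0                        # latest version published by the learner
policy_lock = threading.Lock()

def actor_loop(actor_id: int) -> None:
    """Generate rollouts with a possibly stale policy snapshot."""
    local_version = 0
    while True:
        with policy_lock:
            current = policy_version
        if current >= TOTAL_LEARNER_STEPS:
            return
        # Refresh the snapshot only when it drifts past the staleness bound,
        # so sampling keeps running even while new weights are in flight.
        if current - local_version > MAX_STALENESS:
            local_version = current
        reward = random.random()          # placeholder for an environment rollout
        rollout_queue.put((actor_id, local_version, reward))
        time.sleep(0.01)                  # simulate slow, heterogeneous hardware

def learner_loop() -> None:
    """Consume rollouts, drop overly stale ones, and advance the policy."""
    global policy_version
    while policy_version < TOTAL_LEARNER_STEPS:
        actor_id, version, reward = rollout_queue.get()
        with policy_lock:
            if policy_version - version > MAX_STALENESS:
                continue                  # reject rollouts beyond the staleness bound
            policy_version += 1           # placeholder for a gradient update
            step = policy_version
        print(f"step {step:2d}: rollout from actor {actor_id} "
              f"(policy v{version}, reward {reward:.2f})")

actors = [threading.Thread(target=actor_loop, args=(i,), daemon=True)
          for i in range(NUM_ACTORS)]
for t in actors:
    t.start()
learner_loop()
```

In a real deployment the queue and policy snapshot would live behind a network transport (the role Lattica plays in Echo-2's description), and the staleness bound is what keeps asynchronous sampling from degrading learner accuracy.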
About Gradient
Gradient is an AI lab dedicated to building distributed infrastructure, focusing on the distributed training, serving, and deployment of cutting-edge large-scale models. Backed by top-tier investors, Gradient is building an open and efficient future for intelligence.

