PANews reported on April 30 that, according to community posts and its Hugging Face page, DeepSeek has open-sourced a new model, DeepSeek-Prover-V2-671B, focused on mathematical theorem proving. The model is built on a mixture-of-experts (MoE) architecture and is trained for formal reasoning in the Lean 4 framework. It has 671 billion parameters and combines reinforcement learning with large-scale synthetic data to significantly improve automated proof capability. The model is available on Hugging Face and supports local deployment and commercial use.
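To illustrate the kind of task such a model targets: in Lean 4, a theorem is stated formally and the prover must supply a machine-checkable proof. The example below is a generic Lean 4 sketch, not taken from DeepSeek's materials; the theorem name is illustrative.

```lean
-- A toy example of the formal statements a theorem-proving model works on:
-- commutativity of natural-number addition, stated in Lean 4.
-- The model's job is to generate the proof term after `:=`
-- so that Lean's kernel can verify it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- here the proof simply reuses a standard-library lemma
```

In practice, an automated prover is given only the statement and must search for a proof that the Lean 4 checker accepts, which is what makes formal theorem proving a well-defined benchmark for reasoning models.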
DeepSeek releases 671-billion-parameter open-source model focused on mathematical theorem proving