DeepSeek releases 671-billion-parameter open-source model focused on mathematical theorem proving

PA一线 · 04/30/2025, 09:57 AM

PANews reported on April 30 that, according to community posts and its Hugging Face page, DeepSeek has open-sourced a new model, DeepSeek-Prover-V2-671B, targeting mathematical theorem-proving tasks. The 671B-parameter model is built on a mixture-of-experts (MoE) architecture and is trained for formal reasoning in the Lean 4 framework, combining reinforcement learning with large-scale synthetic data to significantly improve automated proof capability. The model is available on Hugging Face and supports local deployment and commercial use.
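For readers unfamiliar with formal theorem proving, the task such a model is trained on is roughly: given a mathematical statement written in Lean 4, produce a machine-checkable proof. The following is a minimal illustrative Lean 4 snippet (not taken from the model's training data); the theorem name is hypothetical, and `Nat.add_comm` is a lemma from Lean's standard library.

```lean
-- Illustrative example: a formal statement and its proof in Lean 4.
-- A prover model is given the statement (everything before `:=`)
-- and must generate a proof script that the Lean checker accepts.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

Because Lean mechanically verifies every proof, a model's output can be checked automatically, which is what makes reinforcement learning and large-scale synthetic proof data practical for this task.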



This content is provided for informational purposes only and does not constitute investment advice.
