DeepSeek releases 671-billion-parameter open-source model focused on mathematical theorem proving

PANews reported on April 30 that, according to community posts and its Hugging Face page, DeepSeek has open-sourced a new model, DeepSeek-Prover-V2-671B, focused on mathematical theorem proving. The model is built on a mixture-of-experts (MoE) architecture with 671 billion parameters and was trained for formal reasoning in the Lean 4 framework, combining reinforcement learning with large-scale synthetic data to significantly improve its automated theorem-proving capability. The model is available on Hugging Face and supports local deployment and commercial use.
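For readers who want to try the model locally, the following is a minimal sketch of loading it with the Hugging Face `transformers` library. The repo id matches the public Hugging Face listing, but the prompt format shown is an assumption rather than an official template, and a 671B-parameter checkpoint requires multi-GPU or offloaded inference in practice, so treat this as illustrative rather than a turnkey deployment.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Prover-V2-671B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # shard across available GPUs / offload to CPU
    torch_dtype="auto",      # use the checkpoint's native precision
    trust_remote_code=True,  # DeepSeek checkpoints ship custom model code
)

# Ask the model to complete a Lean 4 proof. The prompt below is a
# hypothetical example, not DeepSeek's documented input format.
prompt = "theorem add_comm' (a b : Nat) : a + b = b + a := by\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```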


Author: PA一线

This content is for informational purposes only and does not constitute investment advice.
