Gonka discloses its Proof of Compute (PoC) mechanism and model evolution direction: aligning with real computing power and ensuring continuous participation for GPUs of all tiers.

PANews reported on January 19th that Gonka, a decentralized AI computing network, recently explained the phased adjustments to its Proof of Compute (PoC) mechanism and model operation in a community AMA. The adjustments mainly include: using the same large model for both PoC and inference; changing PoC activation from delayed switching to near real-time triggering; and optimizing how computing power weights are calculated so they better reflect the actual computational costs of different models and hardware.
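
For illustration only, the sketch below shows how a node might serve both roles with a single resident model and react to a PoC trigger in near real time; the class and function names are hypothetical stand-ins, not Gonka's published code.

```python
import time

# Hypothetical sketch of a node serving both PoC challenges and user
# inference with one loaded model. Names and structure are illustrative,
# not Gonka's actual implementation.

class Node:
    def __init__(self, model_name: str):
        # With a unified model, weights are loaded once and reused for both
        # consensus (PoC) and inference work -- no reload on switching roles.
        self.model_name = model_name
        self.loaded = True

    def run_inference(self, prompt: str) -> str:
        # Stand-in for a real forward pass.
        return f"{self.model_name} completed inference for: {prompt!r}"

    def run_poc_challenge(self, seed: int) -> str:
        # The PoC challenge reuses the same resident weights, so activation
        # time is dominated by trigger latency rather than a model swap.
        return f"{self.model_name} answered PoC challenge {seed}"

def on_poc_trigger(node: Node, seed: int) -> float:
    # Near real-time activation: react to the trigger event immediately
    # instead of waiting for a scheduled switch window.
    start = time.monotonic()
    node.run_poc_challenge(seed)
    return time.monotonic() - start

node = Node("shared-large-model")
print(node.run_inference("hello"))
print(f"PoC activation took {on_poc_trigger(node, seed=42):.4f}s")
```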

Co-founder David stated that these adjustments were not aimed at short-term output or at individual participants, but rather represent a necessary evolution of the consensus and verification structure as the network's computing power rapidly expands. The goal is to improve the network's stability and security under high load, laying the foundation for supporting larger-scale AI workloads in the future.

Regarding community concerns that smaller models currently yield disproportionately high token output, the team pointed out that the actual computing power consumed to produce the same number of tokens differs significantly across models of different sizes. As the network evolves toward higher computing power density and more complex tasks, Gonka is gradually steering computing power weights to align with actual compute costs, to avoid a long-term imbalance in the computing power structure that could limit the network's overall scalability.
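
As a back-of-envelope illustration (not Gonka's published weight formula), a common rule of thumb estimates a transformer forward pass at roughly 2 FLOPs per parameter per generated token, so equal token counts can hide an order-of-magnitude gap in real compute:

```python
# Illustrative arithmetic only: Gonka's actual weight formula has not been
# published. This sketch just shows why equal token counts imply very
# unequal compute costs across model sizes.

def flops_per_token(num_params: float) -> float:
    # Rule of thumb: ~2 FLOPs per parameter per generated token (forward pass).
    return 2.0 * num_params

models = {"7B": 7e9, "70B": 70e9}
tokens = 1_000_000

for name, params in models.items():
    total = flops_per_token(params) * tokens
    print(f"{name}: ~{total:.2e} FLOPs for {tokens:,} tokens")

# Weighting by estimated FLOPs rather than raw token counts makes the 70B
# model's million tokens count ~10x the 7B model's, matching real cost.
```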

Under the latest PoC mechanism, the network has reduced PoC activation time to under 5 seconds, minimizing the computational waste caused by model switching and waiting and allowing a larger share of GPU resources to go toward useful AI computation. At the same time, running a single unified model reduces the system overhead of nodes switching between consensus and inference, improving overall computational efficiency. The team also emphasized that single-GPU and small-to-medium-scale operators can continuously earn rewards and participate in governance through mining-pool collaboration, flexible Epoch-by-Epoch participation, and inference tasks.
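
The AMA did not detail the pool mechanics; a minimal sketch, assuming a pool that splits each Epoch's reward pro rata by contributed compute weight, shows how even a single-GPU operator earns in the epochs it joins:

```python
# Hypothetical pool accounting: the pro-rata split rule and member names are
# assumptions for illustration, not Gonka's specified mechanism.

def split_epoch_reward(reward: float, contributions: dict[str, float]) -> dict[str, float]:
    # Each member receives a share of the epoch reward proportional to the
    # compute weight they contributed during that epoch.
    total = sum(contributions.values())
    return {member: reward * share / total for member, share in contributions.items()}

# A single-GPU operator joins only the epochs it can cover and still earns
# a pro-rata share alongside larger members.
epoch_contributions = {"single_gpu_op": 1.0, "midsize_farm": 9.0}
print(split_epoch_reward(100.0, epoch_contributions))
# {'single_gpu_op': 10.0, 'midsize_farm': 90.0}
```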


Author: PA一线

This content is for informational purposes only and does not constitute investment advice.
