When computing power becomes infrastructure: Further reflections on the decentralized path of AI

The AI industry's competitive focus is shifting from model capabilities to the control and allocation of computing power, which is becoming a concentrated structural power. Gonka proposes a decentralized protocol to redefine computing power as open infrastructure, addressing the bottlenecks of centralized cloud services.

  • The Real Bottleneck: The limitation for AI development is no longer model capability but the availability, cost, and opaque pricing of computing power, which is controlled by a few entities due to chip manufacturing, energy, capital, and geopolitics.
  • Infrastructure Protocol Logic: Inspired by systems like Bitcoin that coordinate global physical resources, Gonka's protocol aims to transform GPU resources into verifiable AI computational "work," with incentives based on genuine contributions.
  • Starting with AI Inference: The initial focus is on inference (rather than training) because it is the most pressing real-world computational bottleneck: the stage where the limitations of centralized services are most apparent and where decentralized efficiency can genuinely be tested.
  • Ensuring Genuine Computation: The network embeds verification into the computation itself, using tasks that cannot be pre-calculated or reused, making fraud economically unviable and rewarding consistently honest nodes.
  • Solving a Different Problem: Gonka does not aim to replace major tech companies but to address the open infrastructure layer they struggle with—creating a transparent market and protocol for direct negotiation between hardware providers and developers.
  • The Value of Computing Power: As global inference demand grows, stable and scalable computing power supply will remain scarce. The value lies in coordinating these resources, not necessarily owning the models.
  • A Long-Term Strategic Issue: Decentralized computing power is crucial because as AI becomes a core capability, its control expands from commerce to geopolitics. An open infrastructure layer is historically necessary for truly productive technological waves.
  • Two Possible Futures: The path forward could lead to computing power concentrated in few hands, making AI a closed capability, or to an open-protocol model where value flows to genuine contributors. Gonka advocates for the latter, seeking to redesign AI's foundational computing infrastructure.

Author | Gonka.ai

In previous articles, we have repeatedly made one judgment: the AI industry is undergoing a structural shift, with the focus of competition moving from model capabilities to the control and allocation of computing power.

Models can be replicated and algorithmic leads can be closed, but the way computing power is produced, allocated, and controlled is concentrating rapidly, and it increasingly determines who can truly participate in the next stage of AI competition.

This is not an emotional judgment, but the result of long-term observation of the evolution of industry, technology and infrastructure.

In this article, we go beyond that judgment and add a perspective that is often overlooked but crucial: the AI computing power problem is, at its core, an infrastructure protocol problem, not simply a technology or product problem.


I. The real bottleneck of AI is no longer at the model level.

In today's AI industry, a fact that is repeatedly overlooked is that what limits the development of AI is no longer model capability, but the availability of computing power.

A common characteristic of today's mainstream AI systems is that models, computing power, interfaces, and pricing power are tightly coupled in the hands of a handful of centralized entities. This is not a "choice" made by any particular company or country, but a natural consequence of a capital-intensive industry lacking open coordination mechanisms.

When computing power is packaged and sold as "cloud services," decision-making power naturally concentrates with whoever controls:

  • Chip manufacturing capabilities

  • Energy and data center scale

  • Capital structure and geopolitical advantages

This has led computing power to evolve gradually from a "resource" into a form of structural power. As a result, computing power has become expensive, its pricing is highly opaque, and its supply is subject to geopolitics, energy constraints, and export controls, which makes it deeply unfriendly to developers and small and medium-sized teams.

The production, deployment, and scheduling of advanced GPUs are highly concentrated in the hands of a few hardware manufacturers and hyperscale cloud providers, affecting not only startups but also the AI competitiveness of entire regions and countries. For many developers, computing power has turned from a "technical resource" into a "barrier to entry." The issue is not just price, but whether long-term, predictable computing power is available at all, whether one is locked into a single technology and supply chain, and whether one can participate in the underlying computing economy itself.

If AI becomes a general-purpose basic capability, then the mechanisms for producing and allocating computing power should not remain closed over the long term.


II. From Bitcoin to AI: The Common Logic of Infrastructure Protocols

We mention Bitcoin not to discuss its price or financial attributes, but because it is one of the few protocol systems that have truly succeeded in coordinating global physical resources.

Bitcoin never solved merely the "accounting" problem; it solved three more fundamental ones:

  1. How to motivate strangers to keep investing real-world resources

  2. How to verify that those resources were actually invested and produced work

  3. How to keep the system stable over the long term without a central controller

It transforms hardware and energy into verifiable "contributions" within the protocol, in a way that is extremely simple yet impossible to bypass.

AI computing power is moving toward a position remarkably similar to the one that energy once occupied.

When a capability is fundamental and scarce enough, what it ultimately needs is not more sophisticated commercial packaging, but a protocol layer that can coordinate resources over the long term.

In the Gonka network:

  • "Work" is defined as verifiable AI computing itself.

  • Incentives and governance rights come from genuine computing power contributions, not from capital or narratives.

  • GPU resources are put toward meaningful AI work as much as possible, rather than being burned on abstract security computation.

This is an attempt to redefine computing power as "open infrastructure".
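
To make this logic concrete, here is a minimal sketch of contribution-weighted rewards: each node's share of an epoch's reward pool (and, by the same ratio, its governance weight) is proportional to the computational work it has had verified. The epoch-based accounting, class names, and numbers are illustrative assumptions, not a description of Gonka's actual reward mechanism.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    verified_work: float  # units of verified AI computation this epoch

def distribute_epoch(nodes: list[Node], epoch_reward: float) -> dict[str, float]:
    """Split an epoch's reward pool in proportion to verified work.

    Governance influence can use the same ratio, so both rewards and
    voting power trace back to genuine computation rather than capital.
    """
    total = sum(n.verified_work for n in nodes)
    if total == 0:
        return {n.node_id: 0.0 for n in nodes}
    return {n.node_id: epoch_reward * n.verified_work / total for n in nodes}

# Example: three GPU providers with different amounts of verified inference work.
nodes = [Node("a", 120.0), Node("b", 60.0), Node("c", 20.0)]
print(distribute_epoch(nodes, epoch_reward=1000.0))
# {'a': 600.0, 'b': 300.0, 'c': 100.0}
```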


III. Why start with AI inference instead of training?

We chose to start with AI inference not because training is unimportant, but because inference has become the most pressing computational bottleneck in the real world.

As AI moves from experimental to production environments, the cost, stability, and predictability of continuous inference are becoming real concerns for developers. And it is precisely at this stage that the limitations of centralized cloud services become most apparent.

From a network design perspective, inference has several key characteristics:

  • Its workload is continuous and measurable (a minimal metering sketch follows this list)

  • Its efficiency can be optimized in ways well suited to decentralized environments

  • It provides a genuine test of whether the computing power verification and incentive mechanisms actually work
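
On the first point, counting the tokens a node serves gives a continuous, auditable measure of its inference work. The token-based unit, field names, and weighting of completion tokens below are assumptions chosen for illustration, not the network's actual accounting scheme.

```python
from dataclasses import dataclass

@dataclass
class InferenceMeter:
    """Accumulates a node's measurable inference work over a billing window."""
    prompt_tokens: int = 0
    completion_tokens: int = 0
    requests: int = 0

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        # Each served request adds a measurable, auditable quantity of work.
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens
        self.requests += 1

    def billable_units(self, completion_weight: float = 3.0) -> float:
        # Completion tokens typically cost more compute than prompt tokens,
        # so they are weighted more heavily (the weight here is arbitrary).
        return self.prompt_tokens + completion_weight * self.completion_tokens

meter = InferenceMeter()
meter.record(prompt_tokens=512, completion_tokens=128)
meter.record(prompt_tokens=300, completion_tokens=256)
print(meter.billable_units())  # 812 prompt + 3.0 * 384 completion = 1964.0
```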

Training is certainly important, and we plan to introduce training capabilities in the future, having already allocated part of the network's revenue to support long-term training needs. However, the infrastructure must first be tested and proven in real-world scenarios.


IV. Decentralized computing power: How to avoid "fake computation"?

A common question is: In a decentralized environment, how can we ensure that nodes are actually performing AI computations and not fabricating results?

Our answer is to embed the verification logic into the computation itself, so that influence in the network can only come from continuous, genuine computational contributions.

The network requires nodes to run short computational sprints: inference tasks on a large, randomly initialized Transformer model. These tasks:

  • Cannot be pre-computed

  • Cannot be answered by reusing historical results

  • Cost more to falsify than to compute honestly

The network does not fully re-check every computation. Instead, it samples continuously and dynamically raises the strength of verification, making fraud economically unviable. Nodes that consistently submit correct results naturally gain greater participation and influence.
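
The sketch below shows the general shape of such a scheme: a verifier re-executes a randomly sampled fraction of a node's submissions and sharply raises the sampling rate for any node that has ever been caught diverging, so sustained fraud costs more than honest work. The sampling rates, the seed-keyed stand-in workload, and the function names are illustrative assumptions, not Gonka's actual verification protocol.

```python
import random

BASE_RATE = 0.05       # fraction of results spot-checked for trusted nodes
ESCALATED_RATE = 0.5   # fraction checked after any mismatch is observed
suspicion: dict[str, bool] = {}  # node_id -> has this node ever failed a check?

def run_inference(seed: int) -> bytes:
    """Stand-in for the real workload: a deterministic forward pass keyed by a
    fresh seed, so the result cannot be precomputed or copied from history."""
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(32))

def verify_submission(node_id: str, seed: int, claimed: bytes) -> bool:
    """Probabilistically re-run the task and compare against the claim."""
    rate = ESCALATED_RATE if suspicion.get(node_id) else BASE_RATE
    if random.random() > rate:
        return True  # not sampled this time; honesty is enforced statistically
    honest = run_inference(seed)
    if honest != claimed:
        suspicion[node_id] = True  # future work from this node is checked far more often
        return False
    return True

# Honest node: its claimed output matches the recomputation whenever sampled.
print(verify_submission("honest-node", seed=42, claimed=run_inference(42)))
# Dishonest node: a fabricated result is caught as soon as it falls in the sample.
print(verify_submission("lazy-node", seed=43, claimed=b"\x00" * 32))
```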


V. Competing with centralized giants, or solving problems at different levels?

We are not trying to “replace” OpenAI, Google, or Microsoft.

Large tech companies have an advantage in building efficient AI stacks within closed systems. However, this model inherently brings about:

  • Restricted access

  • Lack of pricing transparency

  • Capabilities concentrated in a few entities

We focus on the layer these systems struggle to cover: open, verifiable, infrastructure-level coordination of computing power.

It is not a service, but a market and a protocol that lets hardware providers and developers negotiate directly over computing efficiency and the authenticity of the work performed.
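
As a rough picture of what a market and protocol (rather than a service) might look like, the sketch below matches a developer's request against hardware providers' openly published offers. The offer fields and the cheapest-qualifying-offer rule are hypothetical simplifications, not the protocol's actual market design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    provider: str
    price_per_unit: float   # openly quoted price for one unit of verified inference work
    capacity_units: int     # units the provider can currently serve

@dataclass
class Request:
    developer: str
    max_price: float
    units_needed: int

def match(request: Request, offers: list[Offer]) -> Optional[Offer]:
    """Pick the cheapest offer that satisfies the request's price cap and size.

    Because prices and capacities are published in the open, developers and
    providers deal with each other directly instead of through an opaque reseller.
    """
    candidates = [
        o for o in offers
        if o.price_per_unit <= request.max_price and o.capacity_units >= request.units_needed
    ]
    return min(candidates, key=lambda o: o.price_per_unit, default=None)

offers = [Offer("gpu-farm-eu", 0.8, 10_000), Offer("gpu-farm-asia", 0.6, 2_000)]
req = Request("indie-dev", max_price=0.7, units_needed=1_500)
print(match(req, offers))
# Offer(provider='gpu-farm-asia', price_per_unit=0.6, capacity_units=2000)
```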


VI. Will computing power be "commodified"? Where will its value go?

Many people believe that as the cost of inference decreases, the value will ultimately concentrate at the model layer. However, this judgment often overlooks a crucial premise:

Computing power is not a commodity with an unlimited supply.

Computing power is limited by:

  • Chip manufacturing capabilities

  • Energy and geographical distribution

  • Infrastructure coordination efficiency

As the demand for inference continues to grow globally, what will truly be scarce is a stable, predictable, and scalable supply of computing power. Whoever can coordinate these resources will control the structural value.

What we're trying to do isn't to own the model, but to enable more participants to directly engage in the computing power economy itself, rather than just being "paying users."


VII. Why is decentralized computing power a long-term issue?

Our judgment is not based on theory, but on real-world experience in building AI systems in centralized environments.

When AI becomes a core capability, computing power decisions are no longer just technical issues, but strategic ones. This concentration is expanding from the commercial level to the geopolitical and sovereignty level.

If AI is the new infrastructure, then the way computing power is coordinated will determine the openness of future innovation.

Historically, every technological wave that truly unleashes productivity has ultimately required an open infrastructure layer. AI will be no exception.


Conclusion: Two Paths to the Future

We are heading toward one of two possible futures:

  • Either computing power keeps concentrating in the hands of a few companies and countries, and AI becomes a closed capability.

  • Or global computing power is coordinated through open protocols, and value flows to genuine contributors.

Gonka doesn't claim to be the answer, but we know which side we're on.

If AI will profoundly change the world, then the computing infrastructure that supports it also deserves to be redesigned.
