Whoever controls computing power implicitly controls the future of AI: Anastasia, co-founder of the Gonka protocol

  • The article discusses the centralization of AI compute power as a critical issue in the AI industry, with physical limits to data centers hindering innovation and creating risks like barriers to entry and systemic fragility.
  • Gonka protocol introduces a decentralized network that uses time-bound security mechanisms, selective verification, and contribution-based rewards to enhance compute efficiency and accessibility.
  • The interview examines the efficiency metrics that actually matter, such as speed, cost transparency, GPU utilization, and incentive design, and how a network can stay scalable and accessible for diverse participants.
  • Decentralizing compute power is framed as urgent: without it, the market consolidates, innovation narrows, and AI agents lack the transparent pricing and reliability they depend on.
  • Gonka's flexible architecture adapts to global regulatory challenges, balancing network openness with compliance to foster a self-regulating AI compute economy.
Summary

Training large-scale models requires building or upgrading data centers, but centralized infrastructure is now facing hard physical limits, and control over computing power is becoming a critical power node in the AI industry. This is where Gonka comes in. The Gonka protocol is a permissionless global network that anyone can join, with requests routed programmatically among distributed participants. In an exclusive interview with Analytics Insight, Gonka co-founder and senior product manager Anastasia Matveeva discusses how the project is rethinking how computing power is acquired in order to build a more open and secure AI ecosystem.

Q: Public discussion about AI often focuses on the centralization of models, but pays less attention to the centralization of computing power. Why is control of computing power becoming a key power node in the AI industry? What risks does this concentration pose to innovation and the market as a whole?

A: Public discussion often focuses on the model because the model is visible. But the real power lies at a deeper level—computing power, which is the foundational layer that determines who can build, deploy, and scale AI systems.

The control of computing power has become critical for both economic and physical reasons. The main bottleneck for modern AI is no longer algorithms, but the ability to acquire GPUs, electricity, and data center capacity.

Training large-scale models increasingly requires the construction or upgrading of data centers. However, centralized infrastructure is encountering physical limits: energy density, thermal constraints, and the maximum power capacity that a single location can support. The industry is exploring extreme solutions—redesigning chips, cooling systems, and new energy sources.

This concentration has led to systemic consequences.

First, it creates structural barriers to innovation. Access to computing power becomes an infrastructure privilege rather than a merit-based competition. Small teams, independent researchers, and even entire regions are priced out, shrinking the space for experimentation and pushing innovation toward conservatism.

Second, the centralization of computing power reinforces a "rent extraction" model. AI has the potential to create abundance, since intelligence is inherently replicable, but that abundance is artificially suppressed when the underlying infrastructure is scarce and controlled. The market shifts toward subscription models, lock-in effects, and pricing power, rather than cost reductions and widespread accessibility.

Third, it introduces systemic vulnerability. When advanced computing power is concentrated in the hands of a few operators and geographical locations, regulatory, political, or physical disturbances can impact the entire AI ecosystem. Dependence becomes structural, not optional.

More importantly, computing power is not neutral. Whoever controls computing power implicitly determines what is feasible, permissible, and economically sustainable. When this control is centralized, AI governance will be formed by default, not by design.

The risk is not just monopolies, but a long-term distortion of the trajectory of AI development: fewer builders, lower application diversity, slower hardware innovation, and infrastructure that cannot match the ambitions of the next generation of models.

Therefore, computing power must be treated as fundamental infrastructure. An architecture that can scale at both the economic and physical levels is crucial to the future of AI.

Q: Many AI computing platforms—whether centralized or decentralized—claim to be highly efficient. What are the truly important metrics when evaluating the efficiency of AI computing systems? In what ways do these models typically encounter practical limitations?

A: Computing efficiency is often used as a marketing concept. In reality, only a few specific metrics are truly important, covering user-side performance, provider operational efficiency, and the incentive structure that governs both.

For users, efficiency means speed and cost transparency.

Speed refers to latency under real-world demand. Centralized hubs typically have an advantage due to their physical co-location. However, decentralized architectures can achieve similar performance if the blockchain serves only as a security layer and does not participate in the real-time execution path. As long as requests are processed off-chain, the protocol itself does not add latency.

Cost transparency is equally crucial. While "cost per token" is a common KPI, model integrity often lacks transparency. In centralized environments, the product can be a black box: during peak periods, providers may quietly adjust model configurations to protect margins, and these changes are often invisible to users even though they affect output quality. True efficiency requires pricing that reflects consistent, verifiable computation.

For providers, efficiency is a balance between GPU utilization and flexibility.

Centralized operators perform well in terms of utilization, with GPUs running at near full capacity in a co-located environment, but they lack elasticity and incur idle costs during periods of low demand.

Decentralized networks sacrifice utilization to some extent in exchange for flexibility, but must minimize consensus and verification overhead so that computing power can be redistributed among different workloads as needed.

The most crucial element is incentive design.

When benefits are tied to faster, cheaper, and verifiable AI workloads, optimization becomes structural. Participants are incentivized to improve hardware efficiency, reduce latency, and experiment with dedicated chips.

Conversely, if rewards or governance weights are primarily linked to capital holdings, optimization will deviate from infrastructure performance, and inefficiencies will become entrenched.

In Gonka, efficiency is embedded in the protocol layer: almost 100% of computing power is used for real AI workloads (primarily inference). Rewards and governance weights are based on measured computing power contributions, not capital holdings.

True efficiency only occurs when most computing power is used for real-world tasks, proven contributions are incentivized, and internal overhead does not grow uncontrollably with network size.

Q: Is it possible for decentralized AI computing networks to dedicate most of their computing power to real AI workloads, rather than maintaining the network itself? What are the key architectural choices?

A: Yes, that's possible—but only if overhead is viewed as a core architectural constraint, rather than an inevitable byproduct of decentralization.

Most decentralized computing networks dedicate significant resources to maintaining consensus and security, rather than AI workloads. This is because the separation of productive tasks and security mechanisms leads to redundant computation.

To ensure that the majority of computing power is used for real-world AI tasks, several key principles are required:

First, security and measurement mechanisms must be "time-bound," not continuously running. Proof mechanisms should run in short, defined cycles rather than continuously consuming resources. In Gonka, this is achieved through Sprints (structured, time-bound cycles); outside of these cycles, hardware resources can be used for real AI workloads.
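As a rough illustration, the scheduling logic reduces to confining proof work to a short window of each cycle. The sketch below is a minimal Python model of that idea; the cycle lengths and the proof and inference task names are hypothetical placeholders, not parameters of the actual protocol.

```python
import time

# Hypothetical cycle parameters for illustration only; the real Sprint
# lengths and proof tasks are defined by the Gonka protocol itself.
SPRINT_PERIOD_S = 24 * 60 * 60   # assume one proof cycle per day
PROOF_WINDOW_S = 10 * 60         # assume a 10-minute proof window per cycle

def in_proof_window(now: float) -> bool:
    """True only during the short, time-bound proof window of each cycle."""
    return (now % SPRINT_PERIOD_S) < PROOF_WINDOW_S

def run_node(now: float) -> str:
    # Security and measurement work is confined to the proof window;
    # all remaining GPU time goes to productive AI workloads.
    if in_proof_window(now):
        return "run_proof_of_compute()"    # placeholder for the proof task
    return "serve_inference_requests()"    # placeholder for real workloads

print(run_node(time.time()))
```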

Second, verification overhead is reduced by adjusting it selectively and dynamically based on reputation, rather than fully re-verifying every task. New participants' work may be 100% verified; as reputation is established, the verification rate can drop to approximately 1%. Overall verification can be held below roughly 10% of network computing power while maintaining security.

Participants who attempt to cheat will not receive a reward, thus making cheating economically unsustainable.
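A minimal sketch of how such reputation-based spot-checking could work is shown below. The linear schedule, the constants, and the settlement rule are illustrative assumptions for exposition, not Gonka's actual verification formula.

```python
import random

MIN_RATE = 0.01  # established participants: spot-check ~1% of tasks
MAX_RATE = 1.00  # new participants: verify everything

def verification_rate(reputation: float) -> float:
    """Map reputation in [0, 1] to a sampling rate between MAX_RATE and
    MIN_RATE. The linear interpolation is an assumption for illustration."""
    r = min(max(reputation, 0.0), 1.0)
    return MAX_RATE - (MAX_RATE - MIN_RATE) * r

def should_verify(reputation: float) -> bool:
    # Random sampling keeps spot-checks unpredictable to the worker.
    return random.random() < verification_rate(reputation)

def settle(reward: float, sampled: bool, passed: bool) -> float:
    # A sampled task that fails verification pays nothing, and the failure
    # would also cut reputation, pushing the worker back toward 100%
    # verification; that is what makes sustained cheating uneconomical.
    return 0.0 if (sampled and not passed) else reward
```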

Third, rewards and governance weights must be linked to verified computing power contributions, not capital holdings.

Decentralized computing power can truly serve real-world workloads when consensus is lightweight, verification is adaptive, and incentives are aligned with productive computing.

Q: Decentralized AI computing networks typically emphasize open participation, but infrastructure requirements can create high barriers to entry. How can such systems maintain accessibility for participants with vastly different computing power levels while scaling up?

A: While decentralized networks aim to lower the barrier to entry for AI infrastructure, their long-term survival still requires competition with centralized providers and meeting real-world needs. Hardware constraints ultimately boil down to one core requirement: the ability to support models that truly have market demand.

To achieve scalability while maintaining accessibility, several principles are crucial.

First, there is permissionless infrastructure access. Any GPU owner—whether a single-device operator or a large data center—should be able to join the network without approval processes or centralized gatekeeping. This eliminates structural barriers to entry.

Second, rewards and influence are proportional to proven computing power. In a compute-weighted model, higher computational contributions naturally lead to a larger share of tasks, rewards, and governance weight. This doesn't make small participants completely equal to large participants, nor should it. The key is consistent rules: influence is determined by actual computational contribution, not by capital, delegation mechanisms, or financial leverage.
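As a sketch, compute-weighted influence comes down to one consistent rule applied identically to every participant. The measurements below are hypothetical numbers, assuming per-node compute contributions have already been proven.

```python
def proportional_weights(proven_compute: dict[str, float]) -> dict[str, float]:
    """Share of tasks, rewards, and governance weight scales with measured
    compute contribution, never with capital, delegation, or leverage."""
    total = sum(proven_compute.values())
    return {node: c / total for node, c in proven_compute.items()}

# The same rule covers a solo GPU owner and a pooled data center alike:
weights = proportional_weights({"solo-gpu": 1.0, "pool-a": 40.0, "datacenter-b": 120.0})
# -> solo-gpu ~0.6%, pool-a ~24.8%, datacenter-b ~74.5%
```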

Third is the role of pools. In systems with real-world infrastructure requirements, resource aggregation naturally occurs. Pools allow smaller participants to consolidate resources, reduce volatility, and participate in larger-scale workloads.

However, the architecture must avoid granting structural advantages to large computing pools or incentivizing excessive concentration of influence. Computing pools should exist as coordination tools, not as mechanisms for re-centralization.

Ultimately, scaling up decentralized AI computing networks should not mean raising the barriers to entry. It should mean increasing overall computing capacity while maintaining neutrality, transparency, and consistent participation rules, and preserving the real economic value the network creates for users. Open access, proportional economic mechanisms, and controlled levels of centralization determine whether a system remains decentralized as it grows.

Q: Why is the issue of decentralized AI computing power particularly urgent at this moment? If this problem is not solved in the next few years, what do you think will be the long-term consequences for the industry?

A: This urgency reflects that AI is moving from the experimental stage to the infrastructure stage.

As mentioned earlier, computing power has become a physical bottleneck. Scalability is increasingly constrained not only by capital but also by energy, power density, and data center limitations. Furthermore, access to advanced GPUs and hyperscale infrastructure is influenced by long-term contracts, corporate centralization, and national strategic priorities.

This combination exacerbates structural asymmetry. Those controlling large-scale infrastructure continue to consolidate their advantages, while barriers to entry for smaller teams and emerging regions continue to rise. The risk lies not only in market concentration but also in the widening of the global computing power gap.

If this trend continues, innovation will rely more on infrastructure access than on the ideas themselves. The AI market may solidify into a rent-based model where intelligence is accessed under conditions set by a few dominant providers.

Therefore, decentralized computing power is not an ideological debate. It is a response to visible structural constraints, and a choice that will shape the long-term architecture of the AI industry.

Q: AI agents are increasingly booking GPU resources autonomously. How does Gonka's architecture support seamless agent integration and a self-regulating AI computing economy?

A: The rise of agent-based AI means that systems are making increasingly autonomous decisions—including acquiring computing resources. In this model, computing power becomes a core asset in economic interactions between agents.

Such an ecosystem requires programmatic access, transparent economic mechanisms, and reliability.

First, integration must be seamless. Gonka provides an OpenAI-compatible API, enabling most AI agents to integrate without changing their architecture or workflow.
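In practice, OpenAI compatibility means an agent can switch to the network by changing its endpoint rather than its code. The sketch below uses the standard openai Python client; the base URL, API key, and model name are placeholders, not Gonka's actual values.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example-gonka-node.com/v1",  # hypothetical endpoint
    api_key="YOUR_GONKA_API_KEY",                          # placeholder credential
)

# The request shape is identical to a call against any OpenAI-compatible
# API, so existing agent frameworks need no architectural changes.
response = client.chat.completions.create(
    model="llama-3.1-70b-instruct",  # example model; the actual catalog may differ
    messages=[{"role": "user", "content": "Plan my GPU budget for this week."}],
)
print(response.choices[0].message.content)
```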

Second, the economics of computing power must be transparent and system-driven. Pricing should adjust dynamically based on network load rather than being fixed through contracts. In the network's early stages, inference costs are designed to be significantly lower than those of centralized providers, because participants are compensated not only through user fees but also through a Bitcoin-like issuance mechanism that distributes rewards in proportion to available computing capacity.

This architecture enables AI agents operating within budget to efficiently perform workloads. Pricing parameters will remain subject to community governance as the network evolves.
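A stylized sketch of these two mechanisms is below. The emission amount, capacity figures, and linear pricing curve are assumptions for illustration; the real schedules are protocol parameters subject to the governance mentioned above.

```python
def epoch_rewards(emission: float, capacity: dict[str, float]) -> dict[str, float]:
    """Bitcoin-like issuance: a fixed per-epoch emission is split in
    proportion to each participant's available computing capacity."""
    total = sum(capacity.values())
    return {node: emission * c / total for node, c in capacity.items()}

def dynamic_price(base_price: float, utilization: float) -> float:
    """Inference price responds to network load instead of being fixed by
    contract; the linear curve is a placeholder, not the actual function."""
    return base_price * (1.0 + utilization)  # utilization assumed in [0, 1]

# Issuance subsidizes providers, which is why early user-facing inference
# prices can sit below centralized rates:
print(epoch_rewards(1000.0, {"node-a": 10.0, "node-b": 30.0}))
# -> {'node-a': 250.0, 'node-b': 750.0}
```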

Third, reliability is enhanced at the protocol level. In centralized environments, reliability comes from authentication and service level agreements. In decentralized infrastructure, reliability is supported by open-source code, third-party audits, and on-chain measurable proofs of computational completion and network performance.

These elements collectively enable AI agents to request computing power and allocate budgets within a transparent framework. In this way, Gonka provides the infrastructure foundation for a self-regulating AI computing economy, allowing agents not only to perform tasks but also to dynamically optimize the resources they depend on.

Q: Regulatory uncertainty surrounding decentralized technologies is escalating. How does Gonka proactively address data sovereignty and AI governance compliance issues in a fragmented global market?

A: In the context of decentralized computing power, the main challenge lies in striking a balance between the openness of the network and the diverse and evolving jurisdictional requirements.

Gonka is a permissionless global network—anyone can join, and requests are programmatically routed among distributed participants. Currently, users have no deterministic control over the geographical location where their requests are processed. This may be a limitation for use cases with strict data residency or regional processing requirements.

However, from a privacy perspective, this architecture reduces data centralization. Each request is processed by randomly selected participants and routed independently, thus preventing the accumulation of complete user history. To date, this model has covered most real-world use cases while allowing for network scaling.
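The privacy property follows from per-request randomness, as in the toy sketch below; the node names are hypothetical.

```python
import random

def route(request_id: str, providers: list[str]) -> str:
    """Each request goes to an independently, randomly chosen provider, so
    no single participant accumulates a user's full request history."""
    return random.choice(providers)

providers = ["node-1", "node-2", "node-3", "node-4"]
# Consecutive requests from one session scatter across the network:
print([route(f"req-{i}", providers) for i in range(5)])
```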

As networks grow and market demands become more defined, governance mechanisms allow participants to propose and vote on architectural changes to support specific regulatory requirements. These changes may include: dedicated subnets with additional participation criteria, operational constraints specific to certain jurisdictions, or hardware-level safeguards for enterprise workloads, such as Trusted Execution Environments (TEEs).

Decentralization does not eliminate compliance obligations. It provides architectural flexibility. Gonka is designed to allow the network to evolve according to regulatory and market demands, rather than being locked into a single compliance model from the outset.


Author: Gonka

