Author | Gonka.ai
Foreword: Amidst the escalating global discussion on AI, industry focus often centers on model capabilities, technological breakthroughs, and regulatory frameworks. Beneath these discussions, however, a more fundamental question is emerging: who ultimately controls the computing infrastructure for AI? In a dialogue at the Unlockit Conference, Daniil and David Liberman, co-creators of the Gonka protocol and futurists, entrepreneurs, and investors, presented a core argument: artificial intelligence has never been a neutral technology; computing infrastructure determines who AI ultimately serves. In their view, the future of AI is not only a technological race but also a long-term struggle over control of the infrastructure.
The true foundation of AI: not models, but computing power
Centralized AI infrastructure only appears inevitable when people stop questioning its underlying assumptions.
For a long time, most discussions about artificial intelligence have focused on models, ethics, or regulation. But beneath these, there is a more decisive layer—computing power. Who owns the computing power, who controls access to it, and under what conditions it can be used—these ultimately determine how artificial intelligence operates and who it serves.
Once AI is viewed from this perspective, the current landscape becomes difficult to ignore. OECD research and other publicly available data indicate that advanced AI computing power is increasingly concentrated in the hands of a few cloud service providers and a limited number of countries. This creates a widening "computing power gap"—the disparity between those who have access to the infrastructure and those who do not.
This concentration is no accident. Access to advanced GPUs is currently controlled by a few providers and is increasingly influenced by national priorities. The result is expensive computing power, limited capacity, and uneven geographical distribution. And all of this is happening at a critical moment when AI is becoming the infrastructure of science, industry, and society.
At the same time, current decentralized systems have not automatically solved this problem. Many decentralized systems still consume a significant amount of computing power on consensus and security overhead, while incentive mechanisms often reward capital rather than actual computational contributions. This discourages hardware providers and slows down innovation at the infrastructure level.
It is here that our thinking begins to diverge. We are not starting from an ideological standpoint, nor are we choosing decentralization in opposition to centralized participants. We are starting from a more practical question: what would AI infrastructure look like if efficiency, access, and contribution could be aligned rather than conflicting?
This question ultimately leads us to a model where most computing power is used for genuine AI work, not for system overhead; participation and governance are determined by proven computational contributions, not capital; and access to global GPU resources is designed to be permissionless. In practice, these assumptions are also constantly being stress-tested through ongoing open discussion, including real-time collaboration with GPU operators, developers, and researchers—for example, in our Discord community.
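The contribution-based model described above can be sketched in a few lines of code. This is purely a toy illustration of the principle, not Gonka's actual protocol: all names and figures below are hypothetical. The idea is simply that governance weight is derived from verified computational work rather than from capital staked.

```python
# Toy illustration of contribution-weighted governance.
# Hypothetical names and numbers; not Gonka's actual mechanism.

def governance_weights(contributions: dict[str, float]) -> dict[str, float]:
    """Each participant's weight is its share of total verified compute."""
    total = sum(contributions.values())
    if total == 0:
        return {name: 0.0 for name in contributions}
    return {name: work / total for name, work in contributions.items()}

# Hypothetical verified GPU-hours per participant.
verified_work = {"operator_a": 1200.0, "operator_b": 300.0, "operator_c": 500.0}
weights = governance_weights(verified_work)
# operator_a holds 60% of the governance weight because it performed
# 60% of the verified work, regardless of how much capital it holds.
```

The design choice this illustrates: under a capital-weighted scheme, a participant could buy influence without contributing any compute; here, influence can only be earned by doing verifiable work.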
AI has never been just software. It has always been infrastructure. And the choice of infrastructure often locks society onto a development trajectory that lasts for decades. Placing such infrastructure under the control of a few companies or nations is not a neutral technological outcome, but a structural decision with long-term economic and geopolitical consequences. If intelligence itself is to become abundant, then the infrastructure that underpins it must be designed for abundance from the outset.
The true standard of success for decentralized AI
The difficulty lies in the fact that you are not arguing with a person, but with the "default assumptions".
The mainstream tech community tends to optimize what works in the short term: speed, capital efficiency, centralized control, and scale through consolidation. These choices may be reasonable in certain contexts, but once they become the defaults, they are rarely questioned. When you challenge these default assumptions, it feels like speaking another language—not because the ideas are extreme, but because they touch upon the incentive structures that have been established in many professions, companies, and strategies.
The timing is even more challenging. Centralized systems often appear very successful before their long-term costs become apparent. The massive investments and infrastructure spending are visible up front, but the deeper costs emerge later: increased dependence, loss of flexibility, pricing power concentrated in the hands of a few providers, and the inability to change course once the system is deeply embedded.
For us, success doesn't mean winning an argument or replacing existing players. Success is much quieter. Success is when decentralized infrastructure ceases to be a manifesto and becomes commonplace: when people use it not because they believe in decentralization, but because it's the most practical option.
Ultimately, true success comes when the entire discussion itself changes. When the question shifts from "Should intelligence be centralized?" to "Why did we ever think it had to be centralized?" At that point, beliefs no longer need to be directly challenged; they evolve naturally.
How do companies decide whether to pursue a centralized or decentralized path?
AI infrastructure is no longer just a technological layer; it is becoming a strategic dependency.
For businesses, centralized AI infrastructure creates an irreversible lock-in effect. Once critical systems become dependent on a few providers, control gradually shifts from users to infrastructure owners. Over time, this will impact pricing, accessibility, the speed of innovation, and the range of viable strategic options.
The challenge lies in strategic flexibility. Centralized infrastructure may function well in the early stages, but it often hardens into long-term dependence. Costs become increasingly difficult to control, alternatives become harder to adopt, and changing architectural decisions at scale becomes increasingly challenging.
The critical moment for decision-making often arrives earlier than most people realize. Infrastructure choices are often locked in before their consequences are even apparent. Once AI moves from the experimental stage to everyday infrastructure, the cost of changing the underlying architecture rises exponentially. Therefore, the real decision-making moment is not when centralized systems fail, but while they still appear to be functioning well. Exploring decentralized solutions early on preserves choice; waiting often means the choice has already been made.
If we've already become reliant on centralized infrastructure, is it too late?
It's rarely truly "too late," but the difficulty increases exponentially over time.
Once most systems are built on centralized AI infrastructure, the challenge will no longer be technical, but institutional. Workflows, incentive mechanisms, budgets, compliance requirements, and even talent development paths will gradually assume that centralization is "how things work." At that point, change will no longer be just about migrating infrastructure, but will require relearning the habits, contractual patterns, and ways of thinking that are already deeply ingrained in the organization.
Research on infrastructure lock-in reinforces this point. Industry analyses consistently show that switching costs rise sharply, rather than linearly, after several years of operation in centralized cloud environments. This increase stems from long-term contracts, regulatory frameworks, deeply integrated internal processes, and a highly specialized workforce. OECD research also indicates that countries and organizations that did not acquire AI computing power early on face accumulating disadvantages over time, losing not only competitiveness but also architectural freedom—the ability to truly choose alternative infrastructure models.
At the same time, history shows that infrastructure transformations rarely happen all at once. They usually begin at the periphery. New use cases, new players, and new constraints create pressure points where centralized systems begin to fall short—perhaps due to excessive cost, slow speed, too many restrictions, or excessive vulnerability. This is often where alternatives begin to become important.
Over time, what is truly eroded is choice. The longer centralized infrastructure dominates, the fewer real options remain. Dependencies gradually solidify, and decentralization shifts from an active design decision to a passive correction, which is always more expensive, more complex, and more difficult to control.
Therefore, the real risk is not that it's too late. The real risk is waiting until decentralization is no longer an option, but a necessary measure forced by systemic failure. The earlier we explore, even if only in parallel with centralized solutions, the more room we have to proactively shape the outcome, rather than being forced to change under pressure.
For the next generation, AI architecture will determine the allocation of opportunities
The next generation needs to understand that technology does not become neutral just because it becomes more advanced.
Each generation inherits the infrastructure choices made by the previous one, often without realizing that these choices were deliberate decisions, not inevitable outcomes. For future generations, AI will be as natural as electricity or the internet is today. This is why the underlying architecture is so important—it determines not only what is possible, but also for whom it is possible.
The next generation needs to know that access to intelligence can be organized in fundamentally different ways. It can be seen as a shared foundation: open, abundant, and difficult to monopolize. Or it can be fenced off, priced, and controlled, even if it appears convenient and efficient on the surface. Both paths can produce impressive technologies, but only one can preserve long-term freedom, resilience, and genuine choice.
They should also understand that centralization often arrives quietly, not through coercion, but through convenience. The initial trade-offs often seem small: slightly lower costs, faster deployment, and simpler coordination. But the consequences will become apparent later—when changing course becomes expensive or even nearly impossible.
Equally important is recognizing that infrastructure directly impacts social mobility. Systems that appear technology-neutral can narrow starting-point inequalities between individuals and across generations, or they can quietly lock those inequalities in for decades. As you may know, this is a topic of great concern to us. Younger generations already face greater disadvantages than previous generations did at their age. Current implementations of AI do not address this issue and may even exacerbate it. In this sense, architectural choices determine not only efficiency but also who truly has the opportunity to experiment, build, and shape the future.
Most importantly, future generations need to understand that these systems are still designed by humans. They are not determined by fate, by the "market," or by the machines themselves. Questioning default assumptions, asking who benefits from a particular architecture, and insisting on retaining choice are not resistance to progress. This is precisely how we keep progress open.
Why did we decide to share these stories at Unlockit?
Unlockit seems to be a space for discussion where the conversation doesn't revolve around hype, releases, or predictions, but rather around why people make certain choices. This is important to us. Our story isn't really about a particular project or technology, but about identifying structural patterns early on and deciding not to treat them as inevitable.
For years, we've operated within the mainstream system: building companies, investing, partnering with large organizations, and benefiting from centralized infrastructure. We understand how these systems work from the inside out. At some point, we realized that simply repeating the same structure and hoping for different results usually doesn't produce anything truly new. Rather than remaining silent or packaging it as just another success story, we decided to share this realization publicly.
At the same time, we're here at Unlockit not only to reflect, but also to share practical experiences that are relevant to the diverse groups present. For entrepreneurs, these issues involve infrastructure control, dependence on providers, and the ability to scale without sacrificing flexibility. For investors, they involve long-term risk, infrastructure lock-in, and which models truly create lasting value. For businesses and technology leaders, it's about cost structures, reliability, regulatory constraints, and strategic freedom in a rapidly changing environment.
We want to share an alternative path that's already in practice, not as a universal answer but as a different way of thinking: how to build AI infrastructure that is less dependent on any single provider, more transparent, and more flexible over the long term. Equally important, we want to hear feedback from those making real decisions at the business, capital, and institutional levels.
We also believe that these discussions shouldn't be limited to insiders. Once infrastructure decisions are no longer openly discussed, they quietly solidify into default choices. Unlockit provides a space for reflection before these choices become irreversible, which makes participating in this dialogue meaningful.
Ultimately, participating in Unlockit isn't about explaining what we're doing, but about demonstrating why questioning default assumptions remains important, especially in an era where technological progress seems rapid, powerful, and inevitable. It's also about listening to the perspectives of those shaping the future of business, technology, and social systems.

