Author: Shayon Sengupta
Compiled by: Deep Tide TechFlow
Deep Dive: Shayon Sengupta, a partner at Multicoin Capital, has put forward a disruptive view: in the future, it will not only be agents working for humans, but more importantly, humans working for agents. He predicts that the first "Zero-Employee Company" will emerge in the next 24 months—a token-governed agent will raise over $1 billion to solve unsolved problems and distribute over $100 million to the humans working for it.
In the short term, agents' demand for humans will exceed humans' demand for agents, which will create a new type of labor market.
Crypto rails provide an ideal foundation for this coordination: global payment rails, a permissionless labor market, and infrastructure for asset issuance and trading.
The full text is as follows:
In 1997, IBM's Deep Blue defeated then-world champion Garry Kasparov, making it clear that chess engines would soon surpass human capability. Interestingly, well-prepared human-computer pairs, an arrangement often referred to as a "centaur", could still outperform the most powerful engines of that era.
Skilled human intuition could guide the engine's search, navigate complex middlegames, and spot subtleties that standard engines missed. Combined with the computer's brute-force calculation, this pairing often produced better practical decisions than a computer alone.
When I consider the impact of AI systems on the labor market and the economy in the coming years, I anticipate a similar pattern emerging. Agent systems will unleash countless units of intelligence on the world's unsolved problems, but they cannot do so without strong human guidance and support. Humans will shape the search space and help ask the right questions, steering the AI toward answers.
Today's working assumption is that agents will act on behalf of humans. While this is practical and unavoidable, a more interesting economic unlocking occurs when humans work for agents. In the next 24 months, I anticipate seeing the first Zero-Employee Company, a concept my partner Kyle outlined in his "Frontier Ideas for 2025" section. Specifically, I expect the following to happen:
- A token-governed agent will raise over $1 billion to address an unsolved problem, such as curing rare diseases or manufacturing nanofibers for defense applications.
- The agent will distribute over $100 million in payments to humans who work for the agent in the real world to achieve the agent's goals.
- A new dual-class token structure will emerge that separates ownership by capital and by labor (so that financial incentives are not the only input into overall governance).
Because agents are still far from achieving both sovereignty and the ability to handle long-horizon planning and execution, in the short term agents will need humans more than humans need agents. This will create a new type of labor market that coordinates economic activity between agent systems and humans.
Marc Andreessen's famous quote, "The spread of computers and the internet will divide work into two categories: people who tell computers what to do, and people who are told what to do by computers," is more true today than ever before. I anticipate that in the rapidly evolving agent/human hierarchy, humans will play two distinct roles—labor contributors who perform small, bounty-based tasks on behalf of agents, and decentralized boards that provide strategic input to serve the agent's North Star.
This article explores how agents and humans will co-create, and how crypto rails can provide an ideal foundation for that coordination, by examining three guiding questions:
- What are agents used for? How should we categorize agents by the scope of their goals, and how does the required degree of human input vary across these categories?
- How will humans interact with agents? How will human input—tactical guidance, situational judgment, or ideological alignment—be integrated into the workflows of these agents (and vice versa)?
- What happens as human input decreases over time? As agents become more capable, they become self-sufficient, able to reason and act independently. What role will humans play in this paradigm?
The relationship between generative reasoning systems and those who benefit from them will change dramatically over time. I examine this relationship by looking forward from the current state of agent capabilities and backward from the endgame of zero-employee companies.
What are the uses of agents today?
The first generation of generative AI systems (the 2022-2024 era) consisted of chatbot-style LLM products such as ChatGPT, Gemini, Claude, and Perplexity, aimed primarily at augmenting human workflows. Users interacted with these systems through prompts, parsed the responses, and then decided, using their own judgment, how to carry the results into the world.
The next generation of generative AI systems, or "agents," represents a new paradigm. Agents like Claude 3.5 Sonnet with "computer use" capabilities and OpenAI's Operator (i.e., agents that can use your computer) can interact directly with the internet on behalf of the user and make decisions on their own. The key difference is that judgment, and ultimately action, is exercised by the AI system rather than a human. AI is taking on responsibilities previously reserved for humans.
This shift presents a challenge: a lack of determinism. Unlike traditional software systems or industrial automation, which operate predictably within defined parameters, agents rely on probabilistic reasoning. This makes their behavior less consistent across identical scenarios and introduces an element of uncertainty, which is not ideal in critical settings.
In other words, the split between deterministic and non-deterministic behavior naturally suggests two categories of agents: those best suited to expanding existing GDP, and those better suited to creating new GDP.
- For agents best suited to expanding existing GDP, the work is, by definition, already known. Automating customer support, handling freight-forwarding compliance, or reviewing GitHub PRs are examples of well-defined, bounded problems where the agent can map its responses directly to a set of expected outcomes. In these domains, non-determinism is generally undesirable because the answers are known; creativity is unnecessary.
- For agents best suited to creating new GDP, the job is to navigate a set of highly uncertain and unknown problems in pursuit of long-term goals. The outcomes here are less clear-cut because the agent inherently lacks a set of expected results to map to. Examples include drug discovery for rare diseases, breakthroughs in materials science, or running entirely new physics experiments to better understand the nature of the universe. In these areas, non-determinism can be helpful, because it is a form of generative creativity.
Agents focused on existing-GDP applications are already unlocking value. Teams like Tasker, Lindy, and Anon are building infrastructure for this opportunity. Over time, however, as capabilities mature and governance models evolve, teams will shift their focus to building agents capable of tackling problems at the frontier of human knowledge and economic opportunity.
The next batch of agents will require exponentially more resources precisely because their outcomes are uncertain and boundless—these are the zero-employee companies I anticipate will be the most compelling.
How will humans interact with agents?
Today's agents still lack the ability to perform certain tasks, such as those that require physical interaction with the real world (e.g., driving a bulldozer) or those that require "human-in-the-loop" intervention (e.g., sending a bank wire transfer).
For example, an agent assigned to identify and mine lithium may excel at processing seismic data, satellite imagery, and geological records to find potential mining sites, but it may struggle when trying to acquire the data and images themselves, resolve ambiguities in their interpretation, or obtain licenses and contracted labor to carry out the actual mining process.
These limitations necessitate humans acting as "enablers" to enhance agent capabilities, providing the real-world touchpoints, tactical interventions, and strategic inputs required to accomplish the aforementioned tasks. As the relationship between humans and agents evolves, we can distinguish the different roles humans play within agent systems:
First are the labor contributors, who operate on behalf of the agent in the real world. These contributors help the agent move physical objects, represent it in situations requiring human intervention, perform tasks that require manual or physical coordination, or grant access to experimental laboratories, logistics networks, and so on.
Second is the board of directors, which is responsible for providing strategic input, tuning the local objective functions that drive the agent's day-to-day decisions, and ensuring that those decisions stay aligned with the "North Star" objective that defines the agent's purpose.
In addition to these two, I also foresee humans playing the role of capital contributors, providing the resources agent systems need to achieve their goals. This capital will naturally come from humans at first, and over time from other agents as well.
As agents mature and the number of labor and guidance contributors grows, crypto rails provide an ideal platform for coordination between humans and agents, especially in a world where agents direct humans who speak different languages, are paid in different currencies, and live in different jurisdictions around the world. Agents will relentlessly pursue cost efficiency and tap global labor markets to achieve their assigned missions. Crypto rails are essential here, giving agents a means to coordinate these labor and guidance contributors.
Recent crypto-driven AI agents such as Freysa, Zerebro, and ai16z represent simple experiments in capital formation, a topic we've written about extensively and view as a core unlock of crypto primitives and capital markets across many contexts. These "toys" pave the way for an emerging model of resource coordination, which I anticipate will unfold in the following steps (a rough sketch of this flow in code follows the list):
- Step 1: Humans collectively raise capital through tokens (Initial Agent Offering?), establish broad objective functions and guardrails to inform the agent system of its intended purpose, and then allocate control of the raised capital to the system (e.g., developing new molecules for precision oncology).
- Step 2: The agent works out how to allocate the capital (how to narrow the protein-folding search space, how to budget for inference workloads, manufacturing, clinical trials, etc.) and defines the actions that human labor contributors will perform on its behalf through custom tasks (bounties), e.g., compiling the set of relevant molecules, signing a compute service-level agreement with AWS, and conducting wet-lab experiments.
- Step 3: When the agent encounters obstacles or disagreements, it seeks strategic input from the "board" as needed (incorporating new papers, changing research methods), allowing the board to guide its behavior at the margins.
- Step 4: Finally, the agent progresses to a stage where it can define human actions with increasingly high precision and requires very little input on how resources are allocated. At this point, humans are needed only to keep the system ideologically aligned and to prevent its behavior from drifting away from the initial objective function.
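To make the flow above concrete, here is a minimal, purely illustrative sketch in TypeScript of the capital, bounty, and board-escalation loop. Everything in it (the AgentTreasury class, its methods, the dollar figures, and the example tasks) is a hypothetical assumption introduced for illustration; a real system would live on-chain with proper escrow, identity, and verification primitives.

```typescript
// Minimal conceptual sketch of the capital -> bounty -> board-escalation loop
// described above. All names, methods, and figures are hypothetical; a real
// system would live on-chain with proper escrow, identity, and verification.

type BountyStatus = "open" | "submitted" | "paid";

interface Bounty {
  id: number;
  description: string; // the action the agent wants a human to perform
  rewardUsd: number;   // payout released on verified completion
  status: BountyStatus;
  worker?: string;     // labor contributor who claimed the task
}

interface BoardRequest {
  question: string;    // strategic ambiguity the agent cannot resolve alone
  resolution?: string; // guidance returned by the token-holder board
}

class AgentTreasury {
  private bounties: Bounty[] = [];
  private boardQueue: BoardRequest[] = [];
  private nextId = 1;

  constructor(
    public objective: string,   // the "North Star" set at the raise (Step 1)
    private capitalUsd: number, // funds raised through the token sale
  ) {}

  // Step 2: the agent decomposes its plan into human-executable tasks.
  postBounty(description: string, rewardUsd: number): Bounty {
    if (rewardUsd > this.capitalUsd) throw new Error("insufficient capital");
    const bounty: Bounty = { id: this.nextId++, description, rewardUsd, status: "open" };
    this.bounties.push(bounty);
    return bounty;
  }

  // A labor contributor claims the task and submits proof of completion.
  submitWork(bountyId: number, worker: string, proof: string): void {
    const b = this.bounties.find((x) => x.id === bountyId && x.status === "open");
    if (!b) throw new Error("bounty is not open");
    b.worker = worker;
    b.status = "submitted";
    console.log(`proof received for bounty ${bountyId}: ${proof}`);
  }

  // Off-chain verification plugs in here; on success, funds are released.
  verifyAndPay(bountyId: number, verified: boolean): void {
    const b = this.bounties.find((x) => x.id === bountyId && x.status === "submitted");
    if (!b) throw new Error("nothing to verify");
    if (verified) {
      this.capitalUsd -= b.rewardUsd;
      b.status = "paid";
      console.log(`released ${b.rewardUsd} USD to ${b.worker}`);
    } else {
      b.status = "open"; // reopen the task for another contributor
    }
  }

  // Step 3: ambiguities the agent cannot resolve are escalated to the board.
  escalateToBoard(question: string): BoardRequest {
    const request: BoardRequest = { question };
    this.boardQueue.push(request);
    return request;
  }
}

// Usage: an oncology-focused agent posts a wet-lab bounty, pays it out once
// verified, and escalates a methodology question to its board.
const agent = new AgentTreasury("develop new molecules for precision oncology", 1_000_000_000);
const task = agent.postBounty("run binding-affinity assays for candidate set A", 25_000);
agent.submitWork(task.id, "lab-contributor-0x42", "ipfs://results-hash");
agent.verifyAndPay(task.id, true);
agent.escalateToBoard("should we incorporate the new protein-screening paper into the pipeline?");
```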
In this example, crypto primitives and capital markets provide the agent with three key infrastructures for acquiring resources and scaling capabilities:
First, global payment rails;
Second, a permissionless labor market to incentivize labor and guidance contributors;
Third, asset issuance and trading infrastructure, which is essential for capital formation and for downstream ownership and governance.
What happens when human input decreases?
In the early 2000s, chess engines made tremendous progress. Through advanced heuristics, neural networks, and ever-increasing compute, they became nearly flawless. Modern engines such as Stockfish, Lc0, and variants of AlphaZero have far surpassed human capability; human input rarely adds value, and in most cases humans introduce errors that the engine itself would not make.
A similar trajectory may unfold in agent systems. As we refine these agents through repeated iterations with human collaborators, it is conceivable that, in the long run, they will become so competent and so tightly aligned with their goals that strategic human input no longer adds value.
In a world where agents can continuously handle complex problems without human intervention, humanity risks being relegated to the role of "passive observers." This is the core fear of AI doomers (however, it remains unclear whether such an outcome is actually possible).
We stand on the edge of superintelligence, and the optimists among us prefer that agent systems remain extensions of human intentions rather than entities that evolve their own goals or operate autonomously without oversight. In practice, this means that human identity and judgment (power and influence) must remain at the center of these systems. Humans need strong ownership and governance over these systems to ensure the preservation of oversight and to anchor these systems in collective human values.
Preparing the "picks and shovels" for our agentic future
Technological breakthroughs produce non-linear jumps in economic progress, and the surrounding systems often break before the world can adjust. Agent systems are rapidly gaining capability, and crypto primitives and capital markets offer a much-needed coordination layer, both for building these systems and for putting safeguards in place as they integrate into society.
To enable humans to provide tactical support and proactive guidance to agent systems, we anticipate the following "picks-and-shovels" opportunities:
- Proof-of-Agenthood + Proof-of-Personhood: Agents lack a native concept of identity or property rights. Acting on behalf of humans, they rely on human legal and social structures for authority. To bridge this gap, we need robust identity systems for both agents and humans. A digital credential registry would let agents build reputation, accumulate credentials, and interact transparently with humans and other agents. Similarly, proof-of-personhood primitives like Humancode and Humanity Protocol provide strong guarantees of human identity against malicious actors in these systems.
- Labor market and off-chain verification primitives: Agents need to know whether the tasks they assign are completed according to their objectives. Tools that allow agent systems to create task bounties, verify completion, and distribute rewards are the cornerstone of any meaningful economic activity mediated by agents.
- Capital Formation and Governance Systems: Agents need capital to solve problems, and they need checks and balances to ensure their behavior conforms to defined objective functions. New structures for raising capital for agent systems, along with novel forms of ownership and control that combine financial stake and labor contribution, will be a rich area of exploration in the coming months (a rough sketch of one such structure follows this list).
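As one illustration of what such a structure could look like, here is a minimal sketch, again in TypeScript, of a dual-class governance scheme in which capital tokens and labor tokens each carry their own vote. The class names, balances, and passing rule are assumptions made for illustration, not an existing token standard or any team's actual design.

```typescript
// Hypothetical sketch of a dual-class token structure: one share class for
// capital contributors and one for labor contributors, so that financial
// stake is not the only input into governance. Class names, balances, and
// the passing rule are illustrative assumptions, not a real token standard.

type ShareClass = "capital" | "labor";

interface Holder {
  address: string;
  shareClass: ShareClass;
  balance: number; // capital tokens bought at the raise, labor tokens earned via bounties
}

interface Proposal {
  description: string;
  // Votes are tallied per class so each class carries its own majority.
  tally: Record<ShareClass, { for: number; against: number }>;
}

class DualClassGovernor {
  constructor(private holders: Holder[]) {}

  createProposal(description: string): Proposal {
    return {
      description,
      tally: {
        capital: { for: 0, against: 0 },
        labor: { for: 0, against: 0 },
      },
    };
  }

  vote(proposal: Proposal, address: string, support: boolean): void {
    const holder = this.holders.find((h) => h.address === address);
    if (!holder) throw new Error("unknown holder");
    const bucket = proposal.tally[holder.shareClass];
    if (support) bucket.for += holder.balance;
    else bucket.against += holder.balance;
  }

  // Illustrative rule: a proposal needs a majority within the capital class
  // AND within the labor class, so capital alone cannot redirect the agent
  // away from the contributors actually doing the work (and vice versa).
  passes(proposal: Proposal): boolean {
    const { capital, labor } = proposal.tally;
    return capital.for > capital.against && labor.for > labor.against;
  }
}

// Usage: a capital holder and a labor contributor vote on a research pivot.
const governor = new DualClassGovernor([
  { address: "0xCapitalLP", shareClass: "capital", balance: 500_000 },
  { address: "0xWetLabWorker", shareClass: "labor", balance: 1_200 },
]);
const proposal = governor.createProposal("shift screening budget to the new assay pipeline");
governor.vote(proposal, "0xCapitalLP", true);
governor.vote(proposal, "0xWetLabWorker", true);
console.log(governor.passes(proposal)); // true: both classes approve
```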
We are actively seeking and investing in these key layers of the human-agent collaboration stack. If you are deeply involved in this field, please contact us.
