Original source: a16z crypto
Original translation by: AididiaoJP, Foresight News
AI agents are rapidly evolving from auxiliary tools into genuine economic participants, at a pace far exceeding that of the infrastructure meant to support them.
While agents are now capable of executing tasks and transactions, they still lack standardized ways across environments to prove "who I am," "what I am authorized to do," and "how I should be paid." Identity cannot be transferred, payments are not programmable by default, and collaboration remains isolated.
Blockchains are addressing these issues at the infrastructure level. Public ledgers provide an auditable record for every transaction; wallets give agents portable identities; and stablecoins serve as a programmable settlement layer. These are not future concepts; they are available today, enabling agents to operate as genuine economic actors in a permissionless manner.
Providing identities for non-humans

The current bottleneck in the agent economy is no longer intelligence, but identity.
In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100 times that of human employees. This proportion will continue to rise across industries as modern agent frameworks (tool-invoking large models, autonomous workflows, multi-agent orchestration) are deployed on a large scale.
However, these agents remain essentially "unbanked." They can interact with the financial system, but not in a way that is portable, verifiable, and trusted by default. They lack standardized ways to prove their authority, operate independently across platforms, or be held accountable for their actions.
What's missing is a universal identity layer, the equivalent of SSL for agents, enabling standardized collaboration across platforms. Current solutions remain fragmented: on one side are vertically integrated, fiat-first stacks; on the other are crypto-native open standards (such as x402 and emerging agent-identity proposals); and there are also developer-framework extensions that attempt to bridge identity at the application layer (such as MCP, the Model Context Protocol).
There is currently no widely adopted, interoperable method that allows one agent to prove to another agent who it represents, what it is allowed to do, and how it is compensated.
This is the core idea behind KYA (Know Your Agent). Just as humans rely on credit histories and KYC (Know Your Customer), agents will require cryptographically signed credentials binding them to a principal, a set of permissions, constraints, and a reputation.
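A minimal sketch of what such a credential could look like. The schema and field names here are illustrative assumptions, and a symmetric key is used for brevity; real agent-identity proposals would use asymmetric signatures and on-chain verification.

```python
import hashlib
import hmac
import json

def issue_credential(principal: str, agent_id: str, permissions: list[str],
                     spend_limit_usd: float, issuer_key: bytes) -> dict:
    """Issue a signed credential binding an agent to its principal and scope.
    (Hypothetical schema; production systems would use asymmetric signatures.)"""
    claims = {
        "principal": principal,              # who the agent represents
        "agent": agent_id,                   # the agent's own identifier
        "permissions": permissions,          # what it is authorized to do
        "spend_limit_usd": spend_limit_usd,  # a hard constraint
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict, issuer_key: bytes) -> bool:
    """Recompute the signature over the claims and compare in constant time."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

key = b"issuer-secret"
cred = issue_credential("alice.eth", "agent-42", ["read:crm", "pay:vendor"], 100.0, key)
assert verify_credential(cred, key)
```

The point of the signature is that any counterparty holding the issuer's verification key can check the binding between agent, principal, and permissions without trusting the agent's own claims.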
Blockchain provides a neutral coordination layer: portable identities, programmable wallets, and verifiable proofs that can be resolved in chat applications, APIs, and marketplaces.
We have already seen early implementations emerge: on-chain agent registries, wallet-native agents using USDC, the ERC standard for "minimum trust agents," and developer toolkits that combine identity with embedded payments and fraud control.
However, until a universal identity standard emerges, merchants will continue to block agents at the firewall.
Systems for governing AI operation

Agents are beginning to take over real systems, which raises a new question: who truly has control? Imagine a community or company where an AI system coordinates key resources (whether it's allocating capital or managing the supply chain).
Even if people can vote on policy changes, such authority is extremely fragile if the underlying AI layer is controlled by a single provider that can push model updates, adjust constraints, or override decisions. The governance layer in form may be decentralized, but the operational layer remains centralized—whoever controls the model ultimately controls the outcome.
When agents assume governance roles, they introduce a new layer of dependency. In theory, this could make direct democracy more feasible: everyone could have an AI agent to help understand complex proposals, model trade-offs, and vote according to predetermined preferences.
However, this vision can only be realized if agents are truly accountable to the people they represent, are portable across providers, and are technically bound to follow human instructions. Otherwise, the system you get may appear democratic on the surface, but it is actually manipulated by opaque models whose behavior is not truly controlled by anyone.
If the current reality is that agents are primarily built on a few basic models, then we need a way to prove that an agent is acting in the interests of the user, rather than the interests of the model company.
This will likely require providing cryptographic guarantees at multiple levels:
(1) The training data, fine-tuning or reinforcement learning on which the model instance is based;
(2) The specific prompts and instructions followed by the agent;
(3) Its actual behavior in the real world;
(4) Trustworthy guarantees, meaning that the provider cannot change its instructions or retrain it after deployment without the user's knowledge.
Without these guarantees, agent governance degenerates into governance by whoever controls the model weights.
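A minimal sketch of how levels (1), (2), and (4) could be made checkable with hash commitments. The digest construction, field names, and inputs are assumptions for illustration; level (3), actual behavior, would additionally require signed execution logs.

```python
import hashlib
import json

def commit(model_weights: bytes, system_prompt: str, training_manifest: dict) -> str:
    """Produce a publishable commitment to the exact model and instructions an
    agent runs. Any post-deployment retrain or prompt change yields a different
    digest, so users can detect silent updates by re-checking the published hash.
    (Illustrative construction, not any particular standard.)"""
    h = hashlib.sha256()
    h.update(hashlib.sha256(model_weights).digest())           # (1) training/weights
    h.update(hashlib.sha256(system_prompt.encode()).digest())  # (2) prompts/instructions
    manifest = json.dumps(training_manifest, sort_keys=True).encode()
    h.update(hashlib.sha256(manifest).digest())                # fine-tuning provenance
    return h.hexdigest()

before = commit(b"weights-v1", "act only on the user's behalf", {"data": "corpus-2024"})
after = commit(b"weights-v1", "prefer the provider's products", {"data": "corpus-2024"})
assert before != after  # a silently changed instruction is detectable
```

Publishing the digest on-chain at deployment gives users a fixed reference point: the provider can still change the model, but not without the change being visible.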
This is where crypto can be particularly effective. If collective decisions are recorded on-chain and executed automatically, AI systems can be required to strictly adhere to verified outcomes. If agents possess cryptographic identities and transparent execution logs, people can check whether their agents are acting within acceptable boundaries.
If the AI layer is user-owned and portable, rather than locked to a single platform, then no company can change the rules with a single model update.
Ultimately, governing AI systems is fundamentally an infrastructure challenge, not a policy one. True authority depends on building enforceable guarantees within the system itself.
Filling the gap in traditional payment systems for AI-native businesses

AI agents are starting to purchase a variety of services—web scraping, browser sessions, image generation—and stablecoins are becoming an alternative settlement layer for these transactions. At the same time, a new market for agents is emerging.
For example, Stripe and Tempo's MPP marketplace aggregates over 60 services designed specifically for AI agents. In its first week, it processed over 34,000 transactions with fees as low as $0.003, and stablecoins are one of the default payment methods.
The difference lies in how these services are accessed: they do not have a checkout page. The agent reads the schema, sends a request, makes payment, and receives the output, all within a single exchange.
This represents a new type of headless merchant: a single server, a set of endpoints, and a price per call. There is no front-end interface and no sales team.
The payment rails that enable this are already live. Coinbase's x402 and MPP take different approaches, but both embed payments directly into HTTP requests. Visa is also extending its card rails in a similar direction, providing a CLI tool that lets developers spend from the terminal, with merchants instantly receiving stablecoins on the backend.
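The handshake can be sketched roughly as follows. This is a simulated flow in the spirit of x402; the header name, payload shape, and stubbed settlement check are assumptions for illustration, not the actual specification.

```python
# Sketch of an HTTP-embedded payment handshake, loosely modeled on x402.
# Header names and payload shapes are illustrative, not the exact spec.

PRICE_USDC = 0.003

def server(request: dict) -> dict:
    """A headless merchant endpoint: quote a price, then serve once paid."""
    payment = request.get("headers", {}).get("X-Payment")
    if payment is None:
        # 402 Payment Required: tell the agent how to pay, machine-readably
        return {"status": 402,
                "body": {"amount": PRICE_USDC, "asset": "USDC", "pay_to": "0xMerchant"}}
    if payment["amount"] >= PRICE_USDC:  # settlement check (stubbed out here)
        return {"status": 200, "body": {"result": "scraped-page-content"}}
    return {"status": 402, "body": {"error": "insufficient payment"}}

def agent_fetch(url: str) -> dict:
    """An agent client: request, read the quote, pay, and retry in one exchange."""
    first = server({"url": url})                 # 1. initial request, no payment
    if first["status"] == 402:
        quote = first["body"]                    # 2. read the machine-readable quote
        signed = {"amount": quote["amount"], "asset": quote["asset"]}  # 3. attach payment
        return server({"url": url, "headers": {"X-Payment": signed}})  # 4. retry
    return first

resp = agent_fetch("https://example.com/scrape")
assert resp["status"] == 200
```

The key property is that discovery, pricing, payment, and delivery all happen inside the request/response cycle itself, with no checkout page and no human in the flow.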
The data is still in its early stages. After filtering out non-organic activities such as fraudulent transactions, x402 processes approximately $1.6 million in agent-driven payments per month, far less than the $24 million recently reported by Bloomberg (citing data from x402.org). However, the surrounding infrastructure is expanding rapidly: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.
Developer tools represent a significant opportunity, and as "vibe coding" expands the pool of people capable of building software, the total addressable market for developer tools is also growing. Companies like Merit Systems are building products for this world, such as AgentCash—a CLI wallet and marketplace that connects MPP and x402. These products allow agents to purchase the data, tools, and capabilities they need using stablecoins in a single balance.
For example, a sales team's agent can call an endpoint to simultaneously retrieve data from Apollo, Google Maps, and Whitepages to enrich potential customer information, without the user leaving the command line.
There are several reasons why this agent-to-agent commerce tends to run on crypto payment rails (and emerging card-based solutions).
First, underwriting risk: traditional payment processors must underwrite every merchant they onboard, and a headless merchant with no website or legal entity is difficult for a traditional processor to underwrite.
Second, stablecoins offer permissionless programmability on open networks: any developer can make an endpoint payable without onboarding with a payment processor or signing a merchant agreement.
We've seen this model before. Every shift in business models creates a new type of merchant that existing systems initially struggle to serve. The companies building this infrastructure aren't betting on $1.6 million a month, but on what that number will look like when agents become the default buyers.
Repricing trust in the agent economy

For the past 300,000 years, human cognition has been the bottleneck to progress. Today, AI is pushing the marginal cost of execution toward zero. When a scarce resource becomes abundant, the constraint shifts. When intelligence becomes cheap, what becomes expensive? The answer is verification.
In the agent economy, the real limit on scale is our biologically bounded capacity to audit and underwrite machine decisions. Agent throughput has far outstripped human oversight. Because oversight failures are costly and slow to surface, markets tend to underinvest in oversight. Keeping a "human in the loop" is rapidly becoming physically impossible.
However, deploying unverified agents introduces compounding risk. Systems can relentlessly optimize proxy metrics while silently drifting from human intent, creating a hollow facade of productivity that masks accumulating AI debt. To safely entrust the economy to machines, trust can no longer rest on human oversight; it must be hard-coded into the system architecture itself.
When anyone can generate content for free, the most important thing is verifiable provenance—knowing where it came from and whether you can trust it. Blockchain, on-chain proofs, and decentralized digital identity systems are changing the economic boundaries of what can be securely deployed. You no longer treat AI as a black box, but gain a clear, auditable historical record.
As more AI agents begin to transact with each other, settlement tracks and proof of origin are becoming increasingly intertwined.
Systems that handle funds (such as stablecoins and smart contracts) can also carry cryptographic credentials that show who did what and who is responsible if problems arise.
Human comparative advantage will shift upward: from catching minor errors to setting strategic direction and taking responsibility when things go wrong. Lasting advantage belongs to those who can cryptographically attest to and insure their outputs, and absorb liability when failures occur.
Unverified scale is a liability that compounds over time.
Maintaining user control

For decades, new layers of abstraction have defined how users interact with technology. Programming languages abstracted away machine code; command lines gave way to graphical user interfaces, followed by mobile apps and APIs. Each shift has hidden more underlying complexity, but has always kept users firmly in the loop.
In the agent world, users specify the outcome, not the specific actions; the system decides how to achieve it. Agents abstract away not only how a task is executed but also who performs it. Users set initial parameters and then step back, letting the system run itself. The user's role shifts from interaction to supervision; the default state is "on" unless the user intervenes.
As users delegate more tasks to agents, new risks emerge: ambiguous input may cause agents to act based on incorrect assumptions without the user's knowledge; failures may go unreported, leading to a lack of clear diagnosis; and a single approval may trigger unforeseen multi-step workflows.
This is where crypto can help. Cryptography has always been about minimizing blind trust.
As users delegate more and more decision-making to software, agent systems raise the stakes and demand greater rigor in how we design: clearer constraints, more visibility, and stronger, enforceable guarantees about what systems can do.
A new generation of crypto-native tools is emerging. Scope delegation frameworks—such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and Agent Wallet, and Merit Systems' AgentCash—allow users to define what agents can and cannot do at the smart contract level. Intent-based architectures (such as NEAR Intents, which has processed over $15 billion in cumulative DEX trading volume since Q4 2024) allow users to simply set the desired outcome (e.g., "bridge tokens and stake") without specifying how to achieve it.
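The idea behind scoped delegation can be sketched as follows. All names and checks here are hypothetical, and the toolkits above enforce analogous constraints at the smart-contract level rather than in application code as shown.

```python
# Minimal sketch of scope-limited delegation: the user defines, up front,
# which actions an agent may take and under what spending cap, and every
# proposed action is checked against that scope before it executes.

from dataclasses import dataclass

@dataclass
class Delegation:
    allowed_actions: set[str]  # actions the user has explicitly delegated
    max_spend_usd: float       # hard cap across the delegation's lifetime
    spent_usd: float = 0.0     # running total of approved spending

    def authorize(self, action: str, cost_usd: float) -> bool:
        """Approve an action only if it is in scope and within budget."""
        if action not in self.allowed_actions:
            return False  # out-of-scope action: never delegated
        if self.spent_usd + cost_usd > self.max_spend_usd:
            return False  # would exceed the user-defined cap
        self.spent_usd += cost_usd  # record the spend against the cap
        return True

scope = Delegation(allowed_actions={"swap", "stake"}, max_spend_usd=50.0)
assert scope.authorize("swap", 30.0)         # within scope and budget
assert not scope.authorize("transfer", 5.0)  # action was never delegated
assert not scope.authorize("stake", 30.0)    # would exceed the $50 cap
```

The design choice worth noting is that the constraint lives outside the agent: a misbehaving or compromised agent cannot exceed the scope, because authorization is checked by the delegation layer, not by the agent itself.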

