Author: Lao Bai
Two years later, V2X relaunched on Twitter. I'll reiterate what I said in my research report from two years ago; even the date is exactly the same: February 10th. (Related reading: ABCDE: Analyzing AI+Crypto from a Primary Market Perspective)
Two years ago, Vitalik Buterin had already subtly expressed his skepticism about the then-popular Crypto Helps AI initiatives. At that time, the industry's three main drivers were the assetization of computing power, data, and models. My research report from two years ago mainly discussed the phenomena and doubts I observed in the primary market around these three drivers. Vitalik, for his part, still favored AI Helps Crypto.
The examples he gave at the time were:
- AI as a participant in the game;
- AI as the game interface;
- AI as the rules of the game;
- AI as the objective of the game.
Over the past two years, we have made many attempts at Crypto Helps AI, with little success. Many tracks and projects simply issue a token and stop there, with no real product-market fit (PMF). I call this the "tokenization illusion".
1. Computing power assetization - Most projects cannot provide commercial-grade SLAs; they are unstable and frequently disconnect. They can only handle simple to small-and-medium-sized model inference tasks, mostly serve peripheral markets, and their revenue is not linked to their tokens...
2. Data assetization - On the supply side (individual users), there is significant friction, low willingness, and high uncertainty. On the demand side (enterprises), what is needed is structured, context-dependent, professional data from trustworthy, legally accountable providers, which DAO-based Web3 projects struggle to be.
3. Model assetization - A model is inherently a non-scarce, replicable, fine-tunable, and rapidly depreciating process asset rather than a final-state asset. Hugging Face is a collaboration and distribution platform, more like a GitHub for ML than an App Store for models. Attempts to tokenize models as a so-called "decentralized Hugging Face" have therefore almost all ended in failure.
In addition, we have tried various "verifiable inference" approaches over the past two years, a classic case of a hammer looking for a nail: from ZKML to OPML to game-theoretic schemes, and even EigenLayer has pivoted its restaking narrative toward verifiable AI.
But it's much the same story as in the restaking space itself: very few AVSs are willing to pay for the additional verifiable security.
Likewise, verifiable inference mostly verifies things that nobody actually needs verified, and the demand-side threat model is extremely vague: who exactly is it defending against?
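For readers unfamiliar with what these schemes actually promise, here is a toy sketch of optimistic verification in the OPML spirit: a prover commits to the result of a deterministic computation, and a challenger can re-execute it and compare. Everything here is illustrative; `toy_model` stands in for real model inference, and the names are my own, not any project's API.

```python
import hashlib


def toy_model(x: bytes) -> bytes:
    # Stand-in for deterministic model inference (hypothetical).
    return hashlib.sha256(b"model-v1" + x).digest()


def commit(inp: bytes, out: bytes) -> str:
    # Prover posts a commitment binding the input to the claimed output.
    return hashlib.sha256(inp + out).hexdigest()


def challenge(inp: bytes, claimed_out: bytes, commitment: str) -> bool:
    # Challenger re-executes the computation and checks both the claimed
    # output and the posted commitment. A mismatch is a fraud proof.
    honest_out = toy_model(inp)
    return honest_out == claimed_out and commit(inp, claimed_out) == commitment
```

The catch the article points at is the demand side: this machinery only matters if someone is actually incentivized to forge `claimed_out`, and in practice model-capability errors dwarf that adversarial case.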
AI output errors (a model-capability problem) far outweigh malicious manipulation of AI output (an adversarial problem). As the recent security incidents around OpenClaw and Moltbook showed, the real problems stem from:
- poorly designed policies;
- over-broad permissions;
- unclear boundaries;
- unexpected interactions with the tool set;
- ...
Cases of a "model being tampered with" or the "inference process being maliciously rewritten" are almost nonexistent.
I posted this chart last year; I wonder if any of you remember it.
The ideas Vitalik Buterin presented this time are clearly more mature than those from two years ago, thanks in part to the progress made across areas such as privacy, x402, ERC-8004, and prediction markets.
As you can see, the four quadrants he drew this time split evenly: one half belongs to AI Helps Crypto and the other to Crypto Helps AI, rather than the clear one-sided tilt of two years ago.
Top left and bottom left - Leveraging Ethereum's decentralization and transparency to solve trust and economic collaboration issues in AI.
1. Enabling trustless and private AI interaction (infrastructure + survival): using technologies such as ZK and FHE to ensure the privacy and verifiability of AI interactions (I'm not sure whether the verifiable inference I mentioned earlier counts).
2. Ethereum as an economic layer for AI (infrastructure + prosperity): letting AI agents make payments, hire other agents, post deposits, or build reputation through Ethereum, producing a decentralized AI architecture rather than one confined to a single giant platform.
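The deposit-and-reputation mechanics in that second quadrant can be sketched in a few lines. This is a purely illustrative in-memory model of my own: on Ethereum this state would live in smart contracts (with registries along the lines of ERC-8004), identities would be addresses rather than strings, and slashing rules would be far more nuanced.

```python
class AgentLedger:
    """In-memory sketch of deposit and reputation bookkeeping for AI agents.

    Illustrative only: a real system would keep this state on-chain.
    """

    def __init__(self):
        self.balances = {}    # agent -> free balance
        self.deposits = {}    # job_id -> (agent, deposit amount)
        self.reputation = {}  # agent -> count of completed jobs

    def fund(self, agent: str, amount: int) -> None:
        self.balances[agent] = self.balances.get(agent, 0) + amount

    def post_deposit(self, agent: str, job_id: str, amount: int) -> None:
        # Agent locks a deposit before taking on a job.
        if self.balances.get(agent, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[agent] -= amount
        self.deposits[job_id] = (agent, amount)

    def settle(self, job_id: str, success: bool) -> None:
        agent, amount = self.deposits.pop(job_id)
        if success:
            # Deposit is returned and reputation accrues.
            self.balances[agent] += amount
            self.reputation[agent] = self.reputation.get(agent, 0) + 1
        # On failure the deposit is slashed (simply retained here).
```

The point of the sketch is the incentive shape, not the implementation: deposits make misbehavior costly, and an accumulating completion count is the crudest possible reputation signal.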
Top right and bottom right - Leveraging the intelligent capabilities of AI to optimize user experience, efficiency, and governance within the crypto ecosystem:
3. The cypherpunk mountain-man vision with local LLMs (impact + survival): AI as a "shield" and interface for users. For example, local LLMs (Large Language Models) can automatically audit smart contracts and verify transactions, reducing reliance on centralized front-end pages and safeguarding individual digital sovereignty.
4. Making much better markets and governance a reality (impact + prosperity): AI participates deeply in prediction markets and DAO governance. As a highly efficient participant, AI can amplify human judgment through massive information processing, addressing long-standing market and governance problems such as limited attention, high decision-making costs, information overload, and voter apathy.
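The "local LLM as a shield" idea in quadrant 3 has a concrete, deterministic core: decoding what a transaction actually does before the user signs it. Here is a minimal sketch of that decoding step for the standard ERC-20 `transfer(address,uint256)` call; a local LLM front end would take this summary (plus contract context) and explain or flag it. The function name is my own invention, though the `a9059cbb` selector is the real one for `transfer(address,uint256)`.

```python
# First 4 bytes of keccak256("transfer(address,uint256)"), hex-encoded.
TRANSFER_SELECTOR = "a9059cbb"


def summarize_calldata(calldata: str) -> str:
    """Turn raw ERC-20 transfer calldata into a human-readable summary."""
    data = calldata.removeprefix("0x")
    if data[:8] != TRANSFER_SELECTOR:
        return "Unknown function selector; review manually."
    # Each ABI argument is a 32-byte (64 hex char) word; an address is the
    # last 20 bytes (40 hex chars) of its word.
    recipient = "0x" + data[8 + 24 : 8 + 64]
    amount = int(data[8 + 64 : 8 + 128], 16)
    return f"ERC-20 transfer of {amount} base units to {recipient}"
```

The shield framing is that this summary is produced locally, so the user no longer has to trust a centralized front end's description of what they are signing.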
Previously, we fervently advocated Crypto Helps AI while Vitalik Buterin stood on the other side. Now we have finally met in the middle, though it seems to have little to do with the various tokenizations or AI Layer 1s. Hopefully, looking back at this post two years from now will bring some new directions and surprises.

