Author: Sid @IOSG

The next era of P2E: the convergence of games, AI agents, and cryptocurrency

The Current State of Web3 Gaming

As newer, more attention-grabbing narratives emerge, Web3 gaming has taken a back seat in both primary- and public-market narratives. According to Delphi's 2024 gaming industry report, Web3 games have raised less than $1 billion in cumulative primary-market funding. This is not necessarily a bad thing: it suggests the bubble has subsided and that capital may now gravitate toward higher-quality games. The following figure is a clear indicator:

Throughout 2024, gaming ecosystems like Ronin saw a massive surge in users and almost rivaled Axie's 2021 glory days, thanks to the emergence of high-quality new games like Fableborn.

Gaming ecosystems (L1s, L2s, RaaS) are increasingly becoming the Steam of Web3: they control distribution within their ecosystems, which motivates developers to build there because it helps them acquire players. According to an earlier Delphi report, user-acquisition costs for Web3 games run about 70% higher than for Web2 games.

Player stickiness

Retaining players is just as important as attracting them, if not more so. While data on player retention in Web3 games is scarce, retention is closely tied to the concept of "flow," a term coined by Hungarian psychologist Mihaly Csikszentmihalyi.

The "flow state" is a psychological concept in which a player achieves a perfect balance between challenge and skill level. It's like "getting in the zone" - time seems to fly and you're completely immersed in the game.

Games that consistently create flow tend to have higher retention rates due to the following mechanisms:

#Progression Design

Early game: simple challenges that build confidence

Mid-game: gradually increasing difficulty

Late game: complex challenges and mastery

This fine-grained difficulty tuning keeps players at the right pace as their skills improve.
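To make the idea concrete, here is a minimal sketch of dynamic difficulty adjustment, assuming a hypothetical game loop that reports wins and losses after each match. The class name, the 40-60% "flow band," and the 10% adjustment steps are all illustrative assumptions, not taken from any real engine.

```python
# Minimal dynamic-difficulty sketch: keep a player's recent win rate inside
# a "flow band" (here, roughly 40-60%) by nudging the difficulty up or down.
# All names and thresholds are illustrative.
from collections import deque

class FlowTuner:
    def __init__(self, target=0.5, band=0.1, window=20):
        self.results = deque(maxlen=window)  # recent results: 1 = win, 0 = loss
        self.target = target
        self.band = band
        self.difficulty = 1.0  # arbitrary scale: higher = harder

    def record(self, won: bool) -> float:
        """Log one match result and return the updated difficulty."""
        self.results.append(1 if won else 0)
        rate = sum(self.results) / len(self.results)
        # Too many wins -> boredom: raise difficulty.
        # Too few wins -> anxiety: lower it.
        if rate > self.target + self.band:
            self.difficulty *= 1.1
        elif rate < self.target - self.band:
            self.difficulty *= 0.9
        return self.difficulty
```

A game would call `record()` after each match and feed the returned difficulty into enemy stats or matchmaking, keeping the challenge tracking the player's skill.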

#Engagement Loops

Short-term: immediate feedback (kills, points, rewards)

Mid-term: level completion, daily quests

Long-term: character progression, rankings

These nested loops maintain player interest over different time frames.

#Factors that break the flow state:

1. Poorly tuned difficulty/complexity: this can stem from weak game design, or from unbalanced matchmaking caused by an insufficient player base

2. Unclear goals: a game-design issue

3. Delayed feedback: game design and technical issues

4. Intrusive monetization: game design + product decisions

5. Technical issues/latency

Symbiosis of games and AI

AI agents can help players reach this flow state. Before looking at how, let's first understand which kinds of agents are suitable for games:

LLMs vs. Reinforcement Learning

Game AI is all about speed and scale. With LLM-powered agents, every decision requires a call to a giant language model; it's like routing every step through a middleman. The middleman is smart, but waiting for his response makes everything slow and painful. Now imagine doing this for hundreds of characters in a game: not only slow, but expensive. This is the main reason we haven't yet seen LLM agents at scale in games. The largest experiment so far is a 1,000-agent civilization built in Minecraft. At 100,000 concurrent agents across different maps, the cost would be enormous, and the latency added by each new agent would interrupt players mid-session. That breaks the flow state.

Reinforcement Learning (RL) takes a different approach. Think of it like training a dancer: instead of feeding them instructions through an earpiece mid-performance, you spend the time upfront teaching the AI how to "dance," that is, how to respond to different situations in the game. Once trained, the AI is naturally fluid, making decisions in milliseconds without consulting anything upstream. You can have hundreds of these trained agents running in your game, each making independent decisions based on what it sees and hears. They are not as articulate or as flexible as LLM agents, but they act quickly and cheaply.
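The train-upfront, fast-inference tradeoff described above can be sketched with a toy tabular Q-learning agent. The one-dimensional "chase the goal" environment, rewards, and hyperparameters below are entirely made up for illustration; the point is that once training finishes, each in-game decision is a dictionary lookup rather than a round-trip to a large model.

```python
# Toy tabular Q-learning: expensive training once, millisecond decisions forever.
# The environment is a 1-D track of 5 cells; the agent earns a reward for
# reaching the rightmost cell. All numbers here are illustrative.
import random

ACTIONS = [-1, 1]  # move left / move right
GOAL = 4

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else -0.01  # small step cost, big goal reward
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Upfront training: slow, but done once, offline."""
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration during training only.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, a2)] for a2 in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

def act(q, state):
    # Inference is just an argmax over a lookup table: no model call, no latency.
    return max(ACTIONS, key=lambda a: q[(state, a)])
```

Real game agents use neural policies rather than lookup tables, but the shape of the tradeoff is the same: the cost is paid during training, so runtime decisions never block the game loop.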

The real magic of RL shows when you need these agents to work together. Where LLM agents require lengthy "conversations" to coordinate, RL agents can develop an implicit understanding during training, like a football team that has practiced together for months. They learn to anticipate each other's moves and coordinate naturally. It isn't perfect, and they sometimes make mistakes an LLM agent wouldn't, but they can operate at a scale LLMs can't match. For games, this tradeoff usually makes sense.

Agents and NPCs

Agents acting as NPCs address the first core problem facing many games today: player liquidity. P2E was the first experiment to use cryptoeconomics to solve the player liquidity problem, and we all know how that turned out.

Pre-trained agents serve two purposes:

  • Populating the world in multiplayer games
  • Maintaining the right difficulty level for a group of players, keeping them in a flow state

While this seems obvious, it is hard to build. Indie and early-stage Web3 games lack the budget to hire AI teams, which creates an opportunity for any agent-framework provider with RL at its core.

Games can work with these providers during trials and testing to lay the groundwork for player liquidity at launch.
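As a rough sketch of the "populating the world" idea, the hypothetical helper below backfills a matchmaking lobby with pre-trained agents pinned to the humans' average rating, so the match stays inside everyone's flow band. The function names, the `make_agent` factory, and the rating scale are all assumptions for illustration.

```python
# Hypothetical matchmaking backfill: when too few humans queue, fill the
# remaining slots with trained agents at the lobby's average skill rating.
def fill_lobby(humans, lobby_size, make_agent):
    """humans: list of (name, rating) tuples for the humans who queued.
    make_agent(rating) -> {"name": ..., "rating": ...} spawns a trained agent."""
    players = list(humans)
    # Pin backfilled agents to the humans' average rating so the match
    # stays balanced; fall back to a default rating for an empty lobby.
    avg = sum(r for _, r in players) / len(players) if players else 1000
    while len(players) < lobby_size:
        agent = make_agent(avg)
        players.append((agent["name"], agent["rating"]))
    return players
```

A real system would also vary the agents' ratings slightly and retire them as human concurrency recovers, but the core idea is the same: agents absorb the liquidity gap invisibly.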

That way, game developers can focus on game mechanics and on making their games more fun. As much as we like integrating tokens into games, games are games, and games should be fun.

Agent Players

League of Legends, one of the most-played games in the world, has a black market where players pay to have their accounts trained up to the best attributes, even though the game prohibits it.

This lays the groundwork for representing game characters and their attributes as NFTs, with a marketplace built around them.

What if a new subset of "players" emerged who act as coaches for these AI agents? Players could coach agents and monetize them in different ways, such as winning tournaments or serving as "sparring partners" for esports players and passionate gamers.

The return of the metaverse?

Early versions of the metaverse may have missed the mark because they simply created an alternative reality, not the ideal one. AI agents can help metaverse residents build that ideal world: an escape.

This, in my opinion, is where LLM-based agents come in handy. Someone could populate their world with pre-trained agents that are domain experts and can hold conversations about the things they love. If I create an agent trained on 1,000 hours of Elon Musk interviews, and a user wants an instance of it in their world, I can get rewarded for that. This could create a new economy.

With metaverse games like Nifty Island, this can become a reality.

In Today: The Game, the team has created an LLM-based AI agent called "Limbo" (a speculative token has been released), with the vision of multiple agents interacting autonomously in this world while we watch a 24/7 livestream.

How does Crypto fit in?

Crypto can help solve these problems in different ways:

  • Players contribute their own game data to improve the models, getting a better experience and rewards in return
  • Coordinating multiple stakeholders, including character designers and trainers, to create the best in-game agents
  • Creating a marketplace for owning and monetizing in-game agents

One team is doing all of this and more: ARC Agents.

Their ARC SDK lets game developers create human-like AI agents from game parameters. With a simple integration, it addresses player liquidity, turns raw game data into insights, and helps keep players in flow by adjusting the difficulty level. Under the hood, it uses reinforcement learning.

They initially built a game called AI Arena, in which you train your AI character to fight. This gave them a baseline learning model that became the foundation of the ARC SDK, forming a DePIN-like flywheel:

All of this is coordinated by their ecosystem token, $NRN. The Chain of Thought team explains it well in their article on ARC Agents:

Games like Bounty are taking an agent-first approach, building agents from scratch in a wild-west world.

Conclusion

The convergence of AI agents, game design, and crypto is not just another tech trend; it has the potential to solve many of the problems that plague indie games. The beauty of AI agents in gaming is that they enhance what makes games fun: good competition, rich interactions, and challenges that keep people coming back. As frameworks like ARC Agents mature and more games integrate AI agents, we are likely to see entirely new gaming experiences emerge. Imagine a world that comes alive not because of the other players in it, but because its agents can learn and evolve with the community.

We are moving from the "play-to-earn" era toward something much more exciting: games that are both genuinely fun and infinitely scalable. The next few years will be exciting for the developers, players, and investors watching this space. The games of 2025 and beyond will not only be more technologically advanced; they will be fundamentally more engaging and more alive than anything we've seen before.