Understanding Jensen Huang's Physical AI: Why Crypto's opportunities are also hidden in the "nooks and crannies"

At the Davos Forum, NVIDIA CEO Jensen Huang announced a pivotal shift in AI computing from training to inference and "Physical AI," marking the end of the era defined by simply stacking GPUs for model training. He emphasized that future AI competition will center on real-world applications.

Physical AI, described as the second half of Generative AI, aims to bridge the gap between AI intelligence and physical action. It must overcome three core challenges:

  • Spatial Intelligence: AI must understand and navigate the 3D physical world, requiring massive amounts of real-time environmental data from every corner of indoor and outdoor spaces.
  • Virtual Training Grounds: Robots need extensive trial-and-error training in simulated environments (like NVIDIA's Omniverse) before operating in the real world, demanding immense physics simulation and rendering compute.
  • Tactile Data & Electronic Skin: For AI to interact with the physical world, it needs sensors to collect data on touch, pressure, and texture—a largely untapped data asset.

This shift creates significant ecosystem opportunities for the Crypto sector, particularly in filling data and infrastructure gaps:

  • DePIN Networks can use token incentives to crowdsource data collection from "nooks and crannies" that large corporations cannot easily access.
  • Distributed Computing networks can aggregate idle consumer hardware to provide the necessary edge computing and rendering power for robot training and operation.
  • Data Ownership Models (like DeData) can enable the private and incentivized sharing of sensitive data, such as tactile information, by granting contributors ownership and dividends.

In essence, Physical AI represents the next frontier for AI, and Crypto projects like DePIN, DeAI, and DeData are well-positioned to build the foundational infrastructure and economic models it requires.

What exactly did Jensen Huang say at the Davos Forum?

On the surface he was selling robots; in reality he was staging a bold "self-revolution". With a single statement he closed out the old era of "stacking graphics cards", and, perhaps without intending to, handed the Crypto industry a rare entry ticket.

Yesterday at the Davos Forum, Huang pointed out that the AI application layer is experiencing explosive growth, and the demand for computing power will shift from the "training side" to the "inference side" and the "Physical AI side".

That's really interesting.

As the biggest winner of the "computing power arms race" of the AI 1.0 era, NVIDIA proactively announcing a shift toward "inference" and "Physical AI" sends a very straightforward signal: the era of "brute-forcing miracles" by stacking GPUs to train large models is over, and future AI competition will revolve around application scenarios, where application is king.

In other words, Physical AI is the second half of Generative AI.

Even though LLMs have read virtually all the data humanity has accumulated on the internet over the past few decades, they still don't know how to open a bottle cap the way a human does. Physical AI aims to solve the problem of the "unity of knowledge and action" that lies beyond pure intelligence.

Physical AI also cannot rely on the long reaction time of a remote cloud server. The logic is simple: if ChatGPT is a second slower at generating text, you merely feel a lag; if a bipedal robot reacts a second late because of network latency, it may fall down the stairs.
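
A back-of-the-envelope calculation makes the point concrete. The walking speed, control-loop rate, and latency figures below are illustrative assumptions, not numbers from Huang's talk:

```python
# Illustrative latency budget: how far does a robot travel "blind"
# while waiting for a remote decision? (All numbers are assumptions.)

WALKING_SPEED_M_S = 1.4        # assumed human-like walking speed
CONTROL_RATE_HZ = 100          # assumed on-board control loop frequency

def blind_distance(latency_s: float) -> float:
    """Distance covered before a delayed command can take effect."""
    return WALKING_SPEED_M_S * latency_s

for label, latency in [("on-device inference", 0.01),
                       ("nearby edge node", 0.05),
                       ("remote cloud round trip", 0.30)]:
    missed_ticks = latency * CONTROL_RATE_HZ
    print(f"{label:>24}: {blind_distance(latency) * 100:5.1f} cm travelled blind, "
          f"~{missed_ticks:.0f} missed control ticks")
```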

However, while Physical AI appears to be a continuation of generative AI, it actually faces three completely different new challenges:

1) Spatial intelligence: Enabling AI to understand the three-dimensional world.

Professor Fei-Fei Li once proposed that spatial intelligence is the next North Star in the evolution of AI. For a robot to move, it must first "understand" its environment. This is not just about recognizing "this is a chair," but about understanding "the position and structure of this chair in three-dimensional space, and how much force I should use to move it."

This requires massive amounts of real-time 3D environmental data covering every corner of indoor and outdoor spaces;
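
To make "understanding the chair" slightly more concrete, here is a toy sketch of the kind of object model a robot might build from that 3D data: a pose in the robot's own frame plus estimated physical properties, from which it can work out how hard to push. The mass and friction values are made-up assumptions; a real system would estimate them from perception.

```python
from dataclasses import dataclass
import math

G = 9.81  # gravitational acceleration, m/s^2

@dataclass
class ObjectModel:
    """A minimal 'spatial understanding' of one object in the scene."""
    name: str
    position: tuple[float, float, float]   # x, y, z in metres, robot frame
    yaw_rad: float                         # orientation around the vertical axis
    mass_kg: float                         # assumed value; normally estimated from perception
    friction_coeff: float                  # assumed surface friction

    def push_force_needed(self) -> float:
        """Force (N) to overcome static friction and start sliding the object."""
        return self.friction_coeff * self.mass_kg * G

    def distance_from_robot(self) -> float:
        x, y, z = self.position
        return math.sqrt(x * x + y * y + z * z)

chair = ObjectModel("chair", position=(1.2, 0.4, 0.0),
                    yaw_rad=math.radians(30), mass_kg=6.0, friction_coeff=0.4)
print(f"{chair.name}: {chair.distance_from_robot():.2f} m away, "
      f"needs about {chair.push_force_needed():.1f} N to slide")
```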

2) Virtual training ground: Letting AI learn through trial and error in a simulated world.

The Omniverse Jensen Huang mentioned is essentially this kind of "virtual training ground." Before entering the real physical world, a robot needs to "fall down 10,000 times" in a virtual environment to learn to walk, a process known as Sim-to-Real. If robots did their trial and error directly in reality, the hardware cost would be astronomical.

This process places exponential demands on the throughput of physics-engine simulation and rendering compute;
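
Here is a minimal sketch of that "fall down 10,000 times" loop, using a toy one-dimensional balance simulator and random-search policy improvement; it is not NVIDIA's Omniverse or Isaac pipeline, just an illustration of trial and error in simulation:

```python
import random

def simulate(policy_gain: float, steps: int = 200) -> int:
    """Toy rollout: keep a 1-D 'body' upright; return steps survived before falling."""
    angle, angular_vel, dt = 0.05, 0.0, 0.02
    for t in range(steps):
        torque = -policy_gain * angle - 0.5 * angular_vel   # simple P-D style policy
        angular_acc = angle + torque                        # gravity tips it over, torque resists
        angular_vel += angular_acc * dt
        angle += angular_vel * dt + random.uniform(-0.005, 0.005)  # disturbance noise
        if abs(angle) > 0.5:                                # "fell over"
            return t
    return steps

# Trial and error in simulation: try many policies, keep the best found so far.
best_gain, best_score = 0.0, -1
for trial in range(10_000):                                 # "fall down 10,000 times"
    candidate = best_gain + random.uniform(-0.5, 0.5)
    score = simulate(candidate)
    if score > best_score:
        best_gain, best_score = candidate, score

print(f"best gain {best_gain:.2f} survives {best_score} of 200 simulated steps")
```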

3) Electronic skin: "Tactile data" is a gold mine of data waiting to be tapped.

For Physical AI to have a sense of "feel," it needs electronic skin that can sense temperature, pressure, and texture. This "tactile data" is a completely new asset that has never been collected at scale before, and gathering it will require sensors deployed en masse. At CES, one exhibitor showcased a "mass-producible skin" that packs 1,956 sensors into a single robotic hand, dense enough for a robot to pull off the impressive feat of peeling an egg.

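To see why tactile data is an asset that can be attributed and traded, here is a sketch of what one time-stamped frame from such a sensor array might look like, with a content hash a data marketplace could later use to credit the contributor. The field names, units, and the 1,956-taxel grid are illustrative assumptions:

```python
import hashlib
import json
import random
import time

NUM_TAXELS = 1956  # sensor count from the CES demo mentioned above

def capture_tactile_frame() -> dict:
    """One time-stamped reading from a tactile sensor array (simulated here)."""
    readings = [round(random.uniform(0.0, 5.0), 3) for _ in range(NUM_TAXELS)]  # kPa, simulated
    frame = {
        "device_id": "hand-rt-001",          # hypothetical device identifier
        "timestamp": time.time(),
        "pressure_kpa": readings,
        "temperature_c": round(random.uniform(20.0, 36.0), 1),
    }
    # A content hash lets a contributor later prove "this exact frame came from me".
    frame["content_hash"] = hashlib.sha256(
        json.dumps(frame, sort_keys=True).encode()
    ).hexdigest()
    return frame

frame = capture_tactile_frame()
print(frame["content_hash"][:16], len(frame["pressure_kpa"]), "taxels")
```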

By this point you can probably see that the rise of the Physical AI narrative opens up plenty of room for wearable devices and humanoid robots, products that were widely dismissed as "oversized toys" only a few years ago.

What I really want to say is that in this new Physical AI landscape, the Crypto sector also has excellent opportunities to fill gaps in the ecosystem. Let me give a few examples:

1. AI giants can send street-view cars to scan every main road in the world, but they cannot reach the nooks and crannies of side streets, neighborhoods, and basements. A DePIN network, however, can use token incentives to motivate users around the world to fill in exactly these blind spots with their own devices (a sketch of one possible incentive rule follows after this list).

2. As mentioned earlier, robots cannot depend on distant cloud compute, yet in the short term they need edge computing and distributed rendering at scale, especially for processing large volumes of simulation data. Distributed computing networks can aggregate idle consumer-grade hardware and schedule it to where it is needed, putting that capacity to good use (see the second sketch below).

3. "Tactile data," besides large-scale sensor applications, is obviously extremely private as the name suggests. How can we get the public to share this privacy-related data with AI giants? A feasible path is to allow those who contribute data to obtain data ownership and dividends.

In summary:

Physical AI is what Huang calls the second half of the web2 AI track. And isn't the same true for web3 AI + Crypto tracks like DePIN, DeAI, and DeData? What do you think?


Author: 链上观

