Everyone's token-maxxing: an arms race no one dares to stop.

Summary

The author observed that Silicon Valley's AI development pace has shifted from a monthly to a weekly rhythm, with traditional institutions like YC becoming lagging indicators. Tech giants are locked in a token-maxxing arms race, setting code security aside and granting token budgets that approach engineer salaries, yet the productivity gains have not translated into revenue growth. xAI lost 90% of its early team to the intense pressure, and Musk is moving people over from SpaceX and Tesla. Engineers and researchers alike face anxiety about being replaced by AI, while new roles like "AI Builder" are emerging but proving hard to recruit for. Nvidia sits at the center of power, with GPU shortages expected to persist until 2028. Valuation frameworks are collapsing, and SaaS is being repriced. Society is seeing anti-AI protests and threats to CEO safety. Amid the chaos, AI offers hope in biomedicine, potentially turning cancer into a manageable chronic disease.

Original content by LatePost team

April 26, 2026, 18:06, Beijing


We took a trip to Silicon Valley and found that even the wavemakers were almost drowning from the waves.

LatePost columnist | Meng Xing, Partner at Five Sources Capital

On the morning of March 24, 2026, I was sitting in the audience at YC W26 batch Demo Day. When the fifth company came on stage to give its presentation, I decided to stop taking notes.

Not because the presentations were unimportant, but because I realized that whatever I wrote down might be outdated by next month.

The more than 100 companies in this cohort cluster tightly: about 80% are building vertical agents, such as helping lawyers organize documents, helping customer service route tickets, and helping HR screen resumes.

If I had seen these projects last October, I would most likely have thought, "They're pretty innovative." But the problem is, the world has changed in the last five months.

Claude Code has evolved from a developer-oriented tool into an interface almost anyone can use directly. After Opus 4.6 was released, the barrier to entry for vibe coding dropped to almost nothing.

Before those vertical agents can build any business moat, an average engineer today, or even I, could rebuild them in a weekend. They have lost their investment value.

Y Combinator's program cycle is three months. This batch entered in December, and counting the initial screening, these were essentially the "good companies" of five months ago. At the current pace of AI iteration, five months is enough time for several paradigm shifts.

When I started my first business in 2012 and received a Fly Out from Y Combinator (an invitation to an in-person interview), Y Combinator was almost unrivaled among accelerators, and the companies it selected often represented the "next direction." But the competitive landscape has changed, and in recent years Y Combinator seems to have flipped from a leading indicator into a lagging one.

Y Combinator's batch system, from application, screening, entry, refinement, to pitching, has been operating successfully for over a decade in the mobile internet era. However, this pace was designed for a slower world.

In the year and a half since I returned to the venture capital industry, I've been visiting Silicon Valley about once a quarter, the last time being last October. Every time I came before, I felt that things were changing very quickly, but this "speed" was mostly perceived on a monthly basis.

This time, it has to be measured in weeks.

One evening at dinner, a friend who does post-training casually remarked:

"I've noticed that Silicon Valley itself is starting to fall behind."


Everyone's token-maxxing: an arms race that no one dares to stop.

Six months ago, if someone had told me that Meta's tens of thousands of engineers were all writing code using competitors' products, I would have thought they were joking.

But it's true. The entire Meta team uses Claude Code. This isn't a startup, not some experimental team; it's a trillion-dollar company.

Code security is disregarded, token budgets have exploded, leaderboards are booming, and the entire Silicon Valley is pouring money into AI without regard to cost. But what happens after all that money is poured in?

Let's start with code security. Six months ago, this would have been unthinkable, because code is a company's core asset. How could you allow an outside company's API to access it? Meta initially thought the same way; they even developed something internally called myclaw to try and solve this problem. A friend at Meta told me they created a coding product, but it was "not user-friendly, nobody used it." After that, the company had to relax its restrictions: as long as it didn't involve customer data, anyone who wanted to use Claude Code could use it.

Then each department started holding internal meetings on "how to become an AI native organization," conducting training, and carrying out assessments. Code security and usage security, which used to be taken for granted, were all relegated to the back burner; efficiency was the priority.

For security reasons, Google prohibits most employees from using competitor tools such as Claude Code or Codex, but DeepMind is an exception. Several teams responsible for the Gemini model and internal applications use Claude Code.

Google itself has also made efforts: they launched the internal coding tool Antigravity, and in February of this year, they claimed that about 50% of the company's new code has been written by AI.

Even so, DeepMind is still using Claude Code. A key reason DeepMind dares to do this is that Anthropic has provided them with a private deployment; after all, Anthropic's inference and training primarily run on Google Cloud TPUs, and the two companies have this foundation of trust. But Meta and other tech giants don't have this relationship; they've truly disregarded code security. Everyone is betting on the same thing: maximizing speed first.

Code security is just the first flag to fall; the second is token budgeting.

Among the AI-native startups I spoke with in Palo Alto, an engineer's annual token budget was around $200,000. The figure itself isn't what's unusual; what's unusual is that it means the AI costs incurred by a top engineer are approaching their salary. It looks like companies are using AI to cut costs through layoffs, but in reality the total cost may not have decreased at all; they've simply swapped human costs for token costs.

Meta took the most extreme approach here. They created an internal token-consumption leaderboard: whoever uses the most tokens tops the list, and those at the bottom may be laid off. Meta employees have even started competing for an unofficial title: "token legend."

Yet at the same time, Meta has gone through two rounds of layoffs this year, totaling tens of thousands of employees. While everyone was racking up token consumption with Claude Code, large-scale layoffs were happening in parallel.

These two things are not contradictory; they are two sides of the same coin.

I visited a Series C company, and the CTO showed me his Slack. It was full of running agents: a dozen or so Cursor agents working in parallel in the background, with a Claude Code window doing the scheduling. The most prevalent anxiety among programmers right now: if I don't know what my dozen agents are doing before I go to sleep, I can't relax.

But has productivity really increased by that much? Since the end of last year, many CTOs of top inference engine and database companies have been excitedly telling me about "100 times more engineers" and "10 times more efficiency." What used to take 60 people a year can now be done by 2 people plus Claude Code in a week.

At first I was excited along with them, but then I calmed down and started asking: okay, if efficiency improves 100-fold, does the company's revenue increase 100-fold? Does the product line expand 100-fold? A "100-fold" improvement shouldn't just end in mass layoffs, right?

I didn't get a direct answer. The truth is, a 100-fold increase in efficiency translates into only a 50% to 100% increase in the company's revenue.

Where does the difference go? Nobody can say for sure yet.

"After using so many tokens, the company should have undergone a genetic mutation and become a completely different kind of company. But what exactly it will become, I don't know."

A founder with a background in B2B sales told me that his team of 16 people, including two sales representatives, achieved a $30 million ARR from scratch in 12 months, all thanks to AI coding. You do see cases like this occasionally. But most of the time, what I see are startups building more things, but these things don't have product-market fit (PMF).

In Silicon Valley right now, the fashion is to use vibe coding to try 100 different approaches and see which one works, rather than just 10. But who will actually catch the next wave? It's hard to say.

One of the most striking counterexamples came from inside Anthropic. I asked a friend there, "What's the most painful scenario for you when using agents?" He said: oncall (incident response).

A typical scenario for oncall tasks is: if Claude's API suddenly becomes slow, a model inference node crashes, or a user reports an abnormal prompt output, the oncall engineer needs to quickly locate the root cause of the problem, determine whether it is a code bug, a computing power allocation problem, or an anomaly in the model itself, and then decide how to fix it.

Anthropic is the world's best coding agent company, and this scenario is extremely close to their core competency, yet their internal oncall agent still doesn't work well.

This is the reality in April 2026: the steam engine has been invented, but sometimes it's not even as fast as a horse-drawn carriage. The key is that everyone knows the steam engine will eventually run faster, so everyone's throwing money at it: code security is ignored, token budgets are overflowing, and leaderboards are booming. As for when the steam engine will actually outrun a horse-drawn carriage? Nobody knows, but nobody dares to stop and wait for that day.

Because the cost of stopping might be greater than the cost of burning tokens on the wrong thing.

Moreover, token consumption is unlikely to grow linearly. This reminds me of my previous experience working on autonomous driving: in 2021, we achieved five consecutive hours of unattended autonomous driving in Shanghai for the first time. At the time, it felt like a major breakthrough. Before that, the test fleet might have gradually increased from 10 to 15 to 20 vehicles; but after that inflection point, it quickly reached 100 to 1000 vehicles. Today's coding agents are in a similar phase.


In Shanghai in 2021, Didi's autonomous driving system achieved five consecutive hours of unattended driving, marking a milestone for autonomous driving in China. The picture shows Meng Xing, then COO of Didi's autonomous driving company, in conversation with Sebastian Thrun, the "father of Google's self-driving car," in 2021.

METR is a California-based research institution specializing in evaluating AI coding capabilities. Last year they proposed a metric: how long a task (measured in human-expert completion time) an AI agent can complete with a 50% success rate. When the metric was first released in March 2025, Claude 3.7 Sonnet's horizon was 50 minutes; by the end of 2025, Claude Opus 4.6 had reached 14.5 hours. Over the past two years, the doubling period of this metric has compressed from 7 months to 4. Once agent reliability improves further, token consumption won't just grow 50% a year; it will jump by an order of magnitude overnight.
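To make that compounding concrete, here is a minimal sketch in Python. The 14.5-hour starting point and the 4-month doubling period are the figures above; assuming the trend simply continues is my own extrapolation, not a claim from METR.

```python
# Hypothetical extrapolation of the METR-style "50% task horizon".
# Assumes the horizon keeps doubling every 4 months (an assumption,
# not a guarantee); the 14.5-hour starting point is from the article.

def horizon_hours(start_hours: float, months: float,
                  doubling_months: float = 4.0) -> float:
    """Task horizon after `months`, if it doubles every `doubling_months`."""
    return start_hours * 2 ** (months / doubling_months)

for m in (4, 8, 12):
    print(f"+{m:2d} months: ~{horizon_hours(14.5, m):.0f} hours")
# +4 months: ~29 hours; +8 months: ~58 hours; +12 months: ~116 hours
```

Under that assumption, within a year an agent would handle tasks that take a human expert roughly three 40-hour weeks, which is why consumption could jump by an order of magnitude rather than creep up linearly.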

There's a prediction that's been widely shared among friends: by the end of this year, many companies (including major tech companies) will actually only need 20% of their staff.

After the xAI team collapsed, the rocket builders started making models.

At a steakhouse in Mountain View, around 9 p.m., a friend who had worked with Musk for a long time sat down across from me. We chatted for over three hours, and looking back, I realized he didn't seem to have said a single good word about Musk.

One detail: I asked him, "You've worked at xAI for three years, what's your daily routine like?" He said he'd basically lived at the company for the past three years, so his home wasn't very furnished, and he hadn't even bought a bed. He slept in one of those sleeping pods at the company, kind of like a youth hostel. I said, "Now that you have a huge amount of stock options and you've already left the company, you should at least buy a bed." He smiled.

xAI is known for its intense workload in Silicon Valley, but now about 90% of the early team has left. They have a group chat for former employees, and they're constantly adding new people.

The trigger was Tony Wu's departure, which then triggered a chain reaction. In the words of an insider, "Other companies might need six months to prepare for the departure of their senior management team, but xAI only needed one month." Some people sensed Musk's dissatisfaction as early as last October, but they didn't expect the entire purge to happen so quickly.

Now Musk is bringing in people from SpaceX and Tesla to take over xAI; "the rocket builders are starting to build models."

Musk's dissatisfaction stems from the fact that despite enormous investments of money and computing power, Grok has never reached the frontier. But why? I ask this of everyone I meet who came out of xAI. The answer is simpler than I imagined. A friend put it bluntly: the team is extremely competitive and works incredibly hard, but a manufacturing-style management approach may not suit a large-model company.

Having worked on autonomous driving for eight years, I have some thoughts on this matter. Musk's past work with SpaceX and Tesla was essentially systems engineering: the chain is very long, involving software, hardware, and the supply chain, each of which has room for innovation, but ultimately it is an end-to-end engineering problem.

His strength lies in identifying the key leverage points in such long chains and then attacking them by compressing timelines to the extreme. Clustered rocket engines and reusable landings are products of this thinking.

But at xAI, what he's doing doesn't look like systems engineering. He is doing three things: building the world's largest GPU cluster (people now joke that xAI, originally a neo lab, has become more like a neo cloud, supplying computing power to Cursor), setting pulse-like deadlines for the team, and personally making the final call on individual product features. That is seizing a few key points, not making a holistic plan.

Anyone who has worked on autonomous driving knows that in the later stages, the core conflict is over leadership among the software, infrastructure, and hardware teams. Each area needs a CTO-level decision-maker, but no single person is an expert in all three. A good approach is for the founder, without being an expert in every area, to know how to balance resources and set priorities at each stage: prioritizing software in one phase, tilting toward infrastructure in the next. That is holistic planning.

The problem with xAI is the lack of a holistic plan; it's all about sprinting. If the pressure weren't so intense, intelligent people could self-correct; given time, different areas would find their own rhythm of collaboration. But Musk's extremely high-pressure management, coupled with insufficient holistic planning, caused everything to fall apart under pressure. Each person in charge was focused on protecting their own priorities; no one was coordinating the overall picture.

One often overlooked reason for the success of SpaceX and Tesla is that Musk has essentially never encountered competitors of equal stature in those industries; he's been fighting against himself. But AI is different. AI faces fierce competition to the point where even OpenAI could be undermined by Anthropic.

A friend who works at a top lab said last year that there were two things he hadn't expected: first, that competition would be so fierce; second, that there would be so few opportunities for application innovation in the AI era, because they were all being absorbed by the models.


Anthropic's rise is the most dramatic reversal in the AI industry over the past year. It has also completely shifted the battleground: a year ago, everyone was fighting over consumer user numbers and video generation; now the fight that decides the outcome is B2B and coding.

Of course, xAI's story is also a story of what happens when too much money arrives too fast.

I believe those who left xAI won't regret having joined. xAI is arguably Silicon Valley's fastest wealth-creation story: from an initial funding round of several billion dollars to a $250 billion behemoth after merging with SpaceX, in just one year. Almost every one of xAI's 11 co-founders became a billionaire, and core engineers made tens of millions to hundreds of millions of dollars. There is simply too much money in Silicon Valley. If they start another company today, they can afford to pursue their interests rather than chase quick profits.

Anxious engineers, even more anxious researchers

Talking to engineers these days reveals a strange unspoken understanding: everyone admits they don't write much code anymore, but they all pretend it's no big deal because they'll become AI-armed engineers who will eliminate those who aren't.

Today, 80% of a software engineer's core skills have been replaced by models. Engineers are still needed because models occasionally make mistakes and someone has to monitor them. But the act of "monitoring" itself may soon be unnecessary.

To put it more radically: today's so-called "AI-native organizations" sound very sexy, with every department streamlining workflows, breaking them into AI-automatable pieces, and packaging them as skills. But essentially it is manual self-distillation: you turn your skills into machine skills, the company acquires those skills, and the AI transformation is complete. Whether to lay people off afterward is a moral question. Meta is doing exactly that today.

Although everyone is embroiled in token-maxxing today, you can still feel a pervasive sense of anxiety at the grassroots level throughout Silicon Valley.

What surprised me even more was that this anxiety was spreading to the researcher community.

Researchers are the most elite talents. The term here doesn't mean "researcher" in general, but the group at the large-model companies (OpenAI, Anthropic, DeepMind, etc.) responsible for model training and algorithm innovation. The difference from engineers: engineers "build things," writing code, deploying, and optimizing performance; researchers sit further upstream, "figuring out what to build": proposing new training methods, designing model architectures, and running experiments to verify hypotheses.

And now even the work of researchers is being automated. This is what DeepMind is doing: using models to train models, the "AI self-evolution" that has become a hot topic in Silicon Valley this year. This year engineers are being phased out; by year's end, researchers will begin to be replaced too.

This is not a new concept. Andrej Karpathy's auto research started the trend, and today various AI scientist tools and harness frameworks are moving in this direction. However, most current closed loops only go as far as "publishing papers"—AI helps you run experiments and write papers, but ultimately, humans still make the judgments.

Companies like OpenAI, Anthropic, and Google are taking a more radical approach: they aim to close the loop directly to model upgrades themselves, not just minor improvements, but allowing AI to find its own next paradigm-level breakthrough. If this can be achieved, it will truly replace researchers. Google DeepMind has been working on this internally for over a year, letting the model decide what experiments to run next, evaluate which path is more promising, and then follow that path—this is how the model trains its next generation.

Moreover, the incentive to replace researchers is even stronger, for a harsh reason: they are expensive. There may be only a few thousand researchers worldwide, and their annual compensation easily reaches millions, tens of millions, even hundreds of millions of dollars.

“The future scenario might be that 10 people do the work that 100 used to do, get paid like 20, and 90 people lose their jobs.”

Moreover, the real scale of the layoffs is much larger than the reported figures suggest. For many companies, the first cut doesn't show up on their own books but at their outsourced service providers. That means countries like India and the Philippines, which have long provided customer service, data labeling, and financial back-office support to Europe and the US, may be hit first. The "service-industry ladder" that some developing countries rely on to climb the economic chain may be being kicked away by AI.

All of Silicon Valley is watching Meta closely. If its experiment succeeds, with revenue holding steady and efficiency genuinely improving, other major companies will quickly follow, and layoffs will go from isolated cases to an industry norm. And layoffs have a cruel self-accelerating mechanism: at first, companies hesitate to cut for fear of damaging morale; once cutting becomes the norm, it gets faster and faster, and the pain is felt less and less.

However, while old positions are being cut, new positions are also emerging.

Many startups are starting to hire a new role called "AI builder"—a combination of product manager, front-end engineer, and back-end engineer. There are also hybrid roles combining data scientist and machine learning engineer, as well as content operators who integrate writing, marketing, and operations.

Silicon Valley companies have huge demand for these new roles, but the core problem is that nobody knows how to recruit for them. You can't screen with resumes, because the role didn't exist before and a candidate's skills may be hidden in their side projects; you can't test with on-the-spot coding either, because the core competency is a combination of "aesthetic sense + ability to use AI." So some startups now do this: automatically generate a simulated environment from the employer's needs and have candidates complete tasks on the spot using AI tools. It's a bit like the old coding test, but it tests something entirely new.

When AI can do everything, the value of humans is shifting from "what they can do" to judging "what is worth doing and what should not be done".

Two valuations in one funding round: Nvidia wants its chips on every "table."

We've discussed so many people who have been replaced—engineers, researchers, and finance professionals. But there's one role that hasn't been replaced; in fact, it's become increasingly like the behind-the-scenes boss in this reshuffling.

This world looks like distributed innovation, but at its core it is extremely centralized.

This center is Nvidia.

I had assumed GPU scarcity had eased over the past year. There was indeed a brief respite: around mid-2025, some Nvidia-backed neo clouds ("new cloud providers" that rose with the AI wave, specializing in GPU computing power) struggled to raise money, saw sluggish growth, or even sold themselves. But this time I found the scarcity is back, and it's even more absurd than before.

A concrete example: if you can provide a stable API service today, with 99th-percentile reliability, you can sell it at two to three times the price of the official API.


Following the surge in demand for Anthropic's models, API outages have become more frequent, causing problems for the many agent products built on top of Claude.

The router-service business used to run on the principle of "I'm cheaper than the official provider, so I get the traffic." Now the logic is completely reversed: stability itself has become the scarce resource. A batch of startups has made good money on this, and mini versions of CoreWeave and Nebius are springing up like mushrooms after rain across Silicon Valley.

Moreover, this computing bottleneck isn't just a matter of GPU allocation. Elad Gil recently wrote an assessment I strongly agree with: the capacity-expansion cycle for upstream memory manufacturers (Hynix, Samsung, Micron) will take at least another two years. That means that before 2028, no AI company can gain a meaningful edge simply by adding computing power. Compute constraints are objectively reinforcing the oligopoly structure of the large-model market; it's not that anyone isn't working hard enough, it's that manufacturing cycles in the physical world are inherently slow.

The power structure behind this is clear: whoever holds the cards has the power, and Nvidia decides who holds the cards. The publicly listed CoreWeave, Lambda, and Nebius are all backed by Nvidia.

Nvidia's strategy is more sophisticated than I had understood. An investor in Reflection told me that when the neo lab first started raising money, it was focused on coding. Then the founder met Jensen Huang, who told him: "Stop doing coding. Come build 'the American DeepSeek,' an American open-source model. I'll give you money and resources." Reflection then made a complete 180-degree pivot.

As a result, the US capital market has seen structures it had never seen before: two valuation tiers within the same funding round. Investors with good relationships who came in early get the lower tier; deep-pocketed strategic players like Nvidia and latecomers are squeezed into the higher tier. This structure has recently begun to appear in China as well.

But no matter how much Nvidia tries to control the allocation, it can't control something that doesn't exist.

Protests against data centers are escalating across the United States. Currently, approximately 100 data center projects nationwide are facing obstacles, with 40 likely to be cancelled altogether. Maine recently passed a bill completely banning data center construction. In one town that approved a $6 billion data center project, half of the council members were voted out overnight, and the new council's sole objective was to overturn the decision.

The reason why computing power is insufficient is not because the products are not good enough or there are not enough users, but because the physical world cannot keep up with the appetite of the digital world.

This is another level of "falling behind".

Silicon Valley's valuation system is being rewritten.

Let's look at a number first.

The US GDP is approximately $30 trillion. OpenAI and Anthropic currently each have a run rate (annualized revenue) of around $30 billion, meaning each company already accounts for 0.1% of the US GDP. If both reach $100 billion by the end of the year, plus cloud services and other AI revenue, AI will account for approximately 1% of the US GDP. From almost zero to 1%, it only took a few short years.

This speed is unprecedented. But strangely, the faster the growth, the less investors know how to price it – in the face of such rapid growth, Silicon Valley's valuation framework is collapsing.

I had several in-depth conversations with friends who work in the secondary market, and one recurring term was "re-rationalization" (the rational return of valuations).

In the past few years, the valuation logic for investing in AI has been based on future cash flow: it doesn't matter if you lose money today, I'm betting on your ARR in three or five years. But now, this framework is failing.

The problem lies in the most basic valuation model: DCF (Discounted Cash Flow). Normally, when performing DCF, you predict the cash flows for the next 10 years and then add a terminal value, which assumes the company will continue to operate stably and packages the remaining value into a single lump sum. Typically, the terminal value accounts for 70%-80% of the total valuation.

But now two things have changed at once. First, you can probably only forecast 3 years instead of 10, because no one can see what the industry will look like beyond 3 years (sometimes beyond even 1 year). Second, terminal value has become even harder to defend. It rests on the premise that the company will eventually operate stably; if AI may disrupt everything at any moment, the assumption of "stable operation" no longer holds.
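To see why this matters so much, here is a minimal sketch of the textbook DCF structure described above: explicit-period cash flows plus a Gordon-growth terminal value. All inputs are illustrative assumptions of mine, not any company's actual figures.

```python
# Minimal DCF sketch: PV of explicit cash flows + Gordon-growth terminal
# value. Inputs are illustrative assumptions, not real company figures.

def dcf_value(cash_flows, r, g):
    """Return (total present value, terminal value's share of the total)."""
    pv_explicit = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, 1))
    terminal = cash_flows[-1] * (1 + g) / (r - g)   # value at end of year n
    pv_terminal = terminal / (1 + r) ** len(cash_flows)
    total = pv_explicit + pv_terminal
    return total, pv_terminal / total

# Assume $100M of cash flow growing 15%/yr for 10 years,
# a 10% discount rate, and 3% perpetual growth thereafter.
flows = [100 * 1.15 ** t for t in range(10)]
value, share = dcf_value(flows, r=0.10, g=0.03)
print(f"value ≈ ${value:,.0f}M, terminal value ≈ {share:.0%} of it")
# -> value ≈ $3,115M, terminal value ≈ 64% of it
```

Even with these moderate inputs, the terminal value is about two-thirds of the total; stretch the growth assumptions and it reaches the 70%-80% the passage cites. Cut the forecast window from 10 years to 3 and drop the terminal value, and most of the valuation simply has nowhere to come from, which is exactly the failure being described.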

I was discussing an analogy with a friend who invests in the secondary market: companies not currently at the AI front line are like people waiting for a nuclear bomb to drop. You know they will be disrupted; you just don't know when. So your evaluation focus shouldn't be "what if they aren't disrupted," but "how quickly can they respond when they are." That is a completely different valuation logic.

SaaS was the first to be repriced by Wall Street. At Snowflake's 2023 valuation, it would have taken nearly 100 years of free cash flow to earn the price back; that valuation has since halved. ServiceNow and Workday are on the same trajectory, and this is just the beginning.

Conversely, the only companies truly suited to DCF valuation may be the leading large-model companies, because, relatively speaking, their future looks like steady growth in one direction. They won't be "bombed"; the question is how far their boundaries can expand.

In the past, startups used to recruit by saying, "The salary will be lower, but you'll get stock options that will be worth a lot of money in the future." But this approach is based on the premise that the company will still be around and valuable in 15 to 20 years. If that premise no longer holds, the most rational response from employees will be—"Don't give me stock options, just give me a cash raise."

This, in turn, will change the company's cost structure and financing logic.

The VCs are suffering too. In the past three to six months, almost every fund in Silicon Valley has invested in at least one neo lab: researchers out of renowned AI labs raising hundreds of millions of dollars on an idea. In hindsight, everyone feels it was a bit impulsive and a bit overpriced. So why did they invest anyway? Because if such a company actually succeeds, it will grow so fast that the entry valuation will look cheap.

An investor friend put it bluntly: it's either zero to 100 or zero to zero anyway. Rather than paying up for a Series A to earn "hard-earned money," better to gamble on a neo lab ticket with unlimited upside.

People used to think a dollar of ARR was a dollar of ARR, whether you built models, applications, or infrastructure. That equation has now been broken.

Valuation multiples are now lowest for vertical agents (around 5x ARR), higher for general agents (around 10x), and highest for models (20-30x; for example, Anthropic, at roughly $30 billion of ARR and an $800 billion valuation, trades at about 26.7x). A year ago I thought multiplying ARR by one uniform factor was enough to get a valuation; today that algorithm is completely wrong.
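As a back-of-the-envelope restatement of that tiering (the numbers are from the paragraph above; the code is just a sanity check):

```python
# Tiered ARR multiples as described above (illustrative figures).
multiples = {"vertical agent": 5, "general agent": 10, "frontier model": 26.7}
anthropic_arr_billions = 30
print(anthropic_arr_billions * multiples["frontier model"])  # -> 801.0 (~$800B)
```

The same $30 billion of ARR would be worth about $150 billion as a vertical agent and $300 billion as a general agent: what you build now matters as much as what you earn.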

Lime trees and the AI assassination list

Silicon Valley is going through a deep crisis of personal security.

During this trip to Silicon Valley, I repeatedly heard my friends seriously discussing the same thing: buying Bitcoin, building bunkers, and installing bulletproof glass in their homes. They weren't joking.

Lime trees have indeed become popular in Silicon Valley recently, because their branches carry 4-inch thorns that make anyone trying to climb over pay a price.

The Wall Street Journal even reported on a $15 million "fortress mansion": a ring of lime trees planted in concrete planters, behind them a moat, behind that a laser intrusion-detection system; the front door a 3-inch solid steel plate with 13 bolts; inside, a panic room with a 2,000-pound door. Even the landscaping is a defensive fortification.

Companies providing home security for CEOs are seeing their fastest growth since 2003, a trend that accelerated sharply after UnitedHealthcare's CEO was shot dead in Manhattan.

Then, gunshots rang out at the doorstep of the AI guru.

At 4 a.m. on April 11, a 20-year-old in a Champion sweatshirt who had flown from Texas to California stood in front of Sam Altman's $27 million mansion carrying a kerosene can, lit a Molotov cocktail, and threw it inside.

An hour and a half later, he appeared at OpenAI headquarters, picked up a chair, smashed the glass door, and shouted at the security guards, "I'm going to burn this place down and kill everyone inside."

The FBI found a document on him titled "Your Final Warning." It listed the names and home addresses of several AI company CEOs and investors.

Two days later, early Sunday morning, Altman's home was attacked again: a Honda sedan briefly stopped at the door, the passenger put his hand out of the window, fired a shot into the house, and then fled.

This is not an isolated incident. In late March, a large anti-AI protest took place in downtown San Francisco, with protesters holding signs reading "Stop the AI Race" and "Don't Build Skynet" and giving speeches outside the offices of Anthropic, OpenAI, and xAI. Senator Bernie Sanders warned in Congress, "Humanity may really lose control of this planet."

Friends at xAI say Musk is also deeply worried about being shot; it's an open secret in the industry.

The underlying fear is simple: if AI takes over most production and humans are no longer essential participants in the economy, then every past social contract about "contribute this much, receive that much" becomes void. What remains is a minimalist power structure: whoever controls the GPUs and the electricity controls everything. The social hierarchy isn't just stretched wider; it's flattened into two layers: a tiny minority on one side, everyone else on the other.

"The hottest campaign topic in the US presidential election two years from now will definitely be the relationship between AI and society. There might even be a Luddite movement in the AI ​​era."


Inflation in the US remains severe. In all my years living in California, I had never seen gas prices starting with a 7, above $7 a gallon, until now. This coincides with Citrini's Global Intelligence Crisis report at the end of February, which simulated an economic crisis that could arrive in 2028 from AI's "over-success"...


On the plane back to Beijing, I flipped through my notes from the past two weeks and found that I had been writing the same word the whole time: "Can't keep up".

YC can't keep up, Meta's code security rules can't keep up, xAI's management can't keep up, researchers can't keep up, computing power can't keep up, valuation frameworks can't keep up, and society's psychological tolerance can't keep up... to the point that Silicon Valley itself can't keep up with itself.

But what I want to end with is this: a friend at Anthropic mentioned that Dario Amodei said something internally: with AI's help, cancer has, in a sense, been conquered. Not that it has disappeared, but that it may become a chronic disease that doesn't kill you. The treatment cost is still too high, and it will take time to become widespread.

I'm not sure if Dario's statement that "cancer has been conquered" is too optimistic, but this time in Silicon Valley, the most common startup direction we saw was AI4S and AI for Biotech. Many people from large model companies don't understand medicine, but they want to use AI technology to change the industry.

In the past two weeks, I've witnessed so many instances of "falling behind," which is indeed anxiety-inducing. But if AI truly makes cancer a chronic disease within a few years and accelerates materials science by twenty years, then this "falling behind" may be the biggest acceleration in human development history.

My child turns two this year, and we may have a second next year. I can't even begin to imagine what the world will look like for their generation.

But I hope that in the world they grow up in, there will be more people who are healed by AI, rather than more Molotov cocktails and gunshots hurled at the doorsteps of AI practitioners.


In his 2008 essay *Cities and Ambition*, Paul Graham wrote: "While people in Silicon Valley have great respect for intelligence, the message Silicon Valley sends is: you should be more influential. That isn't quite the message New York sends. Influence matters in New York too, but New York most values 'billions of dollars,' even if you've merely inherited them. In Silicon Valley, apart from a few real estate agents, nobody cares about that. What matters in Silicon Valley is how much impact you have on the world. People care about Larry and Sergey not because of their wealth, but because they control Google, and Google affects almost everyone." Now AI has pushed that ethos to a new level.

LatePost columnist Meng Xing is a partner at Five Sources Capital and former COO of Didi Autonomous Driving. This is the first article in his series of AI investment observations, which he will continue to publish on LatePost.

Image source: Visual China
