72 Hours, Triple Crisis: Anthropic's Soul is Being Auctioned Off

The AI company Anthropic has faced mounting pressure on multiple fronts:

  • The Pentagon is pressuring it to allow its AI model Claude to be used for military purposes, or face contract cancellation and blacklisting.
  • A new safety policy release deleted the company's key commitment to pause training, triggering a brand crisis.
  • Elon Musk accused it of large-scale training-data theft, pointing to the hefty settlement it had already paid.
  • Anthropic accused Chinese companies of "distillation attacks," but has been criticized for double standards.
  • The company is valued at $380 billion, yet its safety narrative is losing value, forcing an identity compromise.

Written by: Ada, Deep Tide TechFlow

Tuesday, February 24. Washington, D.C., Pentagon.

Anthropic CEO Dario Amodei sat across from Defense Secretary Pete Hegseth. According to multiple media outlets, including NPR and CNN, citing sources familiar with the matter, the atmosphere of the meeting was "polite," but the content was anything but.

Hegseth gave him an ultimatum: lift Claude's restrictions on military use by 5:01 p.m. Friday and allow the Pentagon to use it for "all legitimate purposes," including autonomous weapons targeting and mass domestic surveillance.

Otherwise, the Pentagon would cancel the $200 million contract, invoke the Defense Production Act to forcibly requisition the company's equipment, and designate Anthropic a "supply chain risk," effectively blacklisting it alongside hostile entities from Russia and China.

On the same day, Anthropic released the third version of its Responsible Scaling Policy (RSP 3.0), quietly removing one of the company's core commitments: that it would not train more powerful models if it could not guarantee that safety measures were in place.

On the same day, Elon Musk posted on X: "Anthropic stole training data on a massive scale, that's a fact." A Community Note appended to the post cited reports that Anthropic had paid a $1.5 billion settlement for using pirated books to train Claude.

Within 72 hours, this self-styled safety-first AI company played three roles at once: safety martyr, accused intellectual-property thief, and holdout against the Pentagon.

Which one is true?

Perhaps they are all of them.

The Pentagon's "Obey or Get Out" Strategy

The first layer of the story is very simple.

Anthropic was the first AI company to receive classified access from the U.S. Department of Defense. The contract it received last summer was capped at $200 million. OpenAI, Google, and xAI subsequently each secured contracts of similar size.

According to Al Jazeera, Claude was used in a U.S. military operation this January, reportedly connected to the capture of Venezuelan President Maduro.

However, Anthropic drew two red lines: no fully autonomous weapons targeting, and no mass surveillance of U.S. citizens. The company argues that AI is not yet reliable enough to control weapons, and that no laws or regulations currently govern its use in mass surveillance.

The Pentagon is not buying it.

White House AI advisor David Sacks publicly accused Anthropic on X last October of “using fear as a weapon to engage in regulatory capture.”

Its competitors have already caved. OpenAI, Google, and xAI have all agreed to let the military use their AI in "all legal scenarios." Musk's Grok was approved for access to classified systems just this week.

Anthropic was the last one standing.

As of press time, Anthropic stated in its latest press release that they have no intention of backing down. However, the 5:01 PM deadline on Friday is fast approaching.

An anonymous former Justice Department and Defense liaison officer expressed bewilderment to CNN: "How can you simultaneously declare a company a 'supply chain risk' and force that company to work for your military?"

A good question, but it's not on the Pentagon's radar. What the Pentagon cares about is simple: either Anthropic compromises, or it faces coercive measures and becomes a Washington outcast.

"Distillation Attack": An Accusation That Backfired

On February 23, Anthropic published a strongly worded blog post accusing three Chinese AI companies of launching an "industrial-grade distillation attack" against Claude.

The accused are DeepSeek, Moonshot AI, and MiniMax.

Anthropic alleges that they used 24,000 fake accounts to initiate more than 16 million interactions with Claude, specifically targeting and extracting Claude's core capabilities in agent reasoning, tool invocation, and programming.

Anthropic characterized this as a national security threat, claiming that distilled models are "unlikely to retain safety guardrails" and could be used by authoritarian governments for cyberattacks, disinformation campaigns, and mass surveillance.

The narrative is perfect, and the timing is perfect.

This comes just after the Trump administration eased chip export controls to China, and just when Anthropic needed to find ammunition for its lobbying efforts on chip export controls.

But Musk fired back: "Anthropic stole training data on a massive scale and paid billions of dollars to settle for it. That's a fact."


Tory Green, co-founder of AI infrastructure company IO.Net, said: "You train your own model with data from the entire internet, and then others use your public API to learn from you, and that's called a 'distillation attack'?"

Anthropic calls distillation an "attack," but the practice is commonplace in the AI industry: OpenAI used it to compress GPT-4, Google uses it to optimize Gemini, and Anthropic itself does it too. The only difference this time is that Anthropic is the one being distilled.

According to Erik Cambria, an AI professor at Nanyang Technological University in Singapore, who spoke to CNBC, "The line between legitimate use and malicious exploitation is often blurred."

Ironically, Anthropic had just paid a $1.5 billion settlement for using pirated books to train Claude. It trained its model on data scraped from the entire internet, then accused others of using its public API to learn from it. That isn't just a double standard; it's a triple standard.

Anthropic set out to play the victim, but ended up exposed as a defendant.

Removing the Safety Commitment: RSP 3.0

On the same day that Anthropic confronted the Pentagon and clashed with Silicon Valley, it released the third version of its Responsible Scaling Policy.

In a media interview, Anthropic Chief Scientist Jared Kaplan stated, "We feel that stopping the training of AI models does no good for anyone. In the context of rapid AI development, making unilateral commitments... while competitors are moving at full speed, makes no sense."

In other words: if others won't play by the rules, we won't pretend to either.

At the heart of RSP 1.0 and 2.0 was a hard commitment: if a model's capabilities outpaced its safety measures, training would be paused. That commitment earned Anthropic a unique reputation in the AI safety community.

But it was removed in version 3.0.

Instead, a more "flexible" framework has been adopted, separating safety measures Anthropic can implement on its own from safety recommendations that require industry-wide collaboration, with a risk report issued every three to six months for review by external experts.

Sounds very responsible?

Chris Painter, an independent reviewer from the nonprofit organization METR, commented after reviewing an early draft of the policy: "This shows that Anthropic believes it needs to move into 'triage mode' because the methods for assessing and mitigating risks are not keeping pace with the growth in capabilities. This further demonstrates that society is not prepared for the potentially catastrophic risks of AI."

According to TIME, Anthropic spent nearly a year internally discussing the rewrite, which was unanimously approved by CEO Amodei and the board. The official explanation is that the original policy was designed to promote industry consensus, but the industry simply didn't keep up. The Trump administration has adopted a laissez-faire approach to AI development, even attempting to repeal relevant state regulations, and federal AI legislation remains a distant prospect. In 2023, establishing a global governance framework still seemed possible; three years later, that door has clearly closed.

An anonymous researcher who has long followed AI governance put it more bluntly: "RSP is Anthropic's most valuable brand asset. Removing the promise to pause training is like an organic food company quietly tearing the word 'organic' off the packaging and then telling you that their testing is now more transparent."

Identity Disconnect Under a $380 Billion Valuation

In early February, Anthropic closed a $30 billion funding round at a $380 billion valuation, with Amazon as the anchor investor. Its annualized revenue has reached $14 billion, growing more than tenfold each year for the past three years.

Meanwhile, the Pentagon threatened to blacklist the company, Musk publicly accused it of data theft, and its core safety commitment was removed. Mrinank Sharma, Anthropic's head of AI safety, wrote on X after resigning: "The world is in danger."

A contradiction?

Perhaps contradiction is in Anthropic's DNA.

The company was founded by former OpenAI executives who worried that OpenAI was moving too fast and taking safety too lightly. They then started their own company to build more powerful models even faster, while telling the world how dangerous those models were.

The business model can be summarized in one sentence: We are more afraid of AI than anyone else, so you should pay us to build AI.

This narrative worked perfectly in 2023-2024. AI safety was a hot topic in Washington, and Anthropic was its most popular lobbyist.

In 2026, the winds changed.

"Woke AI" has become a line of attack, state-level AI regulatory bills have been blocked by the White House, and although California's SB 53, which Anthropic supported, has been signed into law, federal legislation is entirely absent.

Anthropic's safe-haven strategy is sliding from a "differentiation advantage" into a "political liability."

Anthropic is playing a complex balancing act: it needs to be "safe" enough to maintain its brand, yet "flexible" enough to avoid being abandoned by the market and the government. The problem is, the room for maneuver on both ends is shrinking.

How Much Is a Safety Narrative Still Worth?

When you look at the three things together, the picture becomes clear.

Accusing Chinese companies of distilling Claude is part of a lobbying narrative aimed at strengthening chip export controls. The safety pause commitment was removed to avoid falling behind in the arms race. Rejecting the Pentagon's demands on autonomous weapons preserves a last shred of moral standing.

Each step has its own logic, but they also contradict each other.

You can't claim that Chinese companies "distilling" your models will endanger national security while simultaneously removing promises to prevent your own models from spiraling out of control. If the models are truly that dangerous, you should be more cautious, not more aggressive.

Unless you are Anthropic.

In the AI industry, identity isn't defined by your statements; it's defined by your balance sheet. Anthropic's "safety" narrative is essentially a brand premium.

In the early stages of the AI arms race, that premium was valuable: investors paid higher valuations for "responsible AI," governments gave the green light to "trustworthy AI," and customers paid extra for "safer AI."

But by 2026, this premium is evaporating.

The choice Anthropic faces is no longer whether to compromise, but whom to compromise with first. Compromising with the Pentagon damages its brand. Compromising with competitors undermines its safety promises. Compromising with investors means conceding to both sides at once.

Anthropic will release its answer at 5:01 PM on Friday.

But whatever the answer may be, one thing is certain: Anthropic, which once stood on the premise that "we are different from OpenAI," is becoming just like everyone else.

The end of an identity crisis is often the disappearance of one's identity.

Author: Deep Tide TechFlow (深潮TechFlow)

Opinions belong to the column author and do not represent PANews.
