OpenAI, Microsoft, Google, and other AI companies are orchestrating a war together with their "kill factories."

  • The Epic Fury operation in February 2026 marked the first "AI full-stack war," testing AI in real combat to compress sensor-decision-shooter cycles.
  • OpenAI moved from its earlier ethical reservations to a major defense contract with the U.S. Department of War, deploying GPT models for intelligence and decision support under defined red lines.
  • Anthropic was excluded from defense deals over its stricter principles and labeled a "supply chain risk."
  • Microsoft Azure and Google Project Nimbus provided crucial cloud infrastructure, enabling AI tools and data processing for military operations.
  • Israel's AI systems like Lavender automated target identification, raising ethical concerns about civilian casualties and algorithm bias.
  • The emergence of an AI-Cloud-Defense complex reshapes market dynamics, with companies balancing profits and ethical risks.
  • Future conflicts will increasingly rely on AI and cloud technologies, highlighting urgent regulatory and accountability challenges.

Author: Anita, Senior Executive, Sentient Asia Pacific

The Iran war brought large-scale models from the laboratory to the battlefield all at once.

Operation Epic Fury at the end of February 2026 was not just a joint air strike; it was more like an AI stress test in a live war zone. Whoever could compress the "sensor-decision-shooter" loop to minutes or even seconds would hold the pricing power for the next round of geopolitics.

I. Epic Fury: The First "AI Full-Stack War"

In this operation, US and Israeli official sources claimed that the concentrated strikes against key Iranian military and nuclear facilities "achieved strategic success" and repeatedly hinted that Iran's Supreme Leader Khamenei was very likely killed in an attack on an underground command facility north of Tehran. Iran's persistent refusal to clearly confirm his fate, however, makes this "decapitation" read more like a contest over power and narrative authority.

From an operational perspective, the defining characteristic of Epic Fury is not its duration, but its density: a high-intensity air campaign lasting over ten days, involving drone swarms, special operations, and cyber warfare, all underpinned by a highly software-driven operational stack—Palantir's battlefield ontology and digital twin platform, the intelligence fusion system of the US defense agencies, Israel's automated target generation tools, and the new roles of cutting-edge large-scale model companies like OpenAI.

This war marks a symbolic turning point: from here on, "AI involvement in military decision-making" is no longer a buzzword in Pentagon PowerPoint decks, but a real source of cash flow and political risk that can no longer be sidestepped in market, regulatory, and ethical debates.

II. OpenAI: From an "Ethical Declaration" to the Department of Defense's Most Expensive SaaS Subscription

In just two or three years, OpenAI's public stance has undergone a remarkable transformation: from distancing itself from any "military use" to acknowledging that it can support national security and defense projects so long as its safety principles are met, it has secured what may be the most sensitive major client contract of our time.

Around February 27, 2026, Sam Altman announced that his company had reached an agreement with the U.S. Department of Defense to deploy GPT-series models on a classified network for "defense-related scenarios" such as intelligence analysis, translation, and combat simulation. In some public materials and media reports, the department was deliberately referred to as the "Department of War," a symbolic reversion to more offensive warfare language, although the organization's legal name remains the Department of Defense.

The publicly reported "red lines" can be roughly summarized into three categories:

  • No participation in large-scale surveillance within the United States;

  • The use of force must keep a "human in the loop" and must not be directly driven by fully autonomous lethal weapon systems;

  • In high-risk decision-making, human oversight and accountability must be retained.

These principles represent OpenAI's ethical stance to the outside world and also serve as bargaining chips in contract negotiations: the signal to Washington is that the company is willing to cooperate, but only "within a controlled scope."

What role do these models play in real-world scenarios like Epic Fury? Public information offers only a sanitized description: assisting intelligence processing, analyzing complex data, and helping decision-makers form a situational picture more quickly.

However, from a technical perspective, feeding massive amounts of satellite imagery, signals intelligence, and social media streams into large models, and then having them triage, predict movement paths, and assess risks for potential "high-value targets," is essentially very close to a "battlefield brain."

For Wall Street, the significance of this agreement is straightforward.

After Anthropic was labeled a "supply chain risk" by the Pentagon for insisting on tougher red lines, OpenAI took this multi-hundred-million-dollar defense contract on terms of "limited ethical compromise, huge commercial gain," locking in a position that competitors will find extremely difficult to shake.

III. Anthropic: A "Principled" Firm Hovering on the Edge of the Defense Budget

In stark contrast to OpenAI's "pragmatism" is Anthropic's predicament: it was once one of the most valuable cutting-edge model providers in the Pentagon's eyes, but because it refused to back down on its red lines, it was expelled from the entire ecosystem in brutal fashion.

Multiple media outlets reported that Anthropic took a hard line on two points during negotiations with the Department of Defense:

  • Claude does not participate in fully autonomous weapon systems;

  • Claude is not involved in the mass surveillance and profiling of U.S. citizens.

The Pentagon's position, by contrast, was closer to: the model supplier should not get to pre-define which uses are legitimate.

After negotiations broke down, Defense Secretary Pete Hegseth announced that Anthropic would be designated a national security "supply chain risk" and required all contractors doing business with the military to migrate off Claude within six months. This label, previously reserved mainly for companies from rival countries such as Huawei, was applied to a U.S.-based AI startup for the first time, sending a chill through Silicon Valley.

The Pentagon's internal assessment indicated that fully replacing a large-model stack embedded in classified systems could take months, meaning the ban's implementation window overlapped heavily with Epic Fury's timeline.

Given these technical realities, Claude likely continued to participate in U.S. national security work in some form before being "ousted" by executive order. Yet no one was willing to clarify this connection at the hearing, a typical "gray area" of the modern military-industrial complex.

The capital market has learned a simple yet dangerous lesson: when the "safety red line" conflicts with "maximizing defense orders," the company more willing to negotiate is often the safer investment, while companies that stick to their principles may be branded a "supply chain risk" overnight, prompting investors to press the "revaluation" button.

IV. The True Central Nervous System: Microsoft, Google, and the "Cloud-Based Military-Industrial Complex"

If OpenAI and Anthropic are the "brains" in this war, then Microsoft and Google are the true central nervous system:

Without their clouds, all the large models and AI-native tools would remain just PowerPoint presentations.

Microsoft Azure: From Office Cloud to Kill Chain Operating System

According to investigations by the AP and multiple organizations, since October 2023 the Israeli military's use of machine-learning tools on Azure surged dozens of times over within just a few months, by some counts as much as 64-fold, with overall AI usage approaching a 200-fold increase.

At the same time, the volume of data stored reached a scale comparable to the holdings of the Library of Congress.

This computing power is used to transcribe and translate large volumes of communications, process signals intelligence from surveillance infrastructure, and work in conjunction with local Israeli AI systems (such as Lavender and Gospel) to automatically generate target lists and risk assessments, significantly increasing the throughput of the “target production line.”

Although Microsoft later scaled back its services to some Israeli military units (especially those related to surveillance) under pressure from public opinion and its own employees, its core cloud and AI contracts kept running, bringing the company large commercial orders at a considerable reputational cost.

Google Project Nimbus: The Wartime Cloud with the Highest Political Risk Premium

Starting in 2021, Google and Amazon, through Project Nimbus, provided the Israeli government and military with approximately $1.2 billion worth of unified cloud infrastructure encompassing compute, storage, and machine-learning tools. Employees, academics, and human rights organizations have consistently warned that Nimbus's general-purpose cloud and AI capabilities are highly suitable for surveillance and military target selection, despite Google's repeated insistence that the contract "does not include offensive military use."

By the time of Epic Fury, it was widely believed that cloud platforms like Nimbus were the key computing foundation supporting the Israeli military's complex target planning, battlefield simulation, and real-time intelligence fusion, though the specific call paths and case details remain classified.

From a risk perspective, this means Google is paying an elevated political risk premium in exchange for stable revenue from Middle Eastern security clients, while the protests and resignations inside the company over the project remind investors that this is not a business that can simply be treated as an ordinary enterprise cloud contract.

V. Israel's AI Kill Factory: The Portability of Lavender Logic

To understand how AI is changing the battlefield, one might start with the most controversial Israeli systems: Lavender, Gospel, and Where's Daddy.

An investigation by +972 Magazine and Local Call found that:

  • "Lavender" conducted behavioral and relationship mapping analysis on almost all adult men in Gaza, assigning each person a "suspected militant score" of 1-100, and quickly identified as many as 37,000 targets suspected of being members of armed groups.

  • "Gospel" focuses on buildings and infrastructure, automatically marking buildings deemed to be used for military purposes to create a bombing list that can be consumed in bulk by the air force;

  • "Where's Daddy" is responsible for optimizing the time dimension: tracking when a listed target returns home and triggering an attack when they are at home with their family—this greatly increases the probability of a "successful kill," while also putting family members and neighbors at high risk.

In an interview, a frontline Israeli intelligence officer admitted that human review of targets recommended by Lavender often amounted to no more than a few dozen seconds of perfunctory box-ticking.

Human rights organizations and UN experts have described the system as a "highly automated mass assassination factory," pointing to its structural tendency to amplify algorithmic bias, compress the space for human judgment, and raise the risk of civilian casualties.

It is important to emphasize that public reporting links this system far more explicitly to the Gaza war, while officials have long stayed silent on whether it was applied in the Iranian theater.

However, from the standpoint of technological portability, once sufficient communications data, location trajectories, and social graphs inside Iran were obtained, "translating" Lavender's logic onto Tehran's power elite would be far from unimaginable, which is why many analysts see Epic Fury as a spillover experiment of the "Gaza-style algorithmic kill factory" onto the capital of a sovereign state.

VI. Market and Regulation: Pricing Power of the AI-Cloud-Defense Complex

Piece these fragments together and you get a picture very unlike the usual "Silicon Valley" story:

  • At one end, large-model companies represented by OpenAI are willing to make limited compromises on red lines and have quickly gained a foothold in the defense budget;

  • At the other end, Anthropic, which insisted on stricter safety principles, was pushed out by the Secretary of Defense under the banner of "supply chain risk," teaching the entire industry a practical lesson: don't confront your sole buyer head-on;

  • At the bottom layer, cloud giants like Microsoft and Google use GPU clusters and classified cloud networks to build the "operating system" of modern warfare, absorbing the vast majority of wartime AI cash flow while bearing ever higher reputational and regulatory risks.

From an asset pricing perspective, this is no longer simply a binary opposition of "tech stocks vs. defense stocks," but a new AI-Cloud-Defense complex:

  • Tactically, low-cost drone swarms, automated target production, and AI decision-making systems are eroding traditional great-power deterrence, making expensive fifth-generation fighters and carrier battle groups look like the capital-intensive assets of a previous generation.

  • Industrially, large-model and cloud vendors have secured, through the military, counter-cyclical cash flows that only a handful of players can enjoy, entering a profit black box that "security and confidentiality" make difficult to ever fully open up.

  • Politically, when "who is more aligned with the national security agenda" becomes the decisive variable in winning key contracts, companies' adherence to ethical red lines will be systematically discounted, and this incentive structure will be quietly remembered by every future entrepreneur and investor.

The Iranian battlefield may be just the prologue. Whether the next conflict breaks out in the Taiwan Strait, Eastern Europe, or on another patch of Middle Eastern soil, what truly sets the pace of war will no longer be merely the number of tanks and the caliber of artillery, but the models trained on petabytes of classified data and the clouds wired to countless racks of GPUs.

The question is whether, before we outsource ever more of the kill chain to a few large-model and cloud companies, global regulators and democracies still have time to seriously answer: when algorithmic recommendations turn into strings of strike coordinates in real combat, who is responsible for those decisions?
