While many experts have focused on the significance of drones in the new reality of warfare, the AI revolution is a much bigger deal. AI now enables drones to be far more impactful, helps select and prioritize battle targets, designs tactical operations and assesses results that go beyond iterative evaluations.
While AI is changing the battlefield space and the US has massive advantages, there are also significant risks that AI systems could be compromised by US adversaries and perhaps even by “friends.”
Recently AI has played an important role in several conflicts: the Gaza war (Operation Gideon’s Chariots); US-Israel operations against Iran (Operation Rising Lion); the capture of Nicolas Maduro and his wife in Venezuela (Operation Absolute Resolve); operations to locate and stop “rogue” oil tankers; and the Ukraine war, where AI is playing a major role.
If the US and Israel take action against Iran in the coming days, planning and operations would likely be supported by AI.
While the US and Israel are among the leaders in using AI to support military operations, other consequential players are emerging. China is well along in developing home-grown AI engines; Russia is using AI for drones and tactical operations; and US allies, especially the UK, France and Germany, are implementing AI in intelligence and military development.
Much of the intelligence integration in the Ukraine war is supported by major Western organizations with deep AI capabilities. One of them, Palantir Technologies, has emerged as the premier US company in big data and analytics, teamed with Nvidia and Anthropic.
In Israel, there is a blend of military units, especially Unit 8200, and private-sector players, including well-established companies such as Elbit Systems and startups such as Skyforce for edge-computing and AI-driven battlefield autonomy, Robotican for autonomous robotics and drones and Radiant Research Labs for zero-click intelligence-gathering tools.
OpenAI (GPT), Google (Gemini), Anthropic (Claude), and Meta (LLaMA) now control 32% of the world’s large language model (LLM) market. The Pentagon is now in a major dispute with Anthropic over its use of Claude in the capture of Maduro.
Anthropic has said it does not allow Claude to be used for military operations, despite securing a $200 million contract with the US Department of Defense (DoD) in July 2025 to develop AI for national security.
The number of AI engines, including specialized systems, is rapidly growing. There are roughly 65 to 70 major AI tools that have reached mass-market status in the US, including Perplexity for search, Midjourney for art and Grok for social media.
As of this year, there are over 62,000 AI-related startups globally, with the US holding the largest share, with roughly 25,000–30,000. One recent tally suggests there are now over 90,000 AI companies worldwide, though many are “wrappers” that use the Big Three’s engines to power their own specific services.
In the US, more than $108 billion has been invested so far in data centers, not including collateral investments in power generation and grid improvements. Private investment in US AI companies reached a record $109.1 billion, according to the 2025 AI Index Report from Stanford HAI.
Not counting the billions spent for new foundries and other high-end chip-related infrastructure via the CHIPS Act, the US government is investing billions every year in AI research and development and in specific applications.
Over time, these investments will reshape the federal government’s workforce, possibly eliminating tens of thousands of jobs. It will also change military ranks and roles. The Pentagon is investing at least $2 billion a year directly and tens of billions indirectly through weapons procurement, creating a future AI juggernaut that few competitors can hope to match.
This is bad news for Russia in particular, as it lacks both the infrastructure and investment to come close to match US spending and capabilities.
Russia uses AI in drones such as the Geran-2 and the Zala Lancet. These drones are powered by older type Nvidia-made chip sets, which Russia acquires on the gray market. In early 2026, Russia also introduced the “Svod” AI system, a tactical situational awareness platform.
It uses AI to aggregate data from satellites and drones into a single map for commanders. Crucially, it is designed to “model scenarios,” offering Russian officers pre-calculated tactical options. Svod was developed as a collaborative effort between Russia’s Ministry of Defense’s research institutes and civilian software engineers tasked with digitizing the Russian military’s “kill chain.”
It is designed to solve Russia’s historical “command bottleneck” by aggregating data from satellites, drones, and reconnaissance reports into a single digital “operational picture.” It is built on the domestically developed Astra Linux operating system to ensure “technological sovereignty” and reduce dependence on Western software.
For its primary function—identifying targets in drone feeds and satellite imagery—Svod utilizes the YOLO (You Only Look Once) framework. This is the global standard for real-time object detection and is highly efficient for battlefield hardware. To process text-based intelligence reports and “reconnaissance summaries,” developers have adapted models like Mistral (French) and LLaMA (Meta).
These are embedded in on-premise, air-gapped environments to ensure data doesn’t leak back to Western servers. There is increasing evidence that Russian developers are incorporating Qwen, developed by China’s Alibaba. Qwen’s architecture is particularly adept at the complex coding and logic tasks required for situational modeling.
On the front lines, Russian AI runs on “tactical tablets” and small, ruggedized computers. Due to sanctions, these often rely on smuggled or dual-use Chinese chips rather than specialized Russian silicon.
Complex scenario modeling for predicting where a Ukrainian counterattack might occur is handled by regional command centers using server clusters that utilize diverted high-end GPUs such as Nvidia H100s/A100s obtained through third-party intermediaries.
Russia faces major hurdles in applying battlefield AI and net-centric warfare, particularly for communications. Ukraine has a distinct advantage because it uses Elon Musk’s Starlink for the backbone of its communications, a system that is currently very difficult for Russia to jam or disrupt.
Russia’s earlier access to Starlink has been severed, leaving it with ad hoc communications that are far more vulnerable to disruption, far less reliable and lacking the bandwidth Starlink provides to Ukraine. While Russia is trying to figure out alternatives, it will be some time before a workaround is found, if ever.
Meanwhile, Russia is reportedly focusing on rougher, less elegant AI solutions, using external support, mainly Chinese, on projects. Over time, there is little hope the Russians can remain competitive unless they can work out a modus vivendi with the US, much as China appears to have done when it comes to trading off access to rare earths for Nvidia products.
China is far ahead of Russia in the AI space, although it is still playing catch-up with the US. China’s wild card is its lack of modern battlefield experience. Thus, China will have to build its AI systems on estimates of battle effectiveness rather than real-world data, unless, of course, Chinese intelligence penetrates Western systems.
Little is known about the security of AI machines. The Pentagon and US intelligence agencies rely on commercial AI products, especially for real-time updates on threats and countermeasures. The recent Pentagon debacle on cloud computing systems, where it allowed Chinese engineers to provide “routine” service, suggests that, so far at least, the downside risks of relying on commercial systems are not part of Pentagon thinking.
Nor does the Defense Department have much free choice, as it does not own AI engines and outsources their use, either directly or through defense contractors. The security of AI could become the Achilles’ heel of US AI systems. Certainly, AI machines and communications will become a major target for America’s adversaries.
For example, in February 2026, Google’s Threat Intelligence Group reported a surge in “model extraction” attacks. Adversaries, notably from China, used automated scripts to send hundreds of thousands of prompts to Gemini to reverse-engineer its internal logic and “steal” the proprietary reasoning capabilities for their own domestic models.
A 2025 Gartner study revealed that 32% of organizations reported their AI applications had been targeted via malicious prompts. State-sponsored hackers use these “jailbreaks” to force AI agents to leak sensitive data or bypass safety filters to generate malicious code.
In late 2025, reports surfaced that a Moscow-based network dubbed “Pravda” successfully “infected” several popular AI chatbots. By flooding the internet with specific narratives, they ensured that when users asked about certain geopolitical events, the AI would repeat Russian propaganda roughly 33% of the time.
Attacks are not limited to Russia and China. Iran and North Korea have joined the fray, and other “friends” may also seek commercial advantage by attacking and exploiting AI applications or simply by using them for their own military, economic and social operations.
Given AI’s national security significance, both for military use and economic security, much greater attention to AI system security is not only warranted but essential.
Stephen Bryen is a former US deputy undersecretary of defense and special correspondent at Asia Times. This article was first published on his newsletter Weapons and Strategy and is republished with permission.

Microslop uses 30% of its code done by AI. Windows 11 MALWARE is JUNK and a catastrophic failure – the most buggy trash ever released by them from all reviews I read about Western customers criticisms. I guarantee you it has everything to do with over reliance on AI.
AI is overrated. Look at the genocide in Gaza –
The Gospel, Lavendar Where’s Daddy were AI killing algorithms deployed with Palantir input. Not very intelligent.
Sloppy code in American products and systems will blow up with terrible service and quality. Bad AI code is like a ticking timebomb.
The Americans AI bubble will pop. They went all in on a pipedream.