Who Is Behind the AI Propaganda War and Why It Matters Now

We have entered an era where synthetic content is no longer a fringe concern. It is an active instrument of geopolitical warfare, deployed by state actors, non-state networks, and foreign influence operations against democracies, military alliances, and civilian populations simultaneously.

The World Economic Forum's Global Risks Report 2026 placed mis- and disinformation among the top short-term global risks, alongside geoeconomic confrontation and societal polarization. It is one of the few risks that remains severe over both the two- and ten-year horizons and is the risk that catalyses or worsens every other risk on the list.

This is no longer a content moderation problem. It is a national security crisis. And it is accelerating faster than governments, platforms, or citizens are equipped to handle.

Who Is Producing AI-Generated Disinformation at Scale

State Actors and the Authoritarian Playbook

The production of AI propaganda is not random. It is coordinated, state-funded, and strategically targeted.

The Iran-Russia-China-North Korea axis shares technological best practices and amplifies mutually beneficial anti-Western propaganda. The advancement of AI tools, particularly the advent of AI agents that can act without human oversight, has made producing synthetic disinformation easier than ever.

The ongoing U.S.-Israeli military campaign against Iran, which began in early March 2026, has inaugurated a new era in conflict communication. Research firm Cyabra documented a pro-Iran disinformation campaign that generated over 145 million views and more than 9 million interactions across social media platforms within days. The campaign deployed tens of thousands of fake accounts synchronized to disseminate AI-generated deepfakes portraying Iran as victorious.

The New York Times identified more than 110 unique deepfakes conveying a pro-Iran message through battlefield images, missile strike depictions, and war footage in the span of two weeks alone.

Who Operates the Infrastructure of Fake Narratives

The GoLaxy revelations of September 2025 provided a critical window into how this infrastructure operates. Documents leaked from the Beijing-based firm GoLaxy revealed a "Smart Propaganda System," an army of AI personas engineered to look and think like real people, using millions of data points to build psychological profiles of their targets.

NewsGuard reports that the number of AI-generated news sites has ballooned to over 2,089 sites across 16 languages, operating with almost no human oversight. In August 2025, leading chatbots relayed false claims 35 percent of the time, up from 18 percent a year earlier.

Who Gets Targeted by AI Deepfakes and Disinformation

Democratic Elections Under Direct Attack

Elections represent the most concentrated point of vulnerability in any democratic system, and AI-generated disinformation has been deployed against them with increasing precision.

In Ireland's 2025 presidential election, a deepfake video released just days before polling day falsely depicted the eventual winner withdrawing his candidature, complete with fake footage of national broadcasters "confirming" the news. The Netherlands likewise saw roughly 400 AI-generated images used to attack political opponents.

Ahead of Romania's May 2025 presidential election, scammers used Facebook to distribute deepfake videos showing several presidential candidates promoting a non-existent government investment opportunity. A Russian-funded disinformation network was also uncovered ahead of Moldova's parliamentary election, with a group paying people to post pro-Kremlin propaganda using ChatGPT for guidance on crafting satirical, engagement-optimized messages.

In Poland, shortly after the first round of voting in the May 2025 presidential election, AI-generated images appeared in four of 23 viral videos spreading disinformation that alleged voter fraud.

Who Suffers When Nuclear Powers Fight in a Disinformation Fog

The four-day military crisis between India and Pakistan in May 2025 became significantly more dangerous when both countries integrated disinformation and fake images into their conventional warfighting. Even reputable journalists, government officials, and politicians were misled by fabricated content shared as authentic battlefield footage.

Misinformation circulating on social media alleged that India had attacked Pakistan's Kirana Hills nuclear site and caused radioactive leakage, and a doctored letter about the supposed incident went viral. A public statement by the Indian Defense Minister the next day led many to believe the incident had occurred, even though the IAEA later stated there had been no radioactive leakage.

In a conflict between two nuclear-armed states, synthetic disinformation is not an embarrassment. It is a potential trigger for catastrophic miscalculation.

Who Profits from AI Propaganda and How the Economy Works

The AI disinformation economy is not exclusively ideological. Much of it is financial.

Efforts to spread disinformation have long been tied to financial motives: users exploit the virality of such content on social media platforms to generate views and convert clicks into passive income. In 2022, the 40 U.S. websites most responsible for spreading election disinformation generated an estimated $42.7 million in advertising revenue.

AI-generated disinformation has slowly created a shadow economy in elections. In the buildup to the Irish presidential election, a library of deepfakes featuring more than 120 images of Irish politicians was uploaded to a marketplace for AI-generated content.

These effects are further exacerbated by the incentive structures of major technology platforms, whose algorithms reward engagement. Outrage spreads more quickly because it triggers immediate sharing before fact-checking can occur.
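The structural problem described above can be sketched in a few lines of code. The scoring weights and the fact-check penalty below are hypothetical illustrations, not any platform's actual formula; the point is that any share-weighted ranking rewards content that provokes immediate reactions, and a penalty applied hours later arrives after the distribution spike:

```python
# A minimal sketch of engagement-weighted feed ranking (hypothetical weights).
# Shares weigh most heavily because each share extends a post's reach; the
# fact-check penalty is modeled as arriving only after the initial spike.

from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    flagged: bool  # fact-check flag, typically applied hours after posting

def engagement_score(p: Post) -> float:
    """Weighted engagement score; flagged posts are demoted, but late."""
    score = 1.0 * p.likes + 5.0 * p.shares + 2.0 * p.comments
    return score * (0.2 if p.flagged else 1.0)

# An outrage-bait post one hour in, not yet fact-checked...
outrage = Post(likes=300, shares=400, comments=250, flagged=False)
# ...outranks a sober report with comparable likes but far fewer shares.
sober = Post(likes=350, shares=40, comments=60, flagged=False)

assert engagement_score(outrage) > engagement_score(sober)
```

Under these assumed weights the outrage post scores 2800 against the sober report's 670, a gap driven almost entirely by the share term, which is exactly the behavior the paragraph describes.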

Who Is Failing to Stop It and Where Accountability Breaks Down

Governments Cut the Agencies Built to Fight This

Significant cuts to the FBI's Foreign Influence Task Force, the State Department's Global Engagement Center, and the Foreign Malign Influence Center at the Office of the Director of National Intelligence have greatly diminished the U.S. government's ability to counter foreign influence operations.

Who Is Regulating AI Propaganda and How

The EU AI Act reflects a shift toward treating disinformation as a governance issue rather than a content moderation task. Article 50 requires labelling of AI-generated and deepfake content, enforceable from August 2026 with fines up to 6 percent of global revenue.

The European Commission fined X 120 million euros in late 2025 for breaching Digital Services Act transparency rules, signaling that regulatory patience is exhausted but that enforcement mechanisms remain insufficient for real-time crisis management.

In the United States, the TAKE IT DOWN Act of May 2025, the first federal law directly restricting harmful deepfakes, established criminal penalties for the non-consensual distribution of synthetic intimate images, though its scope remains narrow relative to the national security dimensions of deepfake warfare.

Who Can Detect AI Deepfakes and What the Technical Reality Is

Experts warn that "AI-generated deepfakes have crossed a critical threshold." Earlier tell-tale glitches have been eliminated, and the technology is now accessible to anyone with a smartphone.

Deepfake attacks occurred every five minutes in 2024, and digital document forgery rose by 244 percent in a single year. The erosion of the "seeing is believing" standard has produced the "liar's dividend": a politician caught on video can simply claim the footage is a deepfake.

AI simply supercharges the reach and realism of disinformation efforts. These campaigns thrive on emotionally charged content, visual ambiguity, and rapid dissemination through trusted figures or verified institutions.