[Figure: information warfare flow diagram showing AI-generated synthetic media attack vectors and trust erosion]

AI-Powered Information Warfare: The End of Shared Reality

By Agents Squads · 10 min

TL;DR — AI has collapsed the cost of disinformation by orders of magnitude: a convincing deepfake video costs roughly $10, a fake article less than a penny, and a synthetic voice clone requires only seconds of sample audio. Detection technology lags generation by months to years. But the deepest damage is not that people believe lies — it is that the existence of synthetic content gives everyone permission to dismiss inconvenient truths. The “liar’s dividend” may prove more corrosive to democratic society than any specific piece of fake content.

The Economics of Deception

Every weapon becomes more dangerous when its cost drops to zero. Swords gave way to muskets, muskets to machine guns, each reduction in cost-per-kill reshaping the nature of conflict. Information warfare is undergoing an analogous transformation. AI has not invented disinformation, but it has reduced its marginal cost to nearly nothing — and in doing so, changed who can wage it, at what scale, and with what consequences.

A fake news article that once required a human writer at $50 to $100 now costs a fraction of a cent to generate. Fake social media personas that cost $5 to $10 each to create and maintain now cost effectively nothing. Audio deepfakes that demanded thousands of dollars in studio production now cost about a dollar. Video deepfakes that required professional equipment and $10,000 or more in production costs can now be generated for roughly $10.

This is not a gradual decline. It is a collapse — a reduction in cost by three to four orders of magnitude in under three years. The implications for information integrity are profound and largely unaddressed.

The Numbers — Cost collapse of synthetic content: fake articles from $50-100 down to fractions of a cent. Fake social accounts from $5-10 to near zero. Audio deepfakes from thousands of dollars to about $1. Video deepfakes from $10,000+ to roughly $10. Three to four orders of magnitude in under three years.
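
To make the orders-of-magnitude claim concrete, here is a minimal Python sketch that computes the cost ratio for each content type. The before/after figures come from the estimates above; the midpoints chosen for the quoted ranges are illustrative assumptions of ours.

```python
import math

# Cost estimates from the article (USD per item). Midpoints of the
# quoted ranges are illustrative assumptions, not measured data.
costs = {
    "fake article":        (75.0,    0.005),  # $50-100 -> fraction of a cent
    "fake social account": (7.5,     0.01),   # $5-10   -> near zero (assumed)
    "audio deepfake":      (3000.0,  1.0),    # thousands of dollars -> ~$1
    "video deepfake":      (10000.0, 10.0),   # $10,000+ -> ~$10
}

for name, (before, after) in costs.items():
    magnitude = math.log10(before / after)
    print(f"{name:20s} ${before:>9,.2f} -> ${after:>6.3f}  "
          f"({magnitude:.1f} orders of magnitude)")
```

Run as-is, the ratios cluster around three to four orders of magnitude, consistent with the claim above.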

What the Technology Can Do Now

The capabilities of AI-generated synthetic media have crossed critical thresholds that make detection by ordinary people essentially impossible.

Video deepfakes have reached a fidelity where even trained forensic analysts struggle to distinguish synthetic from authentic footage without specialized tools. The tell-tale artifacts that once betrayed AI-generated video — unnatural eye movement, inconsistent lighting, blurred edges around hair — have been steadily eliminated by improved generation models. Real-time deepfake video in live video calls is now possible with consumer hardware.

Voice cloning has become trivially easy. Services require only a few seconds of sample audio to generate a convincing replica of any voice. The technology has already enabled financial fraud — cases of employees transferring funds after receiving phone calls from cloned executive voices have been documented across multiple countries. A Hong Kong firm lost $25 million to a single deepfake video call in which every participant except the victim was synthetic.

Text generation was the first capability to reach human parity and remains the most scalable. The same large language models powering AI coding assistants and enterprise automation produce articles, social media posts, comments, and reviews indistinguishable from human writing. More importantly, they can be tuned for specific audiences, dialects, and emotional registers — creating personalized propaganda at a scale no human operation could match.

Key Takeaway — The critical threshold is not whether AI can fool experts with specialized tools. It is whether AI can fool ordinary people encountering content in their social media feeds, email inboxes, and messaging apps. That threshold was crossed in 2024 for text, 2025 for audio, and is being crossed now for video.

Information Operations at Scale

AI-enabled disinformation is not a future threat being studied in academic papers. It is an active operational capability being deployed by state actors, political operatives, and ideological movements worldwide.

Election interference has escalated with each cycle. The 2023–2024 election cycle across multiple countries saw AI-generated candidate statements, synthetic voter-suppression materials, and automated bot networks of unprecedented sophistication. In Slovakia, an AI-generated audio recording of a candidate discussing vote rigging circulated in the final days before the 2023 parliamentary election, during the pre-election media moratorium, when debunking could not reach voters. In Bangladesh, AI-generated fake news reports influenced political unrest ahead of the January 2024 general election. In the United States, AI robocalls impersonating President Biden urged voters to stay home during the January 2024 New Hampshire primary.

Geopolitical conflicts now feature AI disinformation as standard operational doctrine, adding a new dimension to the global AI arms race. Russian information operations targeting Ukraine deploy AI-generated content at industrial volume — fake civilian testimony, synthetic satellite imagery, fabricated official statements. Chinese influence campaigns focused on Taiwan combine AI-generated social media content with coordinated inauthentic behavior across platforms. The volume of synthetic content in these campaigns far exceeds what human operators alone could produce.

Corporate and financial manipulation represents a less-discussed but equally concerning vector. Fake earnings reports, synthetic executive statements, and AI-generated analyst commentary can move markets before verification occurs. Short sellers have reportedly used AI-generated negative coverage to drive stock prices down. The SEC has acknowledged the threat but has not yet developed comprehensive detection or enforcement frameworks.

The Liar’s Dividend

The most insidious consequence of pervasive synthetic media is not that people believe fabricated content. It is that the existence of convincing fakes gives everyone — politicians, corporations, individuals — a ready-made excuse to dismiss authentic evidence.

Researchers call this the “liar’s dividend.” Once a society knows that deepfakes exist, any inconvenient video, audio recording, or document can be waved away as AI-generated. A politician caught on camera making damaging statements claims it is a deepfake. A corporation confronted with documentary evidence of misconduct suggests the documents are synthetic. Whistleblower recordings are dismissed as fabricated.

This dynamic inverts the traditional relationship between evidence and accountability. In a pre-deepfake world, audiovisual evidence was considered reliable by default. In a post-deepfake world, the default shifts toward suspicion — and that suspicion can be selectively deployed to protect the powerful while undermining legitimate journalism and accountability.

The epistemic damage compounds over time. When any piece of evidence might be fake, shared reality fragments. Different communities construct incompatible versions of events, each dismissing the other’s evidence as manufactured. Democratic discourse requires some baseline of shared facts. AI-enabled synthetic media threatens to dissolve that baseline entirely.

Important — The liar’s dividend may be more damaging than any specific deepfake. When all evidence can be dismissed as AI-generated, accountability mechanisms break down. The powerful gain a universal defense: “it’s a deepfake.” The result is not a world where everyone believes lies, but a world where no one can prove truth.

Why Defense Is Losing

The asymmetry between generating and detecting synthetic content is structural, not temporary. Every advance in generation models provides the training data for better detection models, but also provides the adversarial examples that defeat them. Detection is perpetually playing last generation’s game.

Technical approaches to authentication face fundamental limitations. Content provenance standards like C2PA embed cryptographic signatures proving when and where content was created. This works for content produced through participating platforms and devices — but bad actors simply use non-participating tools. Provenance proves what is authentic; it cannot prove what is fake. And it requires universal adoption to be effective, which remains far from reality.
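
Stripped of its container format, the core mechanism behind provenance standards is an ordinary digital signature binding a content hash to creation metadata. The sketch below illustrates that idea with Ed25519 from Python's cryptography package; it is a conceptual toy under our own assumptions, not the actual C2PA manifest format, and the field names are invented.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(content: bytes, creator: str, created: str) -> bytes:
    # Bind the content's hash to the claimed creation metadata.
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created": created,
    }
    return json.dumps(manifest, sort_keys=True).encode()

# Publisher side: sign the manifest with a key whose public half is known.
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

content = b"...raw image or video bytes..."
manifest = make_manifest(content, "Example Newsroom", "2025-01-01T12:00:00Z")
signature = signing_key.sign(manifest)

# Verifier side: check the signature, then check that the content on hand
# actually matches the hash inside the signed manifest.
try:
    public_key.verify(signature, manifest)
    claims = json.loads(manifest)
    if claims["sha256"] == hashlib.sha256(content).hexdigest():
        print("provenance intact")
    else:
        print("signed manifest does not match this content")
except InvalidSignature:
    print("manifest signature invalid")
```

Note the asymmetry described above: a valid signature vouches for this piece of content, but a missing or invalid signature proves nothing about content produced outside the system.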

AI-based detection models can identify some synthetic content, but accuracy degrades rapidly as generation models improve. Studies show detection accuracy dropping from above 90% on older generation models to below 60% on the latest systems. The detection models are also trivially fooled by simple post-processing — cropping, compression, or re-encoding a deepfake video often strips the artifacts detectors rely on.
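
The post-processing attack is as simple as it sounds. The sketch below, assuming Pillow is installed, applies the crop-and-recompress laundering the paragraph mentions; the function is ours, not any standard tool. The output looks identical to the input, but the pixel-level statistics many forensic detectors key on are disturbed.

```python
import io

from PIL import Image

def crop_and_recompress(path: str, quality: int = 75, margin: int = 8) -> Image.Image:
    """Trivial post-processing: shave a small border, then round-trip
    through lossy JPEG. The image is visually unchanged, but the
    high-frequency residue that pixel-level detectors rely on is
    disturbed or destroyed."""
    img = Image.open(path).convert("RGB")
    width, height = img.size
    img = img.crop((margin, margin, width - margin, height - margin))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)
```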

Platform-level enforcement catches some manipulation but struggles with sophisticated operators. Social media companies have invested in content moderation, but the volume of synthetic content exceeds human review capacity by orders of magnitude. Automated moderation tools produce unacceptable false positive rates when tuned for high detection, and unacceptable false negative rates when tuned for precision.
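
The tuning dilemma is the familiar threshold trade-off, sketched below with invented score distributions. The numbers are toys, but the shape of the result is general: a threshold strict enough to catch most synthetic content flags an enormous volume of legitimate posts, and a threshold gentle on legitimate posts waves most synthetic content through.

```python
import random

random.seed(0)

# Invented detector scores in [0, 1]: synthetic content tends to score
# higher than authentic content, but the distributions overlap.
authentic = [random.gauss(0.35, 0.15) for _ in range(100_000)]
synthetic = [random.gauss(0.65, 0.15) for _ in range(100_000)]

for threshold in (0.4, 0.5, 0.6, 0.7):
    fpr = sum(s >= threshold for s in authentic) / len(authentic)
    fnr = sum(s < threshold for s in synthetic) / len(synthetic)
    print(f"threshold {threshold:.1f}: {fpr:6.1%} legitimate flagged, "
          f"{fnr:6.1%} synthetic missed")
```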

The structural problem is simple: the attacker needs to find one path through defenses, while the defender must close every possible path. This asymmetry has no known solution in information security, and it appears equally intractable in information warfare.

The Democratic Vulnerability

Democracies face a particular disadvantage in this environment. Open societies depend on the free flow of information, which synthetic content exploits. Authoritarian states can impose content controls that suppress both disinformation and legitimate dissent — a trade-off incompatible with democratic values.

Free speech frameworks designed for an era of human-generated content struggle with AI-produced material. The same legal protections that shield legitimate political speech also protect AI-generated propaganda. Drawing lines that constrain harmful synthetic content without restricting legitimate expression has proven extraordinarily difficult in practice.

The speed of viral information compounds the problem. A synthetic video can circulate to millions of people within hours. Fact-checking and verification, even when effective, typically reach a fraction of the original audience days later. The information environment structurally favors the propagation of compelling content over accurate content, and AI makes compelling fabrication trivially cheap.

Encrypted messaging adds another layer of complexity. End-to-end encrypted platforms — WhatsApp, Signal, iMessage — prevent even willing platforms from monitoring content. Disinformation circulating through private channels is essentially invisible to any form of centralized moderation. In many countries, private messaging is the primary vector for information sharing, making platform-level interventions largely irrelevant.

Our Data — The verification gap is widening: synthetic content reaches millions within hours, while fact-checks reach a fraction of the original audience days later. In an attention economy, the first narrative wins — and AI lets anyone be first, with any narrative, at any scale.
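
A back-of-envelope model makes that dynamic concrete. Every parameter below is invented, but the qualitative outcome holds whenever a slower correction chases a faster original:

```python
HOT_HOURS = 24  # hours of exponential spread before attention fades

def reach(start_hour: int, growth: float, now: int) -> float:
    """Toy reach curve: exponential growth from start_hour, capped once
    the story stops being 'hot'. All parameters are invented."""
    if now <= start_hour:
        return 0.0
    active = min(now - start_hour, HOT_HOURS)
    return growth ** active

for hour in (24, 48, 96):
    fake = reach(0, 1.5, hour)    # fabrication: spreads fast from hour 0
    check = reach(48, 1.2, hour)  # fact-check: two days late, slower
    print(f"hour {hour:3d}: fake reach {fake:10,.0f}   "
          f"fact-check reach {check:7,.1f}   ratio {check / fake:.2%}")
```

Under these assumptions, even the fully circulated fact-check reaches well under one percent of the fabrication's audience.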

What Partial Defenses Look Like

No complete solution exists, but several approaches reduce the damage without requiring the problem to be fully solved.

Content provenance adoption, while imperfect, raises the cost of deception. If major news organizations, government communications, and corporate disclosures routinely carry C2PA signatures, the absence of provenance becomes a signal — not proof of fabrication, but reason for additional scrutiny. Camera manufacturers including Sony, Nikon, and Leica have begun embedding provenance at the hardware level, a meaningful step.

Media literacy education shows measurable effects in controlled studies, though scaling it remains challenging. Populations educated about synthetic content existence and detection heuristics share unverified content at lower rates. The effects are modest but real, and they compound across communities.

Institutional credibility becomes more valuable, not less, in a low-trust information environment. Organizations with established track records of accuracy — wire services, accountability journalism outlets, scientific institutions — provide anchoring points for shared reality. Their role shifts from informing the public to authenticating reality, a function that becomes more critical as synthetic content proliferates.

Regulatory frameworks are evolving, slowly. The EU AI Act includes provisions requiring disclosure of AI-generated content. Several US states have enacted laws targeting election-related deepfakes. China requires watermarking of AI-generated content. These efforts form part of the broader race between AI innovation and governance. None of these frameworks have been tested at scale, and enforcement against bad actors operating across jurisdictions remains an unsolved problem.

The Trajectory

The information environment will likely degrade further before stabilizing. Generation capabilities continue advancing faster than detection or authentication. The cost of producing synthetic content continues falling. The incentives to deploy it — political, financial, ideological — remain strong and largely unchecked.

But the outcome is not predetermined. Societies have adapted to previous information disruptions — the printing press, radio propaganda, internet misinformation — without collapsing, though not without damage. The current disruption is faster and more severe, but human institutions have more tools available for response than at any previous point in history.

The honest assessment is uncomfortable: we are in the early stages of an information environment transformation whose full consequences remain unclear. The technology favors offense over defense, speed over accuracy, and volume over verification. Navigating this environment will require not just better technology but better institutions, better norms, and a citizenry that understands both the capabilities and limitations of the tools reshaping their information landscape.
