AI’s invisible invasion: how artificial intelligence is becoming the newest weapon in hybrid warfare


By Aleksandr Shaman.

Imagine living in the fictional country of Gortsia, a nation emerging tentatively from decades of authoritarian rule. After the sudden death of your long-serving dictator, you are about to participate in your country’s first democratic elections in nearly half a century. The candidates are relatively unknown; decades of strict censorship have left you and your fellow citizens starved for information. Naturally, you turn to technology for answers, as traditional media seems opaque, confusing, and often contradictory.

In this environment, you ask your AI-powered assistant questions like, “Who is Candidate A?” or “What are Candidate B’s policies?” Its responses seem objective, succinct, trustworthy, even reassuringly neutral. But what if these answers are quietly shaped and manipulated, not by objective truth but by an unseen actor seeking influence over your perceptions?

This hypothetical scenario of cognitive manipulation through AI might feel futuristic or even paranoid — but it is neither. Rather, it reflects a sobering new reality in global geopolitics. Welcome to the age of cognitive warfare, a front line in modern hybrid conflicts where battles are fought not with tanks and missiles but with algorithms, data, and artificial intelligence.

The rise of cognitive warfare

Modern conflict has evolved into hybrid warfare, which blends conventional military power with cyberattacks, economic pressure, and disinformation campaigns.

War has become elusive — difficult to define, harder to confront. Battles are no longer only fought on the ground: today, the primary battlefield is perception itself. Reputational attacks — efforts to undermine credibility, destabilize institutions, and disrupt societies — have become central strategic tools [1]. And now, artificial intelligence is poised to escalate this struggle dramatically.

Psychological operations at scale

Psychological operations have always aimed to influence perceptions and behaviors. Previously, such efforts depended heavily on print media, radio broadcasts, and television spots, often transparently biased and thus limited in effectiveness. Today, however, AI-driven systems like large language models (LLMs) — the technology behind popular tools like ChatGPT — are revolutionizing PsyOps.

Modern AI platforms can generate thousands of tailored, persuasive messages instantly. These messages adapt dynamically, reflecting real-time shifts in public sentiment, events, or personal preferences [2]. AI-powered chatbots and digital personas can infiltrate social media platforms, seamlessly blending into authentic discourse, subtly steering conversations and influencing perceptions without users detecting their artificial nature.

Even more troubling is AI’s ability to fabricate convincingly realistic “deepfake” videos. In 2023, a deepfake video portraying Ukraine’s top general accusing President Zelensky of betrayal briefly unsettled public opinion before being exposed as fraudulent [1]. The implications are profound: misinformation can become indistinguishable from genuine content, further eroding public trust [6].

A dangerous trust in AI

The growing sophistication of AI-generated content coincides with an unprecedented level of public trust. Recent surveys have found that users regard AI-generated responses as more credible than many human-generated sources [2].

A survey by KPMG and the University of Melbourne indicated that 83% of respondents trusted AI-generated political summaries more than traditional news sources, and that 58% on average were willing to act on this information without further verification [3]. This remarkable level of trust makes AI systems extraordinarily powerful conduits for influence operations.

Much of this trust stems from the perception of AI as objective. Users often assume AI outputs are free from human bias, despite the reality that these systems reflect biases present in their training data or deliberately introduced by those controlling their parameters [4]. Consequently, malign actors can effectively weaponize AI to deliver propaganda that users perceive as impartial truth.

A legacy of digital interference

To understand the scope of today’s AI threat, one must consider how digital platforms have previously been manipulated. During the Arab Spring, activists leveraged social media to coordinate protests, document abuses, and undermine oppressive regimes [4]. Yet the same tools soon became weapons wielded by state actors. In the 2016 U.S. election, Russia exploited social media to exacerbate polarization and undermine democratic processes, flooding platforms with divisive content via troll farms [5].

Similarly, France experienced the “Macron Leaks” in 2017, an attempt to influence its presidential election by releasing stolen and doctored campaign emails online [7]. Such cyber-driven information warfare foreshadowed the more advanced threats posed by artificial intelligence today.

AI: From memes to mental infrastructure

Unlike earlier digital misinformation campaigns, AI-driven operations are not about flooding channels with easily debunked memes or fake news. Instead, they precisely tailor messages to individual users based on detailed profiles. This personalized, conversational approach makes AI-driven propaganda far more persuasive and resistant to traditional fact-checking efforts [2].

The integration of AI into daily digital infrastructure, such as browsers, mobile apps, and smart home devices, means misinformation can infiltrate everyday interactions quietly yet persistently [2]. Users may never realize that their digital assistants or trusted apps are nudging their perceptions in carefully curated directions, not just informing but shaping what they think.

Gortsia’s quiet invasion

Returning to our imagined Gortsia, this silent invasion unfolds without armies or visible coercion. The AI assistant you rely on for election guidance has been carefully manipulated. Responses describing one candidate as having “visionary policies” and another as being “embroiled in scandals” shape your perceptions and decisions. The interference leaves no obvious digital trace, no hacked servers, no overt propaganda broadcasts—just meticulously crafted narrative nudges embedded in seemingly neutral AI outputs.

If citizens in Gortsia place more trust in AI than in traditional media—as current research indicates they would—then manipulating the AI effectively means manipulating an election without firing a shot or rigging a ballot [4]. This scenario illustrates a chilling new method of achieving geopolitical ends through cognitive control rather than physical force.

AI and narrative sovereignty

Control over AI infrastructure is becoming synonymous with control over national narratives and collective cognition. As foundational AI models proliferate, the strategic imperative shifts from controlling media channels to controlling algorithms. Whoever directs these systems can redefine core political concepts—democracy, freedom, legitimacy—in ways that reshape perceptions and decision-making on a massive scale [2].

Reputational attacks as weapons of hybrid warfare

Reputational attacks are deliberate communications aimed at tarnishing an opponent’s character, credibility, or goodwill. They encompass disinformation campaigns, propaganda, character assassination, smear tactics, and the strategic leaking of embarrassing information. In hybrid warfare, these attacks are not merely incidental propaganda — they are carefully orchestrated offensives designed to achieve military or geopolitical objectives. By undermining the reputation of a rival state or figure, aggressors hope to degrade the target’s political capital, sow distrust among allies or citizens, and even create a pretext for further aggression.

The logic of reputational warfare is clear: a loss of reputation translates into a loss of power. A government that is discredited in the eyes of its people may lose public support (or face civil unrest), and a country portrayed as villainous on the world stage may lose international backing or moral authority. As one NATO analysis observes, hybrid attackers aim to “erode trust between the state institutions and the people,” causing the state to lose legitimacy — which in modern societies is largely a function of public trust [4]. In other words, reputational attacks seek to sever the bond of trust that underpins authority. Without trust, institutions weaken and social cohesion frays, making a nation vulnerable to further subversion.

Crucially, reputational attacks are often synchronized with other elements of hybrid warfare to maximize impact. They might accompany economic pressure, cyber sabotage, or covert military actions, creating a multi-front assault on the target. For instance, during Russia’s 2014 annexation of Crimea, Moscow deployed unmarked special forces on the ground (the notorious “little green men”) and unleashed a torrent of disinformation portraying Ukraine’s government as neo-Nazis and criminals [4]. This dual approach combined physical and reputational blows: armed forces took territory while propaganda preemptively discredited the victim, justifying Russia’s acts and dampening international response. The same playbook, blending kinetic and information warfare, has been used in conflicts from the Middle East to Eastern Europe.

Reputational warfare is not a new phenomenon. Throughout history, belligerents have sought to vilify and demoralize their opponents. What has changed is the technological scale and sophistication with which these attacks can be carried out. In the past, propaganda was delivered via printed leaflets, radio broadcasts, or word of mouth. Now, social networks and instant messaging spread rumors like wildfire, while digital forgeries and fake personas make lies harder to detect. The cost of entry for information sabotage is low, and the potential payoff, if an adversary’s reputation crumbles, is high. This asymmetry is attractive to state and non-state actors alike.

To better understand how reputational attacks function as a weapon of hybrid warfare, it’s instructive to examine notable examples and case studies, both historical and modern. These cases illustrate the diverse forms such attacks can take — from covert disinformation campaigns to highly public character assassinations — and their strategic effects.

Strategic implications of reputational warfare

Reputational attacks in the information-technological sphere have far-reaching implications for security, governance, and business. Most critically, they demonstrate that perception is power in modern conflicts. Winning a war of narratives can, in effect, mean winning the war itself, or at least achieving key objectives, even if one’s military forces never enter the fray. As an example, Russia’s largely bloodless seizure of Crimea in 2014 was facilitated by years of prior information operations that eroded the Ukrainian government’s reputation and sowed confusion. By the time soldiers appeared on the ground, the battle for legitimacy had already tilted in Russia’s favor.

One critical strategic effect of reputational attacks is internal destabilization. When a population loses faith in its leaders or institutions due to sustained smear campaigns, the nation’s unity and resolve crumble. We have seen this with disinformation-driven polarization: societies that are bombarded by divisive, delegitimizing messaging (e.g. portraying the other side of the political spectrum as evil or untrustworthy) become fractured and less capable of collective action. Adversaries exploit these fissures. As Senator Richard Burr noted in reference to Russian interference in U.S. politics, the aim was to “aggressively sow discord and divide Americans… and undermine trust in our institutions” [8]. That is a clear tactical objective: degrade the adversary from within by attacking the bonds of trust that hold it together.

Internationally, reputational warfare can alter the balance of soft power. A country that falls victim to a major disinformation campaign may find its global standing diminished. For instance, if false allegations about war crimes stick to a nation’s image, it may lose support in international forums or face sanctions based on perception rather than fact. Likewise, a state that expertly uses information warfare to project a positive (or at least innocuous) image of itself — while tarnishing its rivals — can gain influence without firing a shot. China’s intensive global media campaigns, for example, aim to present Beijing as a responsible great power while casting doubt on Western countries’ motives. Such image-shaping efforts have real consequences: they can affect alliances, foreign investment, and diplomatic leverage [9] [10].

In the corporate realm, these attacks have economic and legal implications. Companies caught in disinformation crossfires might suffer stock plunges, consumer boycotts, or reputational crises that take years to repair. This adds a new dimension to national security: protecting the reputational supply chain. Governments and firms increasingly recognize that malicious rumors or fake news can be a form of cyber-enabled economic warfare. U.S. cybersecurity officials have warned, for example, that foreign actors might use influence operations to depress stock prices or manipulate markets for strategic gain [11]. One striking example came in 2023, when an AI-generated fake image of an explosion near the Pentagon briefly triggered a stock market dip before being debunked [8][12]. Such incidents underscore that information attacks can trigger real-world financial tremors. Disinformation is no longer only a political weapon: in 2023, analysts noted a sharp rise in fake or misleading stories targeting major companies and described it as a new vector for economic warfare [11]. Even a single false report can move markets, and businesses now find themselves on the front lines of reputational defense.

Another key implication is how reputational attacks challenge traditional deterrence. Military might or economic strength alone cannot fend off a concerted campaign to destroy one’s reputation. A nation might have the best tanks and a large GDP, but if it is isolated diplomatically, mistrusted by its people, or scorned globally, those assets lose value. This has led to a growing appreciation for “cognitive security”: protecting the minds and perceptions of citizens and allies. In NATO circles, there is talk of the “cognitive domain” of warfare, reflecting the idea that the battle for hearts and minds (and their understanding of truth) is as crucial as any physical territory. As one NATO review states, “what hybrid threats undercut is trust,” and thus “building trust must be the key bulwark against hybrid threats” [4]. In strategic terms, maintaining societal resilience and truthful information flows is now seen as part of national defense.

Finally, reputational warfare raises difficult questions about escalation and response. While a disinformation offensive might not involve bloodshed, its effects can be as devastating as a small-scale physical attack: consider a reputation ruined beyond repair, a violent riot sparked by false rumors, or a democracy derailed. Yet responding to information attacks is tricky: they often operate in the grey zone below legal thresholds of aggression. If Country A systematically defames Country B’s leader with deepfakes and forged documents, what constitutes a proportionate response? Retaliatory cyber operations? Sanctions? The lack of clear norms or international laws governing conflict in the information space adds to the strategic uncertainty.

Defending cognitive democracy

Given these realities, democratic nations face an urgent need to fortify their societies against AI-driven cognitive warfare. This defense must go beyond cybersecurity measures or fact-checking initiatives. It requires fundamentally rethinking how societies engage with information technologies:

  • AI Literacy: Citizens must understand how AI systems operate, including their vulnerabilities and biases.
  • Transparency: Algorithms and training data must be openly audited to build public trust and mitigate hidden manipulation.
  • Oversight: Democratic societies should create independent bodies to ensure AI systems adhere to principles of fairness and transparency.
  • Plurality of AI systems: Diverse and competing AI platforms reduce the risk of monopolistic narrative control.

Moreover, democracies must prepare proactive strategies, such as preemptive “inoculation” campaigns, to educate citizens about potential AI manipulations before they occur [4]. This approach builds cognitive resilience, ensuring that people recognize and resist subtle influences.

The war of whispers

The next significant geopolitical conflict may begin not with troops at borders but with a quiet whisper — a carefully generated AI response to a seemingly innocuous question. The scenario of Gortsia is no longer merely hypothetical: it symbolizes a reality now emerging in nations worldwide. Cognitive warfare, driven by powerful and subtly manipulated AI, threatens to reshape global politics profoundly.

Ultimately, this challenge compels democratic societies to grapple with profound ethical, technological, and political questions about truth, trust, and the very nature of reality. The invisible war for human perception has begun, and how we respond now will define the resilience of democracy itself.

Edited by Maxime Pierre.

References

[1]: Global Reporting Centre. (2023). Not Just Words: How Reputational Attacks Harm Journalists. https://globalreportingcentre.org/reputational-attacks/

[2]: Ukraine Crisis Media Center. (2024). Artificial Intelligence in the Kremlin’s Information Warfare. https://uacrisis.org/ai-kremlin-disinformation

[2]: Ljubas, Z. (2024). Disinformation as Hybrid Warfare and its Strategic Use in the United States 2024 Election. National Security and Future Journal. http://www.nsf-journal.hr/NSF-Volumes/Focus/id/1513

[3]: Gillespie, N., Lockey, S., Mabbott, J., Rowlands, D., & Gloede, S. (2025). Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. University of Melbourne & KPMG International. https://kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf

[4]: Bilal, A. (2021). Hybrid Warfare – New Threats, Complexity, and ‘Trust’ as the Antidote. NATO Review. https://www.nato.int/nato_review/hybrid_warfare_new_threats_and_trust

[5]: U.S. Senate Intelligence Committee. (2019). Russian Active Measures Campaigns and Interference in the 2016 U.S. Election. https://www.intelligence.senate.gov/publications/russian-interference-2016

[6]: Ukraine Crisis Media Center. (2023). Deepfakes – AI in the Hands of Propaganda. https://uacrisis.org/deepfakes

[7]: Cerulus, L. (2020). US calls out Russia for Macron campaign hack, even as France stays silent. Politico. https://www.politico.eu/article/us-russia-macron-campaign-hack/

[8]: Senate Select Committee on Intelligence, U.S. Congress. (2018). Press Release on Russian Active Measures. Reported in NPR, “Senate Intelligence Reports on Russia Detail Broad Disinformation Plan.”

[9]: Freedom House. (2022). Beijing’s Global Media Influence: Authoritarian Expansion and the Power of Democratic Resilience. https://freedomhouse.org/report/beijing-global-media-influence/2022/authoritarian-expansion-power-democratic-resilience

[10]: Ketagalan Media. (2025, March 31). China’s Digital Propaganda Machinery: How Beijing Is Reshaping the Global Discourse in the 2020s. https://ketagalanmedia.com/2025/03/31/chinas-digital-propaganda-machinery-how-beijing-is-reshaping-the-global-discourse-in-the-2020s/

[11]: Braw, E. (2023, December 5). Corporations Are Juicy Targets for Foreign Disinformation. Foreign Policy.

[12]: Bond, S. (2023, May 22). Fake viral images of an explosion at the Pentagon were probably created by AI. NPR.

[Cover Image]: Photo by Glen Carrie, licensed by Unsplash 2024: https://unsplash.com/photos/a-computer-monitor-sitting-on-top-of-a-desk-UiW8V3djY8A

