GCRD Occasional Paper
AI-Enabled Disinformation and Democratic Vulnerability
From Reflexive Control to Cognitive Settlement
Founding Executive Director, Global Centre for Rehumanising Democracy
The NATO Strategic Communications Centre of Excellence published Beyond Spam Bots1 this month, a red-teaming assessment of eight leading large language models that should stop every policymaker, government official and civil society leader in their tracks. The report established that AI-powered disinformation systems are buildable today from commercially available tools, and that current regulatory frameworks are demonstrably insufficient to prevent their misuse. Vulnerability scores ranged from 5.75 per cent to 80 per cent. Open-source models with safety constraints deliberately stripped achieved 80 per cent misinformation success rates. Autonomous multi-agent systems can run entire disinformation campaigns with minimal human oversight.
The findings are alarming but not surprising. Georgi Angelov and I spent December 2024 to April 2025 documenting exactly this kind of system operating in Bulgaria and other countries in Eastern Europe.2,3 The convergence of two independent bodies of evidence, one from controlled red-teaming and one from field observation of a live operation, is worth serious attention.
What the Bulgaria research found
Our findings, published as a chapter in AI and the Future of Democracy: Building Resilient and Inclusive Societies2, introduced AI-Enhanced Reflexive Control, or AIRC, as a framework for understanding how artificial intelligence transforms Soviet-era reflexive control into a doctrine operating at a scale and precision the original theorists could not have imagined. Classical reflexive control manipulates the informational environment in which a target makes decisions, so the target reaches the adversary's preferred conclusion while believing they chose independently. AI does not merely accelerate this. It adds five new dimensions: algorithmic amplification, which manufactures artificial consensus across platforms; computational content generation producing culturally calibrated material at volumes no human team could match; predictive audience modelling identifying psychological access points at demographic and individual scale; feedback-driven optimisation refining tactics in real time; and dynamic narrative evolution ensuring every event, whatever its actual nature, reinforces the strategic frame.
Every one of these dimensions was observable in the Pravda Bulgaria network. When Bulgarian fact-checkers debunked specific claims, the network adapted its framing within hours. The NATO report's hypothetical seven-day health disinformation campaign,1 in which an autonomous system reaches over 85,000 users and produces measurable behavioural change, describes a mechanism we had already watched operate on a real population across five months.
Based on: Jacob, J.U. & Angelov, G. (2025). The Disinformation Matrix. In AI and the Future of Democracy. London: Routledge. DOI: 10.1201/9781003594185-2
The finding neither report fully captures alone
The NATO report explains what these systems can do; our research explains why they work on human beings. The Pravda network was not primarily attempting to change what Bulgarians believe. It was attempting to change where Bulgarians feel they belong.2 The coordinated clusters of synthetic personas, each playing a distinct psychological role, the localised cultural framing invoking Orthodox identity and historical memory, the careful positioning around Bulgarian national anxieties: these constructed what we described as a cognitive settlement, a narrative ecosystem engineered to feel indigenous and socially validated rather than foreign and manufactured.
This is why fact-checking consistently falls short. A correction may be accurate, but it does not offer community. It addresses the surface claim while leaving intact the ecosystem of meaning that gives the claim its resonance. The NATO report rightly argues that content-level intervention is a tactical response to a strategic problem.1 The Bulgaria case study provides the anthropological explanation: the adversary operates at the level of social identity, and accuracy alone cannot compete there. Reading both documents together gives policymakers the technical capability assessment and the human mechanism through which that capability succeeds.
What deserves immediate attention
As shown in the diagram above, closing the regulatory and platform governance gaps matters, but is not sufficient on its own. The response that addresses belonging, not just belief, requires the second column. Our broader Pravda network research, covering 643,601 publications across 45 countries in a single three-month window,3 reveals that the Bulgaria case sits within a tiered, deliberate geographic strategy. Former Soviet states receive 32 times the publication intensity of Western European countries. Moldova, with its pivotal political position at the time, received 56 times more publications per capita than the Western European average. The targeting followed the contours of democratic weakness with precision. Six of the ten most intensively targeted countries were flawed democracies. The system deploys most heavily where institutional trust is lowest and the conditions for reflexive control most permissive.
One development from that research deserves urgent attention: by early 2025, the Pravda network had reportedly begun flooding 182 domains in 12 languages with content designed primarily for AI consumption, aiming to embed Russian narratives in the training data of AI systems.3 The NATO report's concern about the abliteration of open-source models treats AI as a weapon. The LLM grooming strategy treats AI as a compromised information substrate for everyone who uses it. No governance framework capable of responding to that threat yet exists in any jurisdiction. It should.
Conclusion: Rebuilding the Moral Infrastructure of Democracy
Democracy is not merely a set of institutions and procedures. It is fundamentally about relationships between people. That conviction sits at the heart of our work at the Global Centre for Rehumanising Democracy (GCRD). When the Pravda network constructed what we termed a cognitive settlement for Bulgarian audiences, an ecosystem of false amplifiers designed to feel like community, to invoke shared history, and to provide answers to unasked questions, it was not merely deploying a communications strategy. It was exploiting a democratic deficit that no regulatory framework, however rigorous, can repair. The deficit is relational. It is spiritual in the civic sense of that word. And it demands a relational, human-centred response.
What successful AIRC operations reveal with clarity is that the quality of democratic life is not primarily an institutional question. It is a discourse question. The Pravda network did not merely target Bulgaria's constitutional structures or electoral machinery. It targeted daily public conversation: how citizens frame their anxieties, how they interpret their government and political leaders, and the stories they tell about who their country is and where it belongs. By the time institutional damage becomes visible, the discourse damage that preceded it has already done its work.
These operations succeed not primarily because of their technical sophistication, but because their creators understand that human beings do not settle for truth alone. They settle where they belong. And belonging, genuine belonging, is not created by messaging. It is created by coherent relationships with one's past, one's present and one's hopes for the future.
Authentic relationship is the first foundation. In practice, this means leaders who are genuinely present to the communities they serve, who anchor their authority not merely in institutional position but in the moral credibility that comes from sustained, visible integrity.
Sustainable democratic renewal requires not only better institutions and systems but leaders with the moral discernment to navigate complexity without losing their humanity. AIRC operations are most effective precisely where such leadership is absent: where citizens experience their leaders as distant, self-serving, or indifferent to the stories that shape their lives.
Shared story is the second foundation. The Pravda network understood that historical memory is not neutral. It exploited Bulgaria's complex relationship with Russia, invoking liberation from Ottoman rule, Orthodox brotherhood, and Slavic kinship as cognitive access points for manufactured consensus. Democratic societies cannot cede this terrain. The honest, inclusive telling of a community's shared story, one that acknowledges wounds alongside achievements, that holds complexity without resolving it into propaganda, is not merely a cultural nicety. It is a democratic asset of the first order. Communities with a robust, shared sense of their own story are communities that prove resistant to the imported, manufactured stories that authoritarian influence operations provide.
Hope is the third and perhaps most urgent foundation. Disinformation operations thrive in the space evacuated by legitimate hope. When citizens believe that democratic institutions cannot deliver meaningful change in their lives, when economic anxiety is unaddressed and institutional trust has collapsed, that vacuum does not remain empty. It gets filled. The AIRC playbook in Bulgaria ran precisely along this fault line, amplifying economic doom narratives timed to budget debates and currency discussions, deepening the sense that Western alignment had delivered nothing for ordinary Bulgarians. The antidote is not a communication campaign promising better things. It is leadership that delivers better things, and does so visibly, accountably, and in genuine relationship with the communities it serves.
"Humans do not settle for truth. They settle where they belong. Those who create the conditions for genuine belonging will determine what people believe. The work begins there."
Taken together, these three foundations — authentic relationship, shared story, and hope grounded in moral leadership — constitute what GCRD calls the rehumanising of democratic life. GCRD's mission is to restore the soul of governance and create societies where leaders serve with authentic care and citizens engage with renewed hope. In the age of AIRC, that mission is not aspirational. It is strategic. The cognitive settlements that adversaries construct will find purchase wherever democratic societies have failed to build their own: places where citizens feel genuinely held, genuinely heard, and genuinely hopeful about the democratic project they are part of.
The question before democratic societies is not whether they can defeat AI-powered disinformation with better algorithms. It is whether they can build communities worth belonging to: communities whose shared story, authentic relationships, and moral leadership offer something that no foreign influence operation, however sophisticated, can replicate or replace. In an age where algorithmic systems can generate persuasive content at scale, the capacity for genuine human connection becomes more valuable, not less. The path forward runs not through the information space alone, but also through the restoration of democratic life itself: human, relational, morally grounded, and genuinely hopeful about what it means to govern ourselves together.
References
- Bergmanis-Korāts, G., and Chia Tee Hiang, J. (2026). Beyond Spam Bots. NATO StratCom COE. stratcomcoe.org
- Jacob, J. U-U., and Angelov, G. (2025). The Disinformation Matrix. In AI and the Future of Democracy. London: Routledge. DOI: 10.1201/9781003594185-2
- Jacob, J. U-U., and Angelov, G. (2025). The Pravda Ecosystem. CIDC Disinformation Observatory. disinfobs.com

