BLINDERN, OSLO – Advances in artificial intelligence are creating new opportunities to manipulate public opinion and behavior at scale, according to researchers who have identified the emergence of “AI swarms” – coordinated networks of AI-generated accounts designed to influence online discourse. The findings, presented in a research article published in the journal Science in January 2026, detail how these swarms operate and the potential threat they pose to democratic processes.
Jonas R. Kunst, Professor of Communication at BI Norwegian Business School and Professor II of Cultural and Community Psychology at the University of Oslo, led an international research effort involving twelve researchers from eight countries, including Nobel Peace Prize laureate Maria Ressa. The team examined instances of coordinated disinformation campaigns in the United Kingdom, the United States, Brazil, and the Philippines.
“The snowball started rolling in Norway,” Kunst told Nettavisen. “It was important for us to include both researchers and people with practical experience to test our ideas against real-world experiences. By bringing together different perspectives from fields such as computer science and psychology, we gained a holistic approach to the problem.”
The researchers identified five key characteristics of these AI swarms. First, they operate with “hive” coordination: thousands of AI-controlled accounts collaborate as a collective, adapting narratives in real time and even developing internal “social norms.” Second, the swarms strategically infiltrate social networks, identifying vulnerable groups and using specific cultural codes and tailored messaging to gain trust. Third, they employ human-like imitation, using photorealistic profile pictures and natural language to bypass security filters and pass as authentic individuals. Fourth, the systems continuously optimize their messaging through millions of “microtests” on audiences, identifying and amplifying the most effective content. Finally, the swarms maintain a permanent presence, embedding themselves in online communities for years and subtly shifting language and identities over time.
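The fourth characteristic, the “microtest” loop, is at heart the same explore-and-exploit logic used in commercial A/B testing. The paper publishes no code, so the following is purely an illustrative sketch: an epsilon-greedy bandit in Python, with invented variant names and engagement rates, showing how even a crude loop converges on whichever framing an audience responds to.

```python
import random

# Hypothetical sketch of a "microtest" loop: try message variants on
# small audience slices, then amplify whichever draws the most
# engagement. Every name and number here is invented for illustration;
# the Science paper does not publish code.

VARIANTS = ["variant_a", "variant_b", "variant_c"]
stats = {v: {"shown": 0, "engaged": 0} for v in VARIANTS}

def engagement_rate(variant):
    s = stats[variant]
    return s["engaged"] / s["shown"] if s["shown"] else 0.0

def pick_variant(epsilon=0.1):
    # Mostly exploit the best-performing message so far; occasionally explore.
    if random.random() < epsilon:
        return random.choice(VARIANTS)
    return max(VARIANTS, key=engagement_rate)

def run_microtests(observe_engagement, rounds=10_000):
    # observe_engagement(variant) stands in for posting a message and
    # measuring reactions; it returns True if the post drew engagement.
    for _ in range(rounds):
        variant = pick_variant()
        stats[variant]["shown"] += 1
        if observe_engagement(variant):
            stats[variant]["engaged"] += 1
    return max(VARIANTS, key=engagement_rate)  # the message to amplify

# Simulated audience: variant_b resonates slightly more and "wins".
winner = run_microtests(lambda v: random.random() < {"variant_a": 0.02,
                                                     "variant_b": 0.05,
                                                     "variant_c": 0.03}[v])
print(winner)
```

Nothing in a loop like this requires a human to read a single reply, which is what makes the scale implied by “millions” of microtests plausible.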
Andreas Skjøld-Lorange, Director of the Norwegian National Security Authority (NSM), described the findings as “demanding – and a little scary.” He noted that intelligence services are increasingly concerned about “compound threats” – complex, multifaceted attacks that leverage multiple technologies and tactics.
The threat is amplified by recent advancements in AI language models. Previously, bot networks were inflexible and easily detected. Now, AI agents learn from human feedback, refining their tactics and becoming more persuasive. “By coordinating a chorus of seemingly independent voices, they can create the impression of consensus at the grassroots level,” Kunst explained. “Because people’s opinions are often influenced by others with similar views, this artificial consensus can influence public conversation almost invisibly over time. This undermines the democratic debate.”
Recent examples illustrate the growing sophistication of these operations. In Scotland in 2025, Cyabra, a company specializing in detecting bot networks, uncovered a coordinated campaign to influence the debate over Scottish independence on X (formerly Twitter). The analysis found that 26 percent of the accounts arguing for independence were fake, posting over 3,000 messages in six weeks. The network went dark overnight when internet access was disrupted in Iran, only to reappear sixteen days later with its messaging shifted toward pro-Iranian and anti-Western viewpoints. The campaign garnered 224 million potential views and over 126,000 engagements.
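Cyabra has not disclosed its methods, but one widely known coordination signal is many nominally independent accounts posting near-identical text inside a narrow time window. A rough, hypothetical sketch of that idea in Python (not Cyabra’s system; the thresholds are arbitrary):

```python
from difflib import SequenceMatcher

# Illustrative heuristic for one well-known coordination signal:
# near-duplicate posts from many accounts within a short window.
# This is not Cyabra's proprietary method.

def similar(a, b, threshold=0.9):
    return SequenceMatcher(None, a, b).ratio() >= threshold

def coordinated_clusters(posts, window_seconds=3600, min_accounts=5):
    """posts: list of {"account": str, "text": str, "ts": int} dicts."""
    posts = sorted(posts, key=lambda p: p["ts"])
    clusters = []
    for i, anchor in enumerate(posts):
        accounts = {anchor["account"]}
        for other in posts[i + 1:]:
            if other["ts"] - anchor["ts"] > window_seconds:
                break  # time-sorted, so no later post can qualify
            if other["account"] not in accounts and similar(anchor["text"], other["text"]):
                accounts.add(other["account"])
        if len(accounts) >= min_accounts:
            clusters.append({"sample_text": anchor["text"], "accounts": accounts})
    return clusters
```

Real detection pipelines layer many more signals, such as account creation dates, posting cadence, and profile-image provenance, which is precisely what the human-like imitation described above is designed to defeat.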
Intelligence agencies have been tracking such activity for years. In 2015, Western intelligence services identified the Internet Research Agency (IRA), a Russian “troll farm” linked to Yevgeny Prigozhin, as a source of interference in American political debate. While the IRA’s impact was limited because it primarily targeted already polarized users, the emergence of AI swarms represents a significant escalation in capability.
According to a recent threat assessment from Norway’s intelligence services, Russia is expected to conduct influence operations in Norway in 2026, and AI provides these actors with new opportunities. The assessment notes that Russian actors are spreading pro-Russian and anti-Western narratives on digital platforms, often through channels with covert ties to the Russian government. They are also creating websites that masquerade as legitimate news sources, disseminating propaganda and polarizing content, and leveraging AI to produce large-scale, targeted content in text, image, video, and audio formats.
Skjøld-Lorange emphasized the importance of making factual information readily available to counter disinformation. He advised individuals to be critical of information sources, to avoid sharing unverified content, and to seek out diverse perspectives. “Don’t contribute to the spread of disinformation,” he said.