A sophisticated Russian disinformation network is actively targeting the 2026 Winter Olympics in Milan, Italy, employing artificial intelligence to spread false narratives about Ukrainian athletes and undermine support for Ukraine. The operation, dubbed “Matryoshka,” utilizes deepfake technology to manipulate videos and create fabricated stories designed to discredit the Ukrainian team and sow discord.
The campaign builds on similar tactics used during the Paris 2024 Summer Olympics, identified as “Operation Overload,” and demonstrates a continued effort by Russia to leverage international events for its strategic goals. Analysts are particularly concerned by the network’s innovative use of AI voice cloning, which adds a layer of authenticity to the disinformation, making it more likely to deceive audiences.
“What sets Matryoshka apart is the use of AI voiceovers to impersonate the voices of trusted figures,” says Pablo Maristany de las Casas from the Institute for Strategic Dialogue (ISD) think tank.
The disinformation efforts include fabricated claims about Ukrainian athletes’ behavior, doping allegations, and even attempts to portray them as aggressive or politically motivated. One example highlighted by AOL involves Ukrainian skeleton racer Vladyslav Heraskevych, who was banned from competing after wearing a helmet displaying images of athletes killed in the war. The disinformation campaign falsely claimed his brother recruited soldiers and fabricated a story about a Hungarian athlete displaying anti-Ukrainian sentiment.
The network’s tactics involve digitally manipulating news reports, creating fake social media posts, and generating deepfake videos featuring prominent figures. A particularly concerning instance involved a manipulated video of Olympics chief Kirsty Coventry during a press conference on Euronews. The AI-generated voice attributed false statements to Coventry, claiming she was “shocked” by the Ukrainian team’s presence and found their behavior “irritating.” The original footage, however, shows Coventry made no such remarks.
“The operators of Matryoshka know that its content is more credible when it’s delivered, seemingly, by a trusted person,” Maristany de las Casas explained. This strategy aims to exploit the trust audiences place in established figures and reputable news organizations.
Media forensics expert Darren Linvill at Clemson University described how the technique works: “They take a real video of a real person but part-way through they switch to stock footage overlaid with a deepfake narration that sounds just like the real person so that they can insert absurd lies that appear more authentic.”
While the reach of any single fake video has been limited so far, the coordinated nature of the campaign and the sophistication of its techniques raise significant concerns. Insight News reports that the posts have collectively racked up over a million views across multiple platforms.
This isn’t an isolated incident. BBC Verify has also investigated similar instances of AI-cloned voices used by the same operation, including the cloning of a British 999 call handler last year. The Canadian broadcaster CBC has also debunked an AI video targeting one of their journalists.
The Ukrainian government has condemned the disinformation campaign, with Sports Minister Matviy Bidny stating that Russia is attempting to “discredit Ukrainians and undermine international support for Ukraine.” Kyiv’s center for countering disinformation has also attributed the fake posts to Russia.
As the Winter Olympics progress, vigilance and critical thinking are crucial. The Matryoshka campaign underscores the growing threat of AI-powered disinformation and the need for robust countermeasures to protect the integrity of international events and public trust. The focus now shifts to identifying and mitigating the spread of these fabricated narratives and ensuring that audiences have access to accurate information.
What remains to be seen is how effectively platforms will respond to this evolving threat and whether the disinformation campaign will escalate as the Games continue. Share your thoughts and report any suspicious content you encounter online.