UK Presses Tech Firms on Fake News & Election Interference Risks

The UK government is increasing pressure on major technology companies to more aggressively address the spread of false information on their platforms. Executives from Meta, X, and TikTok appeared before the House of Commons Foreign Affairs Committee this week to explain their efforts to identify and counter disinformation campaigns, particularly those employing coordinated inauthentic behavior (CIB) – tactics used by malicious actors to destabilize democratic systems. The scrutiny comes as concerns mount over the ability of these platforms to effectively police content and safeguard against foreign interference, especially with upcoming elections in Europe and the United States.

The committee members voiced significant concerns regarding the challenges tech companies face in identifying and curbing the proliferation of misleading content. One example highlighted was the “Doppelganger” disinformation campaign, which has disseminated false narratives, fabricated documents, and deepfakes about Ukraine since at least 2022. The UK government has already sanctioned the entities behind this campaign, which was reportedly launched by Russia to justify its invasion.

Executives from Meta, X, and TikTok acknowledged the threat of coordinated disinformation campaigns originating from countries like Russia, Iran, and China, particularly during election periods. They asserted they are taking measures to mitigate these risks, including account suspensions, identification of involved organizations and individuals, and public reporting on network activity to alert users. However, skepticism remains regarding the effectiveness of these efforts.

TikTok’s Proactive Content Moderation

Ali Law, TikTok’s Head of Public Policy for Northern Europe, stated that the platform employs automated moderation processes to review all posts, referring potentially problematic content to an independent fact-checking team. According to Law, “From the point of view of disinformation, we remove 99% [of publications] proactively. We remove 90% with zero visits and 95% within two hours.” He emphasized that the platform’s moderation process is designed to address disinformation at its source, before it reaches a wide audience.

Concerns Over Real-World Impact and Election Interference

Despite these claims, members of Parliament expressed doubts about the adequacy of current mechanisms to effectively stem the flow of false information and voiced concerns about the potential impact on national security. Committee Chair Emily Thornberry cited a report from the BBC, which indicated that false posts linked to Russia on TikTok received approximately 23 million views during the 2025 Moldovan election campaign. “Our concern is that if they do it in Moldova, they can also do it in the United Kingdom,” Thornberry stated.

Parliamentarians also raised concerns about past failures by digital platforms to detect and remove false information during critical incidents within the UK. Specifically, they referenced the summer 2024 murders of three girls in Southport, after which a false rumor spread on social media identifying the perpetrator as a Muslim asylum seeker. This misinformation fueled violent disturbances and put the safety of thousands of immigrants at risk.

Deepfakes and Regulatory Scrutiny

The spread of sexually explicit deepfakes generated using X’s Grok AI also drew criticism. This led to an investigation by the UK regulator Ofcom, which threatened a fine of up to 10% of X’s global revenue if the company did not address the issue. Wilfredo Fernández, X’s Director for Global Government Affairs for the Americas, assured the committee that “we have implemented a whole series of measures to ensure that this incident does not happen again,” adding that the company recognizes the issue was “unacceptable” and is working to strengthen its safety systems.

The ongoing debate highlights the complex challenge of balancing freedom of expression with the need to protect democratic processes and public safety in the digital age. As disinformation tactics evolve, the pressure on tech companies to proactively address these threats will likely intensify. The effectiveness of current measures and the potential for further regulation remain key areas to watch in the coming months.

The UK’s push for greater accountability from tech platforms reflects a broader global trend. Continued scrutiny and potential legislative action will likely shape the future of online content moderation and the fight against disinformation. What remains to be seen is whether these efforts will be sufficient to safeguard against the increasingly sophisticated tactics employed by those seeking to manipulate public opinion and undermine democratic institutions.
