The Algorithm Made Me Do It: How X’s Design Fuels Real-World Harm
Over 27 million impressions. That's how many times posts falsely identifying the perpetrator of a horrific UK tragedy – the murder of three young girls in Southport – as Muslim, a refugee, or an asylum seeker were served to users on X (formerly Twitter) within just 24 hours. This wasn't a glitch; it was the predictable outcome of a platform engineered to prioritize engagement above all else, even – and especially – when that engagement is fueled by outrage and misinformation. A new technical analysis by Amnesty International confirms what many have long suspected: X's algorithm isn't just failing to curb the spread of harmful content, it is actively amplifying it.
The “Conversation” Machine: How X Prioritizes Outrage
Amnesty International’s deep dive into X’s open-source code revealed a disturbing truth about its “heavy ranker” model – the system that determines which posts gain prominence. The algorithm isn’t designed to promote truth; it’s designed to promote “conversation.” Whether that conversation is constructive, hateful, or built on outright lies is irrelevant: if a post generates reactions, it rises to the top. This creates a perverse incentive structure in which falsehoods, even demonstrably harmful ones, can outpace verified information in users’ timelines. As Pat de Brún, Head of Big Tech Accountability at Amnesty International, explains, X’s choices create “heightened risks amid a wave of anti-Muslim and anti-migrant violence.”
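To make the logic concrete, here is a minimal illustrative sketch in Python. It is not X's actual heavy-ranker code: the weights, field names, and function are invented for clarity. The point is structural: when the only objective is predicted interaction, nothing in the score distinguishes a divisive falsehood from a careful correction.

```python
# Illustrative sketch only: a hypothetical engagement-weighted scoring
# function, not X's actual heavy-ranker implementation. All weights and
# field names are invented to show the shape of the incentive.

from dataclasses import dataclass


@dataclass
class EngagementPrediction:
    """Predicted probabilities that a user will interact with a post."""
    p_reply: float   # probability the user replies
    p_repost: float  # probability the user reposts or quotes
    p_like: float    # probability the user likes the post


def rank_score(pred: EngagementPrediction) -> float:
    """Combine predicted interactions into a single ranking score.

    Note what is absent: nothing here asks whether the post is true,
    hateful, or harmful. A post that provokes angry replies scores just
    as well as one that sparks constructive discussion.
    """
    # Hypothetical weights: replies and reposts ("conversation") count far
    # more than passive likes, so contentious posts rise fastest.
    return 30.0 * pred.p_reply + 10.0 * pred.p_repost + 1.0 * pred.p_like


# A reply-baiting falsehood outranks a calm, accurate correction.
falsehood = EngagementPrediction(p_reply=0.20, p_repost=0.10, p_like=0.05)
correction = EngagementPrediction(p_reply=0.02, p_repost=0.03, p_like=0.15)
print(rank_score(falsehood) > rank_score(correction))  # True
```

Under this kind of objective, "maximizing conversation" and "maximizing outrage" are often the same optimization problem.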
The Premium Subscriber Boost: Amplifying Harmful Voices
The problem is further compounded by built-in biases favoring X Premium (formerly Twitter Blue) subscribers. Posts from these paid accounts are automatically promoted, giving them an outsized advantage in reaching wider audiences. This amplification effect was particularly evident in the aftermath of the Southport attack, when accounts known for spreading anti-immigrant and Islamophobic content, such as “Europe Invasion,” garnered millions of views. The platform essentially hands a megaphone to those already peddling hate, accelerating the spread of dangerous narratives.
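Again as a rough illustration, and again a hypothetical sketch rather than X's real code, a flat subscriber multiplier layered on top of an engagement score means paid accounts start every ranking contest with a head start, regardless of what they post:

```python
# Illustrative sketch only: a hypothetical subscriber boost applied on top
# of an engagement score. The multiplier value is invented; the structural
# effect is the point.

PREMIUM_BOOST = 2.0  # hypothetical visibility multiplier for paid accounts


def boosted_score(base_score: float, is_premium: bool) -> float:
    """Apply a flat visibility boost to posts from paying subscribers."""
    return base_score * PREMIUM_BOOST if is_premium else base_score


# A paid account pushing a harmful narrative outranks an unpaid account
# posting accurate information, even with identical engagement predictions.
print(boosted_score(10.0, is_premium=True))   # 20.0
print(boosted_score(10.0, is_premium=False))  # 10.0
```

Because the boost is unconditional, it compounds the engagement incentive rather than correcting it: whoever pays, and provokes, wins the timeline.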
From Policy Changes to Real-World Violence
The current situation isn’t a sudden anomaly. It’s the direct result of significant changes implemented since Elon Musk’s acquisition of Twitter in late 2022. Mass layoffs of content moderation staff, the reinstatement of previously banned accounts (including that of far-right activist Tommy Robinson), and the disbanding of the Trust and Safety advisory council all weakened safeguards against harmful content. Robinson, banned from most mainstream platforms for hate speech violations, saw his posts reach an unprecedented 580 million views in the two weeks following the Southport tragedy – a chilling illustration of the platform’s new reality.
Musk’s Role: Amplifying the Flames
The issue isn’t limited to algorithmic design and policy changes. X’s owner, Elon Musk, himself actively amplified false narratives surrounding the attack. His reply of “civil war is inevitable” to a video posted amid the escalating riots further stoked tensions and lent legitimacy to extremist viewpoints. This direct intervention from the platform’s leader demonstrates a troubling disregard for the consequences of unchecked misinformation.
The Regulatory Response and the Road Ahead
The UK has responded to the fallout from the Southport tragedy with arrests and prosecutions for inciting violence online. A parliamentary report in July 2025 highlighted how social media business models incentivize the spread of misinformation. However, legal frameworks like the UK’s Online Safety Act (OSA) and the EU’s Digital Services Act (DSA) are only as effective as their enforcement. Currently, X’s opaque practices and deliberate design choices continue to pose significant human rights risks.
The Amnesty International report underscores a critical point: algorithmic accountability isn’t just a technical issue, it’s a human rights issue. The prioritization of engagement over safety has real-world consequences, as tragically demonstrated in Southport. The future of online safety hinges on robust regulatory enforcement, increased transparency from platforms, and a fundamental shift in how social media algorithms are designed. We need to move beyond simply reacting to harmful content and start proactively building systems that prioritize truth, safety, and respect for human dignity.
The challenge now is to ensure that platforms like X are held accountable for the harms they enable. This requires not only stronger regulations but also a critical re-evaluation of the economic incentives that drive the spread of misinformation. What role will AI play in identifying and mitigating algorithmic bias? And how can we empower users to navigate the increasingly complex information landscape? These are the questions that will define the next chapter in the fight for a safer, more informed online world.
Share your thoughts on the future of algorithmic accountability in the comments below!