The UK’s regulatory response is siloed. Meanwhile, disinformation is not.
In the wake of the Southport riots and the virulent anti-immigrant disinformation that followed, one question looms large: are UK institutions still equipped to deal with online falsehoods that fuel real-world harm?
Despite the introduction of the Online Safety Act (OSA), the UK’s flagship digital regulation, our study shows that the law is simply not built to confront the scale, structure, or sophistication of disinformation today.
Disinformation isn’t just “bad content.” It’s an engineered system: a network of narratives, actors, and incentives that evolves across platforms, adapts to current events, and exploits platform mechanics to maximize impact. And it moves at a pace regulation has not kept up with.
Across every major platform — X (formerly Twitter), Facebook, TikTok, and YouTube — the study found a consistent pattern: emotionally charged and divisive content is algorithmically rewarded, not penalized. Disinformation about immigrants, crime, and “cultural decline” was not just tolerated but supercharged by algorithms designed to optimize engagement.
X Premium users, many aligned with far-right ideologies, enjoyed algorithmic boosts that gave visibility and legitimacy to false narratives. Elon Musk’s own posts during the Southport riots, like his now infamous “civil war is inevitable” remark and endorsement of the #TwoTierKeir hashtag, went viral — feeding outrage loops that spilled onto Facebook and YouTube. These were not edge cases. They were features of a system that monetizes attention, regardless of harm.
The OSA’s approach remains rooted in moderating specific pieces of illegal or harmful content, a fundamental mismatch for today’s disinformation problem.
False narratives don’t spread in isolation. They are seeded in niche communities, repeated in echo chambers, and opportunistically amplified by influencers and sock puppet accounts when a real-world crisis arises. Yet the OSA has no meaningful tools to monitor or disrupt these patterns of coordinated manipulation.
It doesn’t track how sock puppets flood dozens of Facebook groups. It doesn’t account for how a single viral tweet by a high-profile user can set off a chain reaction across platforms. It doesn’t examine how certain hashtags or talking points, like “two-tier policing” or “immigrants over pensioners,” are iteratively refined and normalized through repetition.
The UK’s regulatory response is also siloed. Meanwhile, disinformation is not. The same false claim might be seeded in a Telegram group, laundered through a partisan YouTube channel, and then go viral via memes on TikTok. Each platform plays a different role in the narrative lifecycle, yet current policies treat them as separate ecosystems rather than an interconnected whole.
This lack of cross-platform accountability makes it easy for malign actors to migrate, adapt, and relaunch. And because regulators depend heavily on platform cooperation, which can be withdrawn or deprioritized at any time, enforcement is often inconsistent and reactive.
If the UK is to address this threat, its approach to regulation must evolve. Tackling disinformation effectively means moving beyond the outdated model of post-by-post moderation. Instead, authorities need to proactively detect and disrupt coordinated disinformation networks, focusing not just on what is being said but on how and by whom it spreads.
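To make that shift concrete, consider how open-source researchers often look for coordination in practice: rather than judging any single post, they flag groups of accounts that repeatedly share the same links or talking points within very short time windows, a typical signature of coordinated amplification. The sketch below is a minimal, illustrative Python example of that idea only; it is not a method prescribed by the OSA or used in this study, and the field names, thresholds, and data format are hypothetical.

```python
# Minimal sketch: flag pairs of accounts that repeatedly share the same link
# within a short time window -- a common signature of coordinated amplification.
# All field names, thresholds, and input assumptions here are illustrative.
from collections import defaultdict
from itertools import combinations  # not strictly needed; kept for clarity of intent

WINDOW_SECONDS = 60      # same-link posts this close together count as co-posting
MIN_SHARED_LINKS = 3     # pairs must co-post at least this many distinct links

def find_coordinated_pairs(posts):
    """posts: iterable of dicts with 'account', 'url', and 'timestamp' (unix seconds)."""
    # Group every share of a link by that link.
    by_url = defaultdict(list)
    for p in posts:
        by_url[p["url"]].append((p["timestamp"], p["account"]))

    # For each link, find pairs of accounts that shared it within the window.
    pair_urls = defaultdict(set)
    for url, shares in by_url.items():
        shares.sort()  # order by timestamp
        for i, (t1, a1) in enumerate(shares):
            for t2, a2 in shares[i + 1:]:
                if t2 - t1 > WINDOW_SECONDS:
                    break  # later shares are even further apart
                if a1 != a2:
                    pair_urls[tuple(sorted((a1, a2)))].add(url)

    # Keep only pairs that co-posted enough distinct links to look coordinated.
    return {pair: urls for pair, urls in pair_urls.items()
            if len(urls) >= MIN_SHARED_LINKS}
```

Real investigations layer network analysis, content clustering, and manual review on top of heuristics like this; the point is simply that the unit of analysis becomes the pattern of behavior across accounts, not any individual post.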
Platforms must also be held accountable for the algorithmic choices that drive virality. Transparency around how content is prioritized and amplified is essential to understanding and countering the systemic incentives behind digital falsehoods. Without regulation that directly addresses these amplification mechanisms, harmful narratives will continue to flourish.
Access to data is another critical gap. Independent researchers, journalists, and civil society groups must be able to study disinformation in real time, across platforms. This means mandating reliable data-sharing frameworks that don’t depend on the goodwill of tech companies.
The aftermath of Southport shows what happens when disinformation meets a permissive digital ecosystem and a weak regulatory response: immigrants were scapegoated, protests turned violent, and a far-right political party surged in popularity. Until the UK confronts disinformation as a systemic, networked threat, rather than a series of content moderation failures, it will remain vulnerable. – Rappler.com
This article is part of a larger investigation into the Southport riot and the disinformation ecosystem surrounding immigration in the UK. You can read the full report on The Nerve’s website.
Decoded is a Rappler series that tackles Big Tech not just as a system of abstract infrastructure or policy levers, but as something that directly shapes human experiences. It is produced by The Nerve, a data forensics company that enables changemakers to navigate real-world trends and issues through narrative and network investigations. Taking the best of human and machine, we enable partners to unlock powerful insights that shape informed decisions. Composed of a team of data scientists, strategists, award-winning storytellers, and designers, the company is on a mission to deliver data with real-world impact.