As disinformation spreads during UK riots, regulators are currently powerless to take action
LONDON — Ofcom, the U.K.'s media regulator, was chosen by the government last year to police harmful and illegal content on the internet under the country's strict new online safety law, the Online Safety Act.
But even as online disinformation related to stabbings in the U.K. has led to real-world violence, the regulator finds itself unable to take effective enforcement action.
Last week, a 17-year-old knifeman attacked several children attending a Taylor Swift-themed dance class in the English town of Southport, Merseyside.
Three girls were killed in the attack. Police subsequently identified the suspect as Axel Rudakubana.
Shortly after the attack, social media users falsely identified the perpetrator as an asylum seeker who had arrived in the U.K. by boat in 2023.
On X, posts sharing a fake name for the perpetrator circulated widely and were viewed by millions.
That in turn helped spark far-right, anti-immigration protests, which have since descended into violence, with shops and mosques attacked and bricks and petrol bombs hurled.
U.K. officials subsequently issued warnings to social media firms, urging them to get tough on false information online.
Peter Kyle, the U.K.'s technology minister, held conversations with social media firms such as TikTok, Facebook parent company Meta, Google and X over their handling of misinformation spread during the riots.
But Ofcom, the regulator tasked with taking action over failures to tackle misinformation and other harmful material online, is at this stage unable to act effectively against the tech giants allowing harmful posts that incite the ongoing riots, because not all of its powers under the act have come into force.
New duties on