
Social media companies may have profited from the wave of "hateful messaging" which fuelled last summer's riots, according to a scathing report which warns of the online threats facing Britain. Companies "enabled and even encouraged" the spread of misinformation and put the public in danger, according to a cross-party group of MPs. They warn the country is "vulnerable to a repeat of last summer's crisis".
The Science, Innovation and Technology Committee sounds the alarm that current laws are "unable to tackle the spread of misinformation" and "cannot keep users safe online". It describes how in the wake of the "horrific" Southport murders in July last year, "misleading and hateful messaging proliferated rapidly online, amplified by the recommendation algorithms of social media companies."
The report claims a "false name was seen 420,000 times, with a potential reach of 1.7 billion people". It adds that "social media was used to organise the unrest, with accounts created for the riots including far-right symbols and calls for 'mass deportation'." The MPs warn the Online Safety Act was not designed to tackle misinformation and say there are "major holes" in the legislation. They call for tougher regulation of social media companies.
The Government is "hamstrung" by a "lack of transparency", they claim, about how the algorithms which drive social media recommendations work. The committee also warns that the present law is "out of date" because it fails to address recent advances in artificial intelligence.
Dame Chi Onwurah, who chairs the committee, said: "Social media can undoubtedly be a force for good, but it has a dark side. The viral amplification of false and harmful content can cause very real harm - helping to drive the riots we saw last summer.
"These technologies must be regulated in a way that empowers and protects users, whilst also respecting free speech."
Warning that the Online Safety Act "just isn't up to scratch," she said: "The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn't cross the line into illegality. Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable."
Pushing social media giants to act responsibly, the report states: "We were reassured by the statements from Google, Meta, TikTok and X in our evidence session that they accepted their responsibility to be accountable to Parliament. We hope to see this in practice as we continue our work in this area."
A Government spokesperson said: "We won't stand by while online activity fuels real-world harm, and there are clear laws in place, including a new false communications offence in the Online Safety Act, to target the spread of disinformation online when there is intent to cause harm.
"Under the Online Safety Act, social media platforms have a legal duty to remove illegal content and prevent the spread of illegal disinformation, with robust enforcement powers available to Ofcom.
"Ofcom has also announced a consultation on additional safety measures to stop illegal content going viral including ensuring platforms have crisis response protocols in place."