By Zuhal Demirci
LONDON (AA) - The UK's new Online Safety Act, which aims to curb false information and online hate speech and is set to take full effect next year, has gained renewed attention after far-right groups used social media to incite violence.
The act, enacted on Oct. 26, 2023, is being implemented in phases and will be fully enforced next year.
Following a stabbing on July 29 in the English seaside town of Southport, where 17-year-old Axel Rudakubana killed three children and injured 10 others, far-right groups used platforms such as Telegram, TikTok and X to spread false claims and to organize protests targeting migrants and Muslims.
The events led to widespread clashes with police, numerous arrests and detentions, and significant property damage across the UK.
British Prime Minister Keir Starmer hinted at tougher measures if social media companies do not do more to remove harmful content, accusing them of fueling violence instigated by far-right groups.
His spokesperson has confirmed the government's focus on "working with the social media companies and ensuring that they’re following their responsibilities" and "getting the existing act implemented quickly and effectively" rather than changing the existing legislation brought by the previous government.
"We’re very clear that social media companies have a responsibility ensuring that there is no safe place for hatred and illegality on their platforms, and we will work very closely with them to ensure that that is the case," the spokesperson said.
The spokesperson added that the government is also supporting law enforcement in pursuing those inciting violence online.
The new law will criminalize sharing false or threatening content intended to cause psychological or physical harm, imposing new responsibilities on social media platforms to remove illegal content such as incitement to racial hatred and criminal activity.
Aimed at protecting both children and adults online, the 286-page law, set to take full effect in the second half of next year, will require technology companies to take greater measures to safeguard children from harmful material.
Social media platforms will need to remove content related to child sexual exploitation and abuse, coercive behavior, promoting or facilitating suicide or self-harm, animal cruelty, illegal drug or weapon sales, and terrorism.
Providers will also need to implement systems to reduce the risk of their services being used for illegal activities.
Once the law is in effect, Ofcom, the UK’s media regulator, will oversee its enforcement and will have the authority to take action against companies that fail to meet their new obligations.
Companies failing to comply could face fines of up to £18 million ($23.3 million) or 10% of their global revenue, whichever is greater.
Senior executives may also face criminal charges if they fail to comply with Ofcom's information requests, or if their platforms fail to comply with enforcement notices related to child sexual exploitation and abuse.
The regulator is continuing public consultations on the act's obligations.
*Writing by Gizem Nisa Cebi in Istanbul