
Online harms regulation

Under the Online Harms White Paper, a joint proposal from the Department for Digital, Culture, Media and Sport and the Home Office, an independent regulator would be appointed to enforce 'online harms' standards, and social media firms would have to abide by a "duty of care" to their users or face fines. Tech companies, including social media platforms, file hosting sites, public discussion forums, messaging services, and search engines, would have to take reasonable steps to keep their users safe and to tackle illegal and harmful activity on their services.

In a speech at the British Library, Home Secretary Sajid Javid said: “Every single one of the 2017 terror attacks had an online element, and every month in the UK, some 400 people are arrested for online sexual abuse and exploitation offences. Last year, Facebook removed over 14 million pieces of content relating to terrorism or violent extremism. And in just three months they removed 8.7 million items that breached policies on child nudity and sexual exploitation. But how much more illegal material remains? And how much damage is being done by this cruel content is even less clear. The cyber-bullying, trolling and posts glorifying self-harm. Truly harmful content that’s linked to depression, anxiety, mental health problems and even suicide.”

A 12-week consultation on the proposals was launched on April 8 and closes on July 1. The consultation document points out that terrorist groups use the internet to spread propaganda designed to radicalise the vulnerable, and that terrorists have broadcast attacks live on social media, such as the Christchurch massacre in New Zealand in March 2019. Criminal gangs, meanwhile, use social media to promote gang culture and incite violence.

According to the consultation document, regulatory and voluntary initiatives aimed at addressing these problems have not gone far or fast enough, or been consistent enough between tech companies.

Also proposed are mechanisms for responding to complaints ('redress mechanisms'); codes of practice, issued by the regulator, which could include requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly at election time; and a "Safety by Design" framework so that companies build online safety features into new apps and platforms from the start. The Government acknowledges that a regulator, while protecting users' rights online, would have to be 'particularly mindful' not to infringe privacy and freedom of expression.
