A new online regulator will fine web companies that fail to protect users, and possibly even block offending websites from being accessed within the UK, according to new government plans.
Companies that run social media platforms, file hosting sites, discussion forums, messaging services and search engines will become responsible for any harmful material which they allow their users to share or discover.
The harmful material covered includes content with a “clear definition”, such as child sexual abuse and terrorist material, as well as content without a clear definition, such as cyberbullying and disinformation.
Although the absence of a strict definition of “harmful material” has prompted civil liberties campaigners to express concerns about the plan, Culture Secretary Jeremy Wright said it would give the new regulator enough flexibility to tackle new harms.
Technology companies “had their chance to put their own house in order” but failed to do so, said Home Secretary Sajid Javid, launching a consultation on the plans.
“For too long they have failed to go far enough and fast enough to help keep our children safe. They have failed to do the right thing – for their users, for our families, and for the whole of society.
“And they have failed to show the moral leadership we expect of those trusted with the right of self-regulation.”
Mr Wright agreed: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”
Teenager Molly Russell took her own life six days before her 15th birthday in 2017 after viewing self-harm and suicide material on Instagram.
Her father Ian said the images “helped kill her” and although the family’s subsequent campaign forced the social media company to ban “graphic self-harm or graphic suicide related content on Instagram”, a Sky News investigation found such material remained on the platform.
Sky News also found that YouTube videos celebrating the New Zealand mosque shootings were easily evading the platform’s moderation efforts, despite a general clampdown across social media platforms.
Speaking to Sky News’ home editor, Jason Farrell, Mr Javid said he was shocked when Facebook told him that it wasn’t able to prevent the live-streaming of the terror attack on Facebook Live.
He said he had recently pressed the companies on tackling the live-streaming of child sexual abuse, and when they admitted they had not considered how to prevent the streaming of a terror attack, he realised they were not capable of self-regulation.
The new proposals follow the government’s pledge to make the UK “one of the safest places in the world to be online” after a number of scandals in which harmful content on social media was blamed for causing damage offline.
The proposals on online harms, drawn up by the Home Office and Department for Digital, Culture, Media and Sport, say a regulator will be appointed to ensure companies meet their responsibilities.
The regulator would be there to monitor a new duty of care, which will give companies a legal responsibility to ensure the safety of their users.
If companies are found to have fallen short of these standards then the regulator could fine the company a “substantial” amount, block the sites from being accessed within the UK, or even make individual members of senior management legally liable.
The costs for the new regulator are unclear, although the government said it hopes that its funding would come from the technology sector itself and not the public purse.
“To recoup the set-up costs and ongoing running costs, the government is considering fees, charges or a levy on companies whose services are in scope,” stated the white paper.
“This could fund the full range of the regulator’s activity, including setting and enforcing codes of practice, preparing transparency reports, and any education and awareness activities by the regulator.”
Areeq Chowdhury, the chief executive of digital think tank WebRoots Democracy, told Sky News: “Light-touch regulation will do nothing to prevent the spread of online harms and we, therefore, welcome the steps set out by the government today in its white paper.
“In particular, the ability to suspend websites which fail to take sufficient action is one which would be necessary for any new regulator.
“We would, however, urge the government to go further and focus on how platforms can best be taxed to help fund offline action on the root causes of these harms.”
The executive director of freedom of expression organisation Article 19, Thomas Hughes, told Sky News that the organisation opposed the “duty of care” principle over fears it would lead to mass surveillance.
“We believe a duty of care would inevitably require them to proactively monitor their networks and take a restrictive approach to content removal. Such actions could violate individuals’ rights to freedom of expression and privacy.”
The white paper states that the regulator “will not compel companies to undertake general monitoring of all communications on their online services” but does state that it “will introduce specific monitoring requirements”.
It adds: “The government believes that there is, however, a strong case for mandating specific monitoring that targets where there is a threat to national security or the physical safety of children.”
The white paper also says that “any requirements to scan or monitor content for tightly defined categories of illegal content will not apply to private channels”, although it has not yet defined what a private channel is.