Ofcom will regulate Facebook and Instagram in crackdown on ‘harmful content’

Ofcom will take on a new role regulating internet firms, ministers confirmed today.

The broadcast regulator will be in charge of a statutory 'duty of care' for websites that allow users to share content.

That will include Facebook, Instagram, Twitter, YouTube and Reddit – as well as smaller sites that include forums, comments and video sharing.

But unlike its regulation of broadcasters, Ofcom won't respond to individual complaints from the public about legal content.

Instead, companies will be allowed to decide what kind of behaviour and content is appropriate on their platforms.

But they'll have to set out "clear" conditions and enforce them "effectively" and "consistently".

In their response to a consultation on 'online harms', Home Secretary Priti Patel and Culture Secretary Baroness Morgan suggested the new plan would include "measures to prevent children from accessing age-inappropriate material."

The response says: "We expect companies to use a proportionate range of tools including age assurance, and age verification technologies to prevent children from accessing age-inappropriate content and to protect them from other harms."

But it also pledges not to "prevent adults from accessing or posting legal content, nor require companies to remove specific pieces of legal content."

Last year the government shelved its online age verification law after years of bungling and criticism from the tech industry.

A statutory duty of care for internet companies with an independent regulator enforcing new guidelines against online harms was first proposed in a Government White Paper last year.

In a joint statement, the two ministers said the regulations would cover illegal material such as sexual abuse images as well as terrorist propaganda, cyberbullying and images of self-harm.


But they insisted freedom of expression would be protected through safeguards.

The statement read: "We will ensure that there are safeguards in the legislation, so companies and the new regulator have a clear responsibility to protect users’ rights online, including freedom of expression and the need to maintain a vibrant and diverse public square."

And the response suggested the introduction of greater transparency in the process of content removal, and the introduction of an appeals process.

On the decision to appoint Ofcom as the regulator, the ministers said it was based on "feedback from the consultation."


They said: "This would allow us to build on Ofcom’s expertise, avoid fragmentation of the regulatory landscape and enable quick progress on this important issue."

The proposals suggest Ofcom would not be tasked with regulating political campaigning or advertising on social platforms, beyond ensuring firms don't allow terrorist propaganda.

They say "electoral integrity and related online transparency issues" will be examined in the Cabinet Office's Defending Democracy review.

Ben McOwen Wilson, UK managing director of YouTube – which is owned by Google – said the video platform would work with the Government and is keen to debate the issue.

"The Online Harms Consultation is of great importance to YouTube as well as to all British users of our products and services," he said.

"To help keep our community safe, we haven't waited for regulation; we've created new technology, hired expert reviewers, worked with external specialists, and reviewed our policies to ensure they are fit for the evolving challenges we face online.

"Our work has the most impact when companies, Government and communities work together. We look forward to working in partnership with the Government and Ofcom to ensure a free, open and safer internet that works for everyone."

But the Internet Association, the trade body which represents internet firms including Amazon, Facebook, Google, Microsoft and Twitter, said the companies were keen for further debate over what it called "issues of concern", including potential punishments for not removing content which could be considered harmful but is not illegal.
