Social media users face age checks under new rules to protect children


People could be forced to verify their age before using social media platforms under government plans to protect children.

Facebook, Google, Twitter and other tech giants will face huge fines under a new duty of care, to be enforced by Ofcom, if under-age children can access their services.

The government’s new online safety bill, published yesterday, states that enforcing platforms’ own terms and conditions on minimum age thresholds will form part of its plans to regulate social media.

Children under the age of 13 are not allowed to sign up to Facebook, Twitter, Instagram or YouTube, or to create a Google account. WhatsApp, which is owned by Facebook, has a minimum age of 16.

Most social media companies rely on users self-declaring their age when they sign up. Ofcom, which will oversee the new duty of care, will have the power to recommend that particular platforms introduce age verification if it finds they have failed to prevent under-age children from accessing their sites.

Government sources said that Ofcom could be given stronger powers to force tech firms to carry out age checks on all users if they persistently failed to enforce minimum age rules.

The move could require social media firms to demand that users upload ID to verify their age in the same way that betting firms have to check their customers are aged over 18.

All children under 16 would be barred from using WhatsApp under the move. However, social media companies have warned that it would also exclude millions of users — both young and old — from accessing social media platforms because many do not have the documentation required.

A government source said: “If under-age children are still able to access platforms, you have to protect them. If the only way to do that is age verification, then we will have to introduce age verification.”

Responding to the new online safety bill yesterday, a Facebook spokesman said: “Facebook has long called for new rules to set high standards across the internet. We already have strict policies against harmful content on our platforms, but regulations are needed so that private companies aren’t making so many important decisions alone. While we know we have more to do, our industry-leading transparency reports show we are removing more harmful content before anyone reports it to us.

“These are far-reaching proposals and so it will be important to strike the right balance between protecting people from harm without undermining freedom of expression.”

The Online Harms Foundation criticised the government’s plans, saying that they “overwhelmingly ignored” smaller platforms.

It said that ministers had focused on larger platforms, which are already carrying out much of what the bill demands of them. In a highly critical verdict on the draft laws, the foundation said: “This misplaced focus renders the Bill somewhat redundant. The Bill will also effectively outsource the role of adjudicating on what speech is harmful to Twitter, Facebook and Google — this is the duty of governments, not corporate giants.

“We fear that the government may not only undermine fundamental freedoms of the internet, but make existing problems worse.

“Overzealous removal of legal content also risks further radicalising vulnerable people — the government should ensure that any speech that is legal offline remains so online.

“We shouldn’t forget that the problems we are seeing online started offline. If the government is serious about tackling online harms, they cannot focus on the online world in isolation — we must confront the underlying causes for such behaviour also.”