UK Online Safety Bill faces the chop amid prime minister vacuum

Legislation would have forced platforms to monitor for fake or abusive users.

Ben Wodecki

July 14, 2022

3 Min Read


Progress on the U.K.’s Online Safety Bill – which aims to regulate how online platforms protect users and their data – has been delayed as the country lacks a government to pass the legislation.

Prime minister Boris Johnson was effectively forced to resign after around 60 government ministers and officials – including the Chancellor of the Exchequer and the Health Secretary – quit in protest over his leadership.

Johnson’s government fell apart after a string of scandals that included breaking lockdown rules, appointing a minister he knew was the subject of sexual abuse claims and accepting funds from a party donor to refurbish his Downing Street flat.

A new leader of the ruling Conservative Party will not be elected until early September – leaving a vacuum in government as ministers wrangle over allegiances.

One government source told The Guardian that the bill has been pushed back to the fall – and could be killed off by the new prime minister.

Several ministers vying for Johnson’s position have publicly expressed displeasure with the bill. Former leveling-up minister Kemi Badenoch, who came fourth in the first round of the party’s leadership election, tweeted that the bill was “in no fit state to become law.”

The reason for the bill’s delay is that opposition Labour politicians tabled a motion of no confidence in the outgoing Johnson, who has sought to cling to power as a ‘caretaker’ prime minister – forcing other legislative business to be shunted down the road.

Shadow culture minister Alex Davies-Jones suggested work on the bill “might all now be for nothing” and that the government was “prioritizing their own ideals over people’s safety online.”

What is the Online Safety Bill?

The legislation, which was spearheaded by Culture Secretary and Johnson loyalist Nadine Dorries, is wide-reaching – covering everything from the spread of illegal content online to protecting social media users from online trolls.

It aims to hold big tech companies accountable for content posted on their platforms – imposing a ‘duty of care’ on platforms hosting user-generated content, with breaches resulting in hefty fines from media regulator Ofcom. That would impact social media sites such as TikTok and Facebook as well as search engines such as Google.

Platforms that fail to comply face fines of up to £18 million (around $21 million) or 10% of their global annual turnover, whichever is higher. Possible sanctions for impeding an Ofcom investigation include jail time.

An early sticking point in the bill related to the algorithms used to spot abusive content, with critics arguing that simply flagging content does not result in effective action. In response, the U.K. government added provisions requiring user verification on social media. Under the proposed law, social media users would be given the choice of only viewing content from users who have verified their accounts.

First published last year, the bill is at a crucial legislative stage, requiring sign-off by members of Parliament before it passes to the House of Lords for debate.

The House of Lords does not have the legislative power to reject a bill – it can only delay one. Once ratified by the Lords, the bill would receive royal assent, whereby the monarch formally agrees to make it an act of law. Once assent is announced in both houses, the bill becomes law.

Notably, the government changed the enforcement period from 22 months to just two – meaning platforms have very little time post-royal assent to ensure full compliance.

About the Author

Ben Wodecki

Assistant Editor
