The Online Safety Bill is about reducing harm caused by social media (“user-to-user”) services and search engines. It imposes a series of duties on regulated services and appoints OFCOM as the independent regulator, with powers to sanction companies that do not comply with their duties. Additional enforcement powers include business disruption measures and blocking powers, as well as the much-discussed criminal sanctions on directors. The Bill also does two other distinct but interconnected things: it introduces age-verification requirements for pornography providers (which are not user-to-user services); and it introduces new criminal offences. This makes it a long and complex Bill.

The regulatory regime at its heart is risk-based: regulated services are required to carry out risk assessments of their services with regard to criminal content and content harmful to children and, where they identify harm arising from those types of content and/or the operation of their service, to put in place effective and proportionate risk mitigation plans. The Bill’s risk management regime applies duties based on three categories:

  • Category 1 – user-to-user platforms of significant size and functionality
  • Category 2A – large search engines
  • Category 2B – other user-to-user platforms passing a size threshold.

The category thresholds will be set out in secondary legislation made on OFCOM’s advice. There has been significant concern during the passage of the Bill that small but risky platforms will not be caught.

The principal duties in the original Bill focused on illegal content, content harmful to children and content harmful to adults (though for this last category the duties were effectively only transparency obligations, not take-down obligations). The Government at Commons Committee stage removed the latter duty, replacing it with a duty to enforce Terms of Service and a ‘user empowerment’ duty to give people a way to limit their exposure to content of the types listed in the Bill. The risk assessment of harms to adults was removed as a result of these changes, as was the obligation to publish a summary of that risk assessment.

Within the illegal content and child protection duties, some types of harmful content are treated as ‘priority’. There is a list of priority illegal content in the Bill (Schedules 5–7), with some new offences to be added in the Lords; priority content that is harmful to children was indicated in a Written Ministerial Statement (WMS) in July but has not been incorporated into the Bill. All regulated services are required to comply with the illegal content duty and to take account of rights to freedom of expression and (in a limited manner) privacy. Additionally, services need to carry out a Children’s Access Assessment (CAA) to determine whether or not they have to comply with the children’s duties.

Category 1 services also have to consider: the new duties relating to Terms of Service and user empowerment; anonymity rules allowing users to avoid content from unverified accounts; and limited protections for both journalistic content and ‘content of democratic importance’. Category 1 and 2A services also have fraudulent advertising duties.

The Bill includes new rules for pornography providers that apply beyond social media and search (to “internet services” that display “regulated provider pornographic content”), requiring that children in the UK must not normally be able to encounter such content online, using measures such as age verification. New offences introduced through the Bill include: an offence of sending flashing images (“epilepsy trolling”); two communications offences recommended by the Law Commission; cyberflashing; and, to be introduced in the Lords, offences of facilitating self-harm and intimate image abuse (including deepfake pornography). Broadcast and print media, already regulated or self-regulated, have a carve-out.

If OFCOM decides a platform has failed in its safety duties, it may make orders to correct the behaviour and fine the service. In extremis, OFCOM can apply to the courts to injunct companies that provide services, such as banking or advertising, to a platform, requiring them to stop, or as a last resort order internet service providers not to carry an offending service. OFCOM can also order a company to use a particular technology to detect and combat child sexual exploitation and abuse (CSEA) or terrorism content – for instance by changing its encryption technology. Failure to comply with OFCOM investigations can constitute a criminal offence (non-compliance with an information order). Additionally, a new amendment imposing criminal liability on senior managers for repeated failures to comply with the regime will be brought forward in the Lords.

The Bill is a framework regime: much of its detail requires secondary legislation to be enacted after Royal Assent and/or requires OFCOM to consult on a series of codes of practice and guidance. OFCOM’s recent roadmap indicated that the regime would not be fully in place until 2025, and delays to the Bill since then mean this has slipped further. The Secretary of State also has significant powers to change the regime through secondary legislation.
