When we published our proposals for extensive amendments to the Online Safety Bill in November 2021, we argued that the Bill was too complex and needed to be simplified and strengthened:

A simpler Bill will lead to better outcomes for victims. Others benefit too: legislators, who will need to scrutinise and amend it further; companies (and their lawyers) who will need to comply; the regulator, Ofcom, who will take enforcement decisions based on it; and civil society organisations advocating for victims.

The Bill’s structure remains complex and opaque, a consequence of policy decisions made by the Government as well as drafting approaches chosen by Parliamentary Counsel. The re-ordering proposed by the Joint Committee to make the objectives of the regime clearer, which was similar to proposals we made in our amended Bill, has in the main not been actioned, and the Bill still lacks any general explanation. The structure of the Bill is cumbersome, with much nesting of qualifiers on terms scattered through the Bill: for instance, risk assessments link back to illegal or harmful content, and harmful content depends on an amended definition of harm (clause 187) and of content (clause 189). Definitions are listed at the end, with others scattered throughout the Bill; an index at clause 190 brings them all together. While this is not unusual, the nesting makes the implications hard to assess. (We also have concerns about the drafting of the journalistic content exemptions (clause 16), and particularly the definition of such content (explanatory notes, para 132), which we will return to in due course.)

The Explanatory Notes have improved a little from the draft. But overall comprehensibility would be improved by the simple use of diagrams and flow charts showing ‘user journeys’ from a number of perspectives[1]. It is in the Government’s interest to make the regime easier to understand – it will help the passage of the legislation as well as reduce the regulatory burden. If the Government insists on keeping the complex structure, then it and/or OFCOM (in due course) should provide better supporting materials.

The Online Safety Objectives (which we had argued should be at the front of the Bill in a new clause 1A) are now in Schedule 4, which is unhelpful. These objectives also overlap with some of the requirements in each of the duties in the main body of the Bill on illegal content and so on.

We do, however, note some welcome simplifications to the structure: the consolidation of the duties in clause 6(2); the incorporation of the child sexual exploitation and abuse (CSEA) and terrorism offences into priority illegal content; and the reordering of the risk assessment duties so that each sits next to the relevant safety duty, which improves flow and readability.

But where new concessions have recently been made, they are introduced as new sections (for example on pornography, user verification and fraudulent advertising), and their impact on the overall schema is as yet unclear. Certainly it has led to some convoluted drafting around pornographic content;[2] and the provisions on user verification (clause 57) have been split off from those on the right to filter out non-verified users’ content and the other user empowerment tools found in clause 14 as part of the Category 1 safety duties.

Moreover, the consequences of removing the boundary between advertising and other content are not clear. If adverts fall within the definition of user-generated content (and it seems that they would, per clauses 181 and 49(3)), then adverts are regulated content, and the machinery behind advert delivery comes within scope where the content is criminal or harmful to children or to adults (noting that economic/financial loss is not a relevant harm). This inclusion is likely to be a step forward, though there will be awkward boundaries to navigate, especially as the special regime for fraudulent ads applies only to some service providers. Fraudulent advertising has the same impact on the victim regardless of where they are harmed by it: the new powers should apply with similar strength to all companies, not just Category 1.

The Bill still leaves much to do in secondary legislation – with resultant delays to clarity about the regulatory scope. For example, companies will not know which category of service they fall into until the Secretary of State has published definitions of the thresholds and laid these as secondary legislation, and OFCOM, “as soon as possible after” those regulations are made, has published the register of categories (clause 81); until then, companies will not know all the duties that could apply to them. The Government could easily give broad indications of its thinking in a carefully caveated speech by the Secretary of State, reducing uncertainty and making debate during passage more practical than theoretical. In the three years since the White Paper, the Government must have formed its outline views on the basics of which companies will be subject to which regimes.

The scope of protection offered by the Bill also will not be clear until after secondary legislation. While there is a preliminary list of priority criminal content in Schedule 7, priority content harmful to adults, and both primary priority content and priority content for children, will be introduced by statutory instrument (for which no timing is given). These areas are critical for victims seeking to understand whether the Bill will protect them in future, as well as for companies that might have to manage these risks. As the Government has embarked upon listing harms in the Bill, it should add the categories of priority content harmful to children and to adults as new Schedules 7(a) and 7(b) and make its position clear by Second Reading.

More detail on what companies will be expected to do to comply with the regime will follow in OFCOM guidance and codes of practice[3], which cannot be produced or consulted upon until after Royal Assent. For example, the Children’s Access Assessment (CAA) is dependent on action from OFCOM. This sequencing is problematic: companies’ obligations under the regime will not become clear until these OFCOM products exist. We note that OFCOM has said recently that it will be ready to proceed with these as soon as practicable and is preparing for “phased implementation” of the regime as its powers come on stream. Note also the existing materials in relation to:

  • the Video-Sharing Platforms provisions;
  • the draft CSEA and Terrorism Codes produced by the Home Office; and
  • the voluntary safety by design guidance from DCMS

though OFCOM’s Online Safety codes may differ from these.

The draft OSB defined illegal content on the basis of when the service provider had “reasonable grounds to believe” something amounted to a “relevant offence”. In forming that reasonable belief, service providers were to have regard only to content on their respective services. There has been a shift: the definition in clause 41(3) is now simply that material “amounts to a criminal offence”. This seems to lose the language recognising that services would be making a judgement call (sometimes treating content as criminal when it is not, and sometimes treating it as acceptable when it is not). The Bill does, however, include language recognising this point in clause 9(6) (“the provider reasonably considers is illegal content”) and clause 24(6) (in identical terms), with regard to consistent enforcement of terms of service/community standards. However, this space for uncertainty is not reflected in the obligations to have systems in place. We question whether this is quite right: surely the obligation there should reflect the fact that a system will operate by reference to classes of content, whereas “amounts to” seems to suggest a more individual assessment of whether content is criminal – and if so, this would make the regime less workable.

It seems that the regime will not start working properly until 2024. The Government should seek to bring some aspects forward – one area worth more study could be making companies’ terms and conditions enforceable under the regime from Royal Assent, which is expected at the end of 2022.

[1] User journey mapping is a technique for understanding complex system problems, espoused in particular by the UK Cabinet Office: https://www.gov.uk/service-manual/design/map-a-users-whole-problem

[2] See e.g. Clause 7(1) https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[3] There are three codes of practice that are named (terrorism, CSEA and fraudulent advertising) and a requirement on OFCOM to produce code(s) on “relevant duties”. OFCOM also has at least 10 requirements to produce guidance (risk assessment, child access, child risk assessment, adult risk assessment, record-keeping, user ID verification, transparency reporting, porn duties, guidance re proactive tech, guidance re enforcement action). More details on what should be in some of the codes can be found in Schedule 4.