Our work to describe and advocate for a duty of care regime for online harm reduction has always been rooted in our belief that systemic, risk-based regulation – such as that established in countless other sectors – is the most appropriate approach for the online environment, requiring companies to account for, and mitigate, the harm that arises from the design and operation of their systems rather than focusing on individual items of content.[1]

Our judgement on an initial, but close, reading of the Online Safety Bill is that, compared to the draft, the approach has become more clearly systems-based. This is partly a result of DCMS tidying up many errors and loose ends and making positive drafting improvements in relation to both the risk assessment and safety duties, as well as the description of harm and functionalities. It is also due to a clearer risk assessment process, which we describe below. However, by retaining the categorisation of services – which we have argued against previously, as did the Joint Committee – the risk-based regime does not apply equally and will lead to gaps in enforcement and the likelihood of harms arising and proliferating unchecked on smaller, but potentially fast-growing, platforms before the process for recategorising them can kick in. Very large size can itself be an indicator of risk, and using it as a proxy brings administrative simplicity, but it is wrong to suppose that smaller size means lower risk. We continue to argue for the categories of services to be removed.

System-focused

Overall, we welcome the fact that the language used around risk assessments reflects that used in other regulatory systems and sectors where there is considerable experience of using this technique to reduce harm. This is a positive development. For instance, risk assessments must now be “suitable and sufficient” – language we proposed some time ago.[2] The Explanatory Notes describe this as meaning (in relation to illegal content, for example) that:

“service providers will therefore need to assess how likely content is to be illegal, and therefore how likely it is that illegal content is present on their service.”[3]

This description, however, does not focus so much on the role of the hazards in constituting that problem, although an emphasis on features can be seen, for example in clause 8(5), especially sub-clause (d), which identifies things a service should cover in its risk assessment. The question is whether the language chosen, particularly in relation to the safety duties, implicitly pushes platforms mainly towards addressing the problem – that is, the content – rather than the underlying causes and exacerbating factors of platform design. For example, the specific duties in clause 9(3) (takedown of content and minimisation) can be seen this way. Conversely, clause 9(2), which obliges service providers to “take or use proportionate measures to effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service”, ties back more clearly to harm and the risk assessments. Given this possible ambivalence between a systems-based and a content-based approach, the guidance and codes of practice that will come from OFCOM take on greater significance.

We note that the language in the safety duties remains that of proportionality (as opposed to reasonableness – the threshold found in other duties of care), defined by reference to size and resources as well as to the level of risk and severity of harm.[4] This definition is new.

It is good to see the emphasis that the duties “apply across all areas of a service, including the way it is operated and used as well as content present on the service”[5] – though we note a significant omission in that this language is not replicated in relation to content that is harmful to adults. Having said that, the expanded definition of “harm” (clause 187) provides that references to harm include “the manner of [content’s] dissemination (for example, content repeatedly sent to an individual by one person or by different people)”.[6] Presumably this could include where the platform personalises content to similar effect (assuming any other relevant thresholds are satisfied), as well as pile-ons and mobbing. Whether this is sufficient, given the omission in the safety duty for content harmful to adults itself, remains to be seen. Another issue in this context is that the fact of dissemination – in the context of the children’s safety duty – is specifically excluded in relation to some of the duties: clause 11(3) (prevention from encountering content) and clause 11(5) (terms of service in relation to clause 11(3)). There is some degree of tension between this and clause 11(4).

The definition of harm itself is modified to be equivalent to “risk of harm and potential harm”, which is important given the centrality of risk assessments (which take place before any harm has occurred) to the regime.[7]

More generally, the role of functionalities (and of options other than takedown) is more visible than in the draft Bill. While the risk assessments always took functionalities into account (though the definition is tucked away at the back of the Bill), new examples include language in the safety duties – clauses 9(4), 11(4) and 13(4) (though the list in 13(4) is very limited); and analogous provisions for search (clauses 24(4) and 26(4)). Whether these lists are sufficient requires further thought: on user-to-user platforms, how are disposable accounts dealt with, for example, and how do default settings get taken into account? The user empowerment obligations refer to features (not just in the rules around engagement with anonymous accounts but more generally in clause 14). Even the new “proactive technology”[8] provisions act to surface the role of the underlying system in the content environment created.

Risk assessment process

OFCOM can carry out a broadly based risk assessment of all harms. This assessment (which was found in the draft Bill at clause 61 and is largely unchanged, aside from the welcome requirement to publish a risk register and the risk profiles) underpins the entire regime. Here, there remains a welcome focus upon the “characteristics” of the service (that is, its functionalities, user base, business model, governance and other systems and processes[9]) rather than just the content[10] – though the risk of harm to be taken into account is only that deriving from the three types of content (addictive design per se would not seem to be covered, for example). Nonetheless, a broadly based risk assessment will inform better policymaking downstream.

The Bill does not seem to explicitly require that OFCOM assess all risks at once. This is pragmatic, as OFCOM must be able to prioritise the more serious and significant issues when the regime starts up; for instance, it could begin by assessing risks to children and from terrorism. Recent comments by OFCOM at an event on the Online Safety Bill suggest that this will be their approach, with four codes of practice ready for consultation at Royal Assent, covering terrorism, CSEA, priority illegal content and the risk assessment.

In contrast to the draft Bill, OFCOM cannot develop risk profiles by reference to identified harms to adults that are not designated as priority.[11] Does this mean that non-designated content that is harmful to adults nonetheless poses no relevant risk? Publishing OFCOM’s risk assessment will potentially reveal those unaddressed harms. Moreover, OFCOM has an obligation to carry out reviews of content that is harmful to children and content that is harmful to adults, and to publish reports no more than three years apart.[12] The report must include advice as to whether the regulations specifying the priority content need to be changed, though the Secretary of State is not obliged to follow that advice. Note that there is no equivalent update process as regards illegal content.

A significant and positive change to risk assessment in the regime is the inclusion of specific provisions allowing OFCOM to take action against deficient risk assessments.[13]

Perhaps as a consequence of the obligations relating to fraudulent advertising, the exclusion of “paid-for advertisements” found in the draft Bill has been removed. This suggests that ad delivery systems in general could be relevant to the risk assessment and risk mitigation duties, in addition to the specific provisions on fraudulent ads (which seem to relate to specific content-based rules). We feel this is potentially an important positive step, given the role of advertising funding and the overall business model in supporting certain types of harm. The characteristics of advertising delivery systems should explicitly be brought into scope for risk assessment.

Overall, these small changes to risk assessments, and the increased role of the system in managing risk, are welcome and should strengthen the regime by making it more systemic.


[1] See our full reference paper from 2019 here: https://www.carnegieuktrust.org.uk/content/uploads/2019/04/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf

[2] “Suitable and sufficient” now aligns with the Management of Health and Safety at Work Regulations 1999 https://www.hse.gov.uk/managing/delivering/do/profiling/the-law.html

[3] Explanatory Notes, para 81

[4] See e.g. clauses 9(9) and 11(9) https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[5] Clauses 9(4), 11(4), 24(4) and 26(4) https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[6] Clause 187(3)(c) https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[7] Clause 187(5) https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[8] Defined at clause 184 https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[9] Clause 83(12); ‘functionalities’ is defined at clause 186 https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[10] Clause 83(2) https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[11] Clause 83(6) https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[12] Clause 56 https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf

[13] Clause 114 https://publications.parliament.uk/pa/bills/cbill/58-02/0285/210285.pdf