Legal but harmful content is dealt with in relation to children and adults separately, but both sets of provisions have the same structure and harm threshold. It seems the intention is that OFCOM’s overarching risk profile (clause 61), on which they will be required to consult widely, will then lead to the cascade of risk assessment and risk mitigation duties that fall to companies. The continuous nature of the risk-assessment process will, to an extent, also provide an element of future proofing, whereby new risks can be identified and addressed. The mitigation duties differ, however, between the adults’ safety duty and the children’s safety duty.

The threshold of psychological or physical harm is significant – if this is set too high then this part of the regime will be greatly limited in its effect. Note the requirement is that the adverse impact must be ‘significant’. The threshold is not elaborated on the face of the Bill and is only described vaguely in the Explanatory Notes with regard to children (EN para 273[1]). The meaning of “psychological harm” is potentially problematic in this regard. Given the regime is based on the duty of care, existing meanings from tort law may affect the threshold. In tort law, similar-sounding thresholds for psychological harm have been set so high as to be of little use: they tend to revert to something like ‘a recognised psychiatric condition/injury’, i.e. a medical definition. Similar concerns arise in the criminal law context; the Law Commission has criticised both[2]. We understand that the government’s intention is for the threshold to be below the much-criticised high thresholds of the criminal law on emotional harm and of tort. This is desirable. But such a vital threshold needs to be set out either as a task for OFCOM to define or, preferably, on the face of the Bill.

As noted earlier, the Bill is not clear as to whether an assessment of harm is to be done by considering the impact of an individual item of content or the cumulative impact of such content taken together (note that the word ‘content’ is used whether referring to a single item or to multiple items). How OFCOM interprets these thresholds in a regulatory regime needs to be explained; they are central to the regime.[3] The Bill should be explicit that the relevant threshold of harm can be reached by the operation of the platforms’ systems and not just by reference to content alone. The government or OFCOM should expand upon how systems and processes can cause harm.

The thresholds for children should be at least the same as the existing rules for video-sharing platforms in the Communications Act.[4] Under-18s are to be protected from restricted material, which includes material that might “impair the physical, mental or moral development of persons under the age of 18”, following the principle that material with the most potential to harm those under the age of 18 must be subject to the strictest access control measures.[5] The wording of the draft Bill seems to set a higher threshold for intervention, lowering protection.

As regards harm to adults, there are two specific areas where we feel the Bill is (either deliberately or otherwise) unclear and user protections are weakened as a result. Firstly, clause 11 (“Safety duties protecting adults: Category 1 services”) states that services have a “duty to specify in the terms of service” how “priority content” and “other content that is harmful to adults” should be “dealt with by the services.”[6] We understand that the policy intention here is to ensure that the responsibility for risk assessment, and then for setting the “tolerance threshold” for legal but harmful content, sits with companies rather than with the government. However, “dealt with” is a phrase that has no qualitative meaning: it does not state whether the content has to be dealt with positively, negatively or by deciding not to do anything about the problem. (There is precedent for the challenge posed by this type of language, e.g. the current case concerning the Irish Data Protection Commission’s use of the term “handling” – many complaints were deemed to have been “handled” without a decision being taken.)[7] Contrast the position for the children’s safety duty, where the obligation is to “mitigate and effectively manage” risks (cl 10(2)).

We have no desire to see all platforms having to set exactly the same threshold for speech and behaviour, but it is important to remember that safety duties are not just about moderation and take-down. For example, a platform that wanted to adopt a more ‘anything goes’ approach might want to ensure effective warnings at point of entry or provide its users with tools to self-curate as they adjust to risks within that online environment. It is unclear to what extent the provisions outlining the effect of the codes (cl 37), which should reflect the online safety objectives in clause 30, cut down platforms’ choice in this context, especially against the background of a deficient or wilfully blind risk assessment. This part of the Bill relies upon platforms’ enforcement of their own terms of service (as against users). In so doing, it loses the close connection with the characteristics of platform design and operation, and their impact on content creation (e.g. through financial or other incentives), information flows and user empowerment (e.g. through usable curation tools), that flows from a systemic approach. By contrast, the illegal content and child safety duties emphasise the importance of these “characteristics”.

Note also that the designation of types of content as priority content seems to have no impact on the platform’s response to that problem – it just means that the platform must “deal with” the topic in its terms of service, irrespective of whether that issue has come up in its risk assessment.

Secondly, clause 46 sets out the meaning of “content that is harmful to adults”, using the formulation that the provider of the service has “reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities”. This is a different phrasing to that set out in the Full Government Response (“the legislation will set out that online content and activity should be considered harmful, and therefore in scope of the regime, where it gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”; para 2.2[8]). Does the shift indicate that a different threshold is in play? Note that the Communications Act already imposes on some user-to-user platforms an obligation of “protecting the general public from videos and audio-visual commercial communications containing relevant harmful material”. “Relevant harmful material” includes material containing violence or hatred against a group of persons, or a member of a group of persons, based on any of the grounds referred to in Article 21 of the Charter of Fundamental Rights of the European Union of 7 December 2000. While some of this material would fall under illegal content, not all the categories are protected by the criminal law. This means that any such types of content would fall to be assessed under clause 46(3), which might lead to difficulties in the context of a high volume of low-grade abuse. Arguably, this constitutes a reduction in the level of protection. The commitments made in the G7 communique about tackling forms of online gendered abuse will in part be delivered by this clause; to set a strong international lead, the clause needs to be made to work.[9]

We note that the obligations with regard to harmful but legal content, which are weak as regards user-to-user services, do not apply to search at all. This suggests that there is no obligation to do anything about antisemitic auto-completes, for example, nor any safeguards around the return of results on suicide and self-harm.

[1] NB Explanatory Notes paras 273 and 275 provide more detail which suggests the threshold will be lower, but this is not on the face of the Bill: e.g., “content risks directly or indirectly having a significant adverse physical or psychological impact on a child of ordinary sensibilities. This could be by indirectly resulting in physical injuries or by directly or indirectly resulting in a significant negative effect on the mental state of an individual. This could include causing feelings such as serious anxiety and fear, longer-term conditions such as depression and stress and medically recognised mental illnesses, both short term and permanent”; and “content may be harmful to children in the way in which it is disseminated, even if the nature of the content is not harmful, for example repeatedly sending apparently innocuous content to a user could be bullying and intimidating. in determining whether content is harmful, provider should take into account how many users could be encountering the service and how easily, quickly and widely the content can be disseminated on the service”.

[2] Law Commission, Liability for Psychiatric Illness, 10 March 1998 (LC249); Law Commission, Harmful Online Communications: The Criminal Offences, 11 September 2020 (Consultation Paper 248).

[3] We note that the Digital Minister, in her appearance before the Lords Digital and Communications Committee on 11th May, responded to a question from Viscount Colville on this definition and confirmed that the harms covered by it would be subject to secondary legislation, informed by Ofcom once it has taken expert insight, with a list “compiled by Ofcom, working with expert advice, subject to democratic oversight and parliamentary debate”. Companies would then be “free to decide” how to address the risk and set it out in their terms and conditions. Ms Dinenage also elaborated on the difference between preventing adults “being offended” (which was not the aim of the Bill) and the impact of extremely harmful or extremely emotive content that is often spread by algorithms or where pile-ons target an individual, concluding that “there is a clear distinction between finding something offensive and the potential to cause harm”. (https://committees.parliament.uk/oralevidence/2187/pdf/)

[4] https://www.legislation.gov.uk/ukpga/2003/21/contents

[5] S 368Z1 Communications Act 2003: https://www.legislation.gov.uk/ukpga/2003/21/section/368Z1

[6] We note a change in wording regarding the enforcement of companies’ Terms and Conditions between the draft Bill and the Government response to the Inquiry into Covid and Misinformation (https://publications.parliament.uk/pa/cm5801/cmselect/cmcumeds/894/89402.htm). The latter said platforms “will need to enforce these terms effectively, consistently and transparently” (p 2), while in the draft Bill it is just “consistently” (for example, cl 9(5)(b)), which could well mean badly or not at all.

[7] https://noyb.eu/en/irish-dpc-handles-9993-gdpr-complaints-without-decision

[8] https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response

[9] https://www.g7uk.org/wp-content/uploads/2021/06/Carbis-Bay-G7-Summit-Communique-PDF-430KB-25-pages-5.pdf