Is the regime systemic?
We welcome the systemic approach that the draft adopts, underpinned by risk assessments. We see the regulatory framework as “systemic” in two respects. In its risk assessment obligations (cl 7(8)-(10), cl 16(1)(6) and cl 61), as well as through the Online Safety Objectives, it focuses on the design and business model of the platforms, covering the processes and functioning of the platform as well as its ‘characteristics’. In addition, the use of a risk assessment-based model emphasises the role of systems and processes in governance.
Although this is positive, we do have questions as to whether the drafting consistently reflects this approach. The fact that the threshold for action is defined by specified types of content may make it difficult to assess the contribution of the system’s design to content creation and dissemination. For example, an individual item of self-harm content, which is not illegal, may not trigger the threshold for the adult or children’s safety duties as, on its own, it does not lead to the required level of harm. Yet the repeated sending of this content to an individual by a service provider’s systems may nonetheless have such an impact. Another example can be seen in the response on Twitter to Yorkshire Tea’s riposte to a user who had criticised Yorkshire Tea for its teabags appearing in a photograph with Rishi Sunak. Yorkshire Tea’s tweet was popular and much re-tweeted, with many people tagging the original user, leading to that user’s account being temporarily inundated. On TattleLife, for example, the platform is structured so as to direct criticism and gossip at named individuals; while the comments individually are unlikely to be harmful at the threshold specified in the definition of “content harmful to adults”, the targeting might mean that cumulatively there is a problem – effectively, the platform facilitates, through its design, pile-ons. These are all consequences of the system design, not of the individual items of content.
While OFCOM’s Codes and guidance may recognise this issue, arguably they apply only once the harm threshold has been triggered. So, while the operation of the system with regard to items of content that are in themselves sufficiently harmful is caught, it is unclear whether the situation where the system plays a part in getting the content to the severity threshold is also caught. Given the sorts of considerations listed in clause 61 (as well as the Online Safety Objectives), it may be that this broader interplay between content and system is intended to be included. Yet the obligation to take types of content into account refers to the defined terms – “content that is harmful to children” and “content that is harmful to adults” – both of which have a prior severity threshold built in.
Similar points may be made with regard to cross-platform harms; for example, the funnelling of users on to other platforms (where more extreme or illegal content may be found) – and note that even the illegal content risk assessment refers only to content encountered “by means” of the platform. Does that mean just illegal content encountered on the platform, or does it extend to illegal content on other platforms where the first platform has nonetheless played a role in the illegal content being encountered?
Moreover, there is no overarching duty of care. This choice may have implications beyond complexity. An overarching duty would have the advantage of providing a clear indication of the orientation of the duty; that is, that the operator has an obligation towards user safety. While it could be argued that the ‘online safety objectives’ (cl 30) and the characteristics in cl 61 identify some design features, we have concerns about how they feed into the risk assessment and safety duties, especially as there are no quality requirements surrounding the risk assessment.
The Carnegie proposal and the White Paper (as well as the Health and Safety at Work Act) took as their starting point the obligation to take reasonable steps, implicitly referring back to principles found in tort law – that reasonable steps should be taken with respect to foreseeable harm. In the draft Bill, the obligation with regard to the safety duties is to take “proportionate” steps, rather than reasonable ones – but it is unclear what “proportionate” refers to. Current legislation identifies conditions for proportionality (e.g. the size of the platform, the significance of the threat, types of user), but those rules separately require service providers to take “appropriate measures”. It is unclear whether the draft Bill deals with whether the measures are appropriate, though the objective of the ‘effective management’ of risks may provide some comfort, as does the fact that the Codes are backed up by a “comply or explain” approach (see cl 36; see also the legal effect of codes in cl 37). The scope of the obligation does, however, depend on what are perceived to be risks in the first place (cl 9(2), 21(2)), and there are no qualitative requirements around the risk assessment; in particular, there is no requirement that the operator act reasonably in assessing whether a risk exists or not. OFCOM’s guidance (based on its risk assessment under clause 61) would not be binding (cl 62). Would this lead to operators avoiding looking at problematic aspects of their services? Given the fundamental role of the risk assessment, a weakness here could be felt through the entire system.
A general duty avoids the risk that concerns – specifically those that are not well caught by a regime in part focussed on content and content classification – fall through the gaps, whether now or as technology, or the way the services are used, changes. As we argued in our 2019 report, adopting a general duty contains an element of future-proofing, and we suggest that such a duty be reinstated, with the current duties providing the differential obligations that arise from the different types of risk and threat (for which types of content are essentially proxies).
s 368Z1(4) Communications Act 2003