Overview

The Government has removed all provisions on “content harmful to adults” – so-called “harmful but legal” content – and replaced them with what it calls a “Triple Shield”: the existing duties on platforms to remove illegal content, plus new terms of service duties (which broadly mean that companies must enforce their Terms of Service (ToS) consistently, acting on harmful content mentioned in those terms while ensuring that content not listed is not taken down) and user empowerment tools (allowing users to protect themselves from harmful content, a list of which is now in the Bill). Significantly, the duty on platforms to assess the risk of harm to adults from content that is harmful but not criminal has been removed without justification. This leaves a hole in a regime that otherwise relies on continuous, dynamic risk management to understand threats and keep up to date. The Government has instead inserted into the Bill a static list of harmful issues. Companies must then give users the means to limit their exposure to these issues through the tools provided (e.g. a filter), but users will need to turn these tools on.
Companies do not have to tell users which of these types of harmful content are prevalent on their sites, undermining users’ ability to choose a platform in the light of the facts. Moreover, there are no minimum standards set for a service’s Terms of Service; they do not even have to deal with the list of issues covered by the user empowerment tools. This is a large hole in a regime built on enforcement of Terms of Service.

Many of these points were made by Peers during the Second Reading debate, including: Baroness Merron for Labour, Lord Clement-Jones and Lord McNally for the Liberal Democrats, Baroness Morgan (Conservative) and cross-benchers Baroness Bull and Viscount Colville.

Analysis

(See Prof Lorna Woods’ blog for the LSE, which sets out in more detail how these changes affect the original Bill, here.)

The Government announced its proposed amendments to the Bill, removing the adult safety duties (so-called “legal but harmful”), when it returned for Commons Report in December (see press release, WMS, and final Committee amendment paper). They were all passed in the recommittal Committee and confirmed in the January Commons Report debate. We provided detailed analysis of the amendments that were recommitted to the Public Bill Committee in December in our written evidence to that Committee, and summarise it below.

The “Triple Shield”

The main change to the Bill is the removal of the “harms to adults” safety duty, its risk assessment and the obligation to publish a summary of that assessment (old clauses 12 and 13). These clauses, also known as “harmful but legal”, were the subject of much controversy in recent months. However, the perception that took root in some parts of the political discourse – that platforms would be required to take down legal content – was completely unfounded. (Further detail is available in our blog from November, here.)

The Government in its place proposes a “Triple Shield”, comprising the following principles for protection of adults:

  • Content that is illegal should be removed. This existing duty remains unchanged.
  • Terms of service should be enforced: new clauses 64 and 65 include a “duty not to act against users except in accordance with terms of service” and a further duty to “inform users about their rights to bring a claim for breach of contract” if platforms act against those terms. The objective of both new duties is to ensure that, if platforms specify in their terms of service that they do not allow content of a particular type on their platforms, they enforce these terms and consistently restrict access to or remove that content. However, if (legal) content is not specified in their terms of service, then service providers cannot take it down or suspend accounts as a result (with some exceptions for complying with court orders and the like). The duties provide greater clarity for users as to how they can take action if that happens, and there is a requirement on OFCOM (clause 66) to produce guidance for both new duties.
  • User empowerment tools: new clause 12 states that services should provide “features which adult users may use or apply if they wish to increase their control over content” listed in the clause, including legal content related to suicide, content promoting self-harm and eating disorders, and content that is abusive or incites hate on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation. This would include misogyny. Notably, it does not seem to give adults the choice as to whether to engage with pornography, violent material and a whole range of other material that is subject to controls in the offline world; indeed, if service providers offer tools for a broader range of content but do not specify that in their terms of service, they could fall foul of the clause 64 rule against restricting content outside those terms. Category 1 services will still need to give users the option to verify themselves and to choose not to interact with unverified users.

Along with many in civil society, we do not agree with the Government’s decision to remove the adult safety duties. See, for example, the recent representations made by the Samaritans setting out how turning 18 did not stop young people being vulnerable to self-harm or suicide content. As their Chief Executive Julie Bentley says: “This is not about freedom of speech. It is about protecting lives by restricting access to this content. When they are vulnerable, they are not able to protect themselves from this content. When people are in a vulnerable position, they are not able to make those safe choices for themselves that brings in that protection. Whereas if it wasn’t accessible to them, it would keep people safe.” (Telegraph, 15th January 2023)

The Government has not yet provided adequate evidence that its alternative approach will work to protect adult users online. Indeed, the most recent data from DCMS’s own public attitudes to digital regulation survey show that the proportion of UK adults who do not feel safe and secure online is increasing: from 38% in Nov/Dec 2021 to 45% in June/July 2022.

As we set out in our written evidence to the recommittal Committee, the adult safety duty has been replaced by two new duties (clauses 64 and 65) which are focused narrowly on banning, take-down and restriction, rather than upstream design changes or softer tools that might be more effective and allow different types of users to coexist on the same service more easily. (These are the kinds of changes to service design and functionality that we envisaged would be encouraged by our proposals for a “systemic” approach to regulation – i.e. one that bites at the level of systems and processes, rather than one based on individual items of content.) It could be argued that this shifts the emphasis of the regime away from systems and the role of the platforms themselves. In relation to large and risky user-to-user services (Category 1), the Government seems to expect that if those providers claim in their terms of service to be tackling harmful material, they must deliver on that promise to the customer/user (cl 65(3)). There are significant problems with this:

  • There is no minimum content specified for the terms of service for adults. There are two potential consequences of this:
    • companies’ Terms of Service could become much longer and ‘lawyered’, increasing the likelihood that companies will use non-take-down methods to control content (though cl 65(1) requires the terms of service to be clear and accessible); or,
    • faced with stringent regulatory application, companies might make their Terms of Service shorter, removing harmful material that is hard to deal with, because they might now be in breach of their duties if they fail to deal with it. Trying and failing to handle difficult material could lead to competitive and reputational problems, and OFCOM will require publication of breach notices (following amendments to be introduced in the Lords). By comparison, companies that choose in their Terms of Service to do nothing about hard-to-handle material have an easier life: they might suffer reputational hits from content harms if or when those become public – for example, through whistleblower action or media reporting – but will face no action from the regulator under the new duties.
  • Companies now have complete freedom to set Terms of Service for adults – and those Terms of Service may not reflect the risks to adults on the service. Service providers are not obliged by the new clauses to include provisions relating to the list of harmful content proposed by the Government for the user empowerment duties (see below), although they are required to include provisions on how the user empowerment duty is being met (cl 12(5)). Moreover, the removal of both the risk assessment for harms to adults (previous cl 12) and the previous obligation to summarise and publish the results (cl 13(2)) means that users will lack vital information needed to make an informed choice as to whether they want to engage with the service.

Reinserting a risk assessment

The Government has not explained why it has removed the risk assessment: there was no discussion of the removal in the media material or the WMS, nor is there evidence justifying such a change. Under a risk-based, systemic regime – such as that which underpins the Online Safety Bill – an exploratory stage of risk assessment is essential to work out what harm occurs, affecting which people and with what severity. Without this, a company and the regulator cannot work out what needs mitigating or how that might be done, beyond relying on the blunt tool of content take-down. A ‘suitable and sufficient’ risk assessment will be forward-looking, even when taking into account knowledge about problems and issues that have already arisen. In an area where new harms arise and spread very quickly, companies’ own risk assessments will provide vital intelligence for the regulator and equip it to advise the Secretary of State on how to stay on top of new threats.

The Online Safety Bill has several major risk assessments. First, OFCOM has to perform overarching risk assessments on the key areas of harm: priority offences (listed in Schedules 5-7), illegal content more broadly, and harms to children (cl 89). OFCOM has strong general information-gathering powers which should help it with this work (we note from whistleblowers such as Frances Haugen that larger companies hold information about harms on file). OFCOM would use those assessments to inform all its further work, and will then write guidance, based on the overarching risk assessments, for service providers to carry out risk assessments of their own in relation to illegal content and children.

The removal of the requirement on larger Category 1 companies to carry out a “harms to adults” risk assessment, along with the requirement to tell people what that assessment had found, was made with no evidence or explanation of the problem the amendment sought to address.

Instead of starting with a risk assessment to identify what is really happening, the Government has set out some detailed rules about problem areas as part of its Triple Shield. Clause 12 – containing the user empowerment duties, which require people to be provided with tools to limit their exposure to certain content – now amounts to an a priori, exhaustive list of things the Government thinks are harmful across all Category 1 services. It is not a particularly bad list, but it has omissions – including harmful health content, which the Government had indicated in July (WMS HCWS194, 7 July) would be included in the priority harms to adults list. Other issues that remain absent are climate change disinformation and hateful extremism. The Government’s list is not the product of a risk assessment informed by the sort of information OFCOM would be able to extract from companies. Such an assessment would be particularly beneficial, for example, in relation to radicalisation and (non-terror) extremism, which are not well covered in the Bill.
The lack of a risk assessment from companies, with the facts published, is a substantial step backwards. Without a risk assessment the regime cannot really be said to be a risk management one. Moreover, without a risk assessment it is difficult to tell over time whether the user empowerment tools really are effective. Depriving people of basic information about the nature of harms on a service undermines transparency and people’s ability to protect themselves and manage their own risk – which is precisely the approach the Government says it is now taking in respect of harm to adults. “Bad” providers can remain in denial about the extent of harms to adults, and OFCOM will have only a narrow view of those harms, with no formalised early-warning system for emerging harms that are not listed in new cl 12. This is a step back to old-fashioned detailed rule-making, a structure that we know tends to fail and that emphasises content and take-down over more systemic approaches seeking to tackle providers’ role in creating and exacerbating problems online.

The user empowerment duties, along with the enforcement of the Terms of Service, do not require a risk assessment. The risk assessments for illegal content and children remain in the Bill. The Government should be pressed on why there is now no requirement for a risk assessment for wider harms to adult users.

User empowerment duties: default “on”

As we set out above, the tools now proposed in clause 12 reflect an a priori view of the Secretary of State that the content referred to is harmful and that people need to be provided with tools to protect themselves. The new duties provide that an adult should be able to find such tools easily and turn them on. We note that in a number of cases people at a point of crisis (suicidal thoughts, eating disorders, etc.) might not be able to turn the tools on because of their affected mental state; for others, having the tools on by default spares them from having to engage with the content before they can use the tools at all. Given that a rational adult should be able to find the tools easily, they should be able to turn them off just as easily. The existence of harms arising from such mental states tips the balance in favour of turning the tools on by default. In the work we did with Anna Turley MP in 2016, we proposed an abuse blocker that was on by default in the Malicious Communications (Social Media) Private Member’s Bill 2016:

‘Operators of social media platforms ..must have in place reasonable means to prevent threatening content from being received by users of their service in the United Kingdom during normal use of the service when the users― (a) access the platforms, and (b) have not requested the operator to allow the user to use the service without filtering of threatening content.’
