July 6, 2023

Strengthening the user empowerment tools in The Online Safety Bill

by Professor Lorna Woods, Professor of Internet Law, University of Essex; William Perrin, Trustee, Carnegie UK; Maeve Walsh, Carnegie Associate

The Online Safety Bill starts its Lords Report Stage this week. There is much to welcome in the raft of Government amendments tabled so far – many of them responding to areas of significant cross-party pressure during the Lords Committee stage – and there are more still to come, as listed in this Government factsheet. But there are areas where the Government could go further to address fully the concerns raised by Peers to date and to ensure the regulatory regime, once enacted, is as robust as it can be. Over the next week or so, we will publish short blog posts on some of those areas and set out why we are supporting amendments from Peers to address them.

Before setting out where the user empowerment duties still need further changes, we welcome the Government's amendment to introduce a duty on platforms to assess the "incidence" of the harms covered by the user empowerment tools, along with related amendments to include the findings from those assessments in record keeping and transparency reporting. In doing so, the Government has acknowledged the very strong arguments put forward by Peers that the removal of the adult safety risk assessments had left a huge gap in the Bill. (You can read our previous analysis on this issue here.)

User empowerment tools: ensuring a choice of 'control features'

In addition to pressure over the lack of a risk assessment, the Government came under significant pressure in Committee stage debates over why there should not be a "default on" approach to the user empowerment tools (i.e. users would automatically be protected from the content listed in clause 12 when using a service and would have to actively switch the filters off, rather than vice versa). The Government yielded little in those debates, so it is welcome that Lord Parkinson has since tabled a concession (amendment 60) that would present users with a "forced choice" on whether to opt in or out of "each control feature" when they sign up to a service.

Prompting users to make an active choice to deploy the tools that will prevent exposure to harmful content removes the potential cliff-edge users face when they turn 18: moving from an online experience in which they are protected to some degree by the safety duties in the Bill into one where they face a potential deluge of the most egregious and damaging activity.
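
To make the "forced choice" concrete: it can be modelled as a tri-state setting rather than a default, where each control feature starts undecided and sign-up cannot complete until the user has made an explicit choice for every feature. The TypeScript sketch below is purely illustrative, with hypothetical names and example harm categories of our own; the Bill specifies outcomes, not implementations.

```typescript
// Illustrative sketch only: names and harm categories are hypothetical.
// "Forced choice" means no default is ever applied on the user's behalf.

type ControlFeature = "abuse" | "hate" | "suicide-and-self-harm";

const FEATURES: ControlFeature[] = ["abuse", "hate", "suicide-and-self-harm"];

// Tri-state: a feature is explicitly on, explicitly off, or not yet decided.
type Choice = "on" | "off" | "undecided";

type SignupChoices = Record<ControlFeature, Choice>;

// Every feature starts undecided; there is no "default on" or "default off".
function initialChoices(): SignupChoices {
  return {
    abuse: "undecided",
    hate: "undecided",
    "suicide-and-self-harm": "undecided",
  };
}

// Sign-up can only complete once the user has made an explicit
// choice for each control feature.
function canCompleteSignup(choices: SignupChoices): boolean {
  return FEATURES.every((f) => choices[f] !== "undecided");
}
```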

Lord Clement-Jones has tabled an amendment (number 55) to clause 12 which seeks to ensure that the user empowerment tools on Category 1 services are customisable for each harm identified by Parliament, rather than a single on/off option covering all harms.

We support this approach for the following reasons:

  • People wanting to protect themselves from harmful content should be allowed to customise their choices. A single, all-or-nothing 'off' button might deter people who know they will be harmed by one type of content but not by others; some might fear missing out on content they value.
  • The Government's reference in its amendment to 'each control feature' implies that there can be more than one. Lord Clement-Jones's amendment therefore makes it clear that Category 1 services have to offer an on/off choice for each type of harm Parliament has identified. Services may also offer a single master 'off' button if they wish, but it must sit alongside the granular controls and be no more prominent than them (see the sketch below).
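
To illustrate the design the amendment points towards, here is a minimal TypeScript sketch (again with hypothetical names and example categories of our own): each harm identified by Parliament gets its own toggle, and any master 'off' button is simply a shortcut that writes through to those toggles, rather than a separate switch that replaces them.

```typescript
// Illustrative sketch: one toggle per harm identified by Parliament.
// The master "off" button is a convenience over the granular toggles,
// not a separate control that replaces them.

type HarmCategory = "abuse" | "hate" | "suicide-and-self-harm";

const CATEGORIES: HarmCategory[] = ["abuse", "hate", "suicide-and-self-harm"];

// true = the filter for that category is switched on.
type FilterSettings = Record<HarmCategory, boolean>;

// The big "off" (or "on") button: it carries no state of its own,
// it just sets every granular toggle at once.
function setAll(settings: FilterSettings, on: boolean): FilterSettings {
  const next = { ...settings };
  for (const c of CATEGORIES) next[c] = on;
  return next;
}

// A user can still make a granular choice afterwards, e.g. switching
// one category back on without touching the others.
function setCategory(
  settings: FilterSettings,
  category: HarmCategory,
  on: boolean
): FilterSettings {
  return { ...settings, [category]: on };
}
```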

User empowerment tools: no additional cost

A further issue arising in relation to these tools is ensuring that they are freely available to all users. Amendments from Lord Clement-Jones (59, 64, 181) insert the phrase "at no additional cost to the user" into the Government's amendment (60), to ensure that users are not expected to pay extra to use the user empowerment tools provided for in the Bill.

There is currently nothing in the Bill to mandate that this should be a universal service. The Bill's drafting requires platforms to offer these tools to "all registered adult users .. [at the] earliest possible opportunity". This opens up the possibility that platforms could offer the tools as an "add-on" for paying customers and still argue that they are complying with their regulatory responsibilities: users who do not pay extra would not receive access to the same tools.
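
The loophole, and the effect of the "at no additional cost" wording, can be stated as a simple invariant on feature entitlement. In the hypothetical TypeScript sketch below (the tier names and functions are our own illustration, not anything in the Bill), a standard freemium entitlement check would let a service gate the tools behind a paid tier while still nominally "offering" them to all users; the amendment makes that entitlement unconditional.

```typescript
// Hypothetical illustration of the loophole and the amendment's effect.

type Tier = "free" | "premium";

interface User {
  registeredAdult: boolean;
  tier: Tier;
}

// The loophole: a typical freemium entitlement check. The tools are
// "offered", but only users on the paid tier can actually use them.
function canUseControlFeaturesLoophole(user: User): boolean {
  return user.registeredAdult && user.tier === "premium";
}

// With "at no additional cost to the user": entitlement cannot depend
// on what the user pays, only on their being a registered adult user.
function canUseControlFeatures(user: User): boolean {
  return user.registeredAdult;
}
```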

It is worth noting that the OSB began its life at the high point of the 'free to the user, paid for by ads' model for such services. In the last 12 months, some services have come under financial pressure: notably, Twitter began to charge for user verification and some other features, and other services have adopted versions of this 'freemium' model. At the heart of harm regulation in general, and the OSB in particular, is the 'polluter pays' principle (OECD, 1972), under which external costs are returned to the company that creates or enables them. This is regarded as the most microeconomically efficient approach to dealing with external costs that would otherwise fall upon society.

Lord Clement-Jones’s amendment focuses on new features for users required by the Bill to combat harm identified by Parliament and ensures that they do not become paid for add-ons for victims, which not all users may be able to afford with consequently adverse impacts on freedom of expression (If you can’t afford to verify then your comments might not be as widely seen, if you can’t afford to block then you may just leave the service).

These features sit within the user empowerment tools (Shield Two of the 'triple shield'): notably, the ability to turn on a control feature to limit exposure to certain types of content, the ability to filter out non-verified users, and the corollary obligation on service providers in clause 57. These duties apply only to the largest, riskiest companies in Category 1, which have the most resources.

We support Lord Clement-Jones's small amendments, which would close a potential loophole allowing commercially driven design decisions to override services' responsibility to provide a baseline level of safety for all their users.