July 14, 2021

Racist abuse of footballers using social media and the draft Online Safety Bill

by Professor Lorna Woods, Professor of Internet Law, University of Essex, William Perrin, Trustee, Carnegie UK Trust and Maeve Walsh, Carnegie Associate

Online platforms may have terms and conditions that prohibit racism. But the platform owners have made choices about their system design and resourcing that leave them unable to enforce those terms and conditions. A regulatory regime should correct those choices when they are plainly against the public interest.

We look here at how the draft Online Safety Bill proposes to do that.

Large volumes of racist material have been targeted at England footballers. While only some of this will be illegal hate speech, much of this abuse is likely to cause significant psychological harm to the individual victims and to people in minority groups, both individually and through its existence in aggregate. There is also a harm to society in widening racial divisions, which may be the wider intent of some racial abuse. As the UN Secretary-General Antonio Guterres said, hate speech “undermines social cohesion, erodes shared values, and can lay the foundation for violence, setting back the cause of peace, stability, sustainable development and the fulfilment of human rights for all”.[1]

Two sections of the draft Online Safety Bill could address this abuse:

  1. Provisions relating to ‘Illegal content’ (Clauses 5(2), 7, 9 et al)

A number of offences could cover racism on social media. These range from the relatively low threshold of s.127 of the Communications Act 2003 – messages that are ‘grossly offensive’ or of a ‘menacing character’ – to patterns of behaviour that constitute harassment (though these offences might not recognise the racially based nature of the harm), through to content that triggers offences dealing with incitement to violence and the stirring-up offences.

Companies should perform an ‘illegal content risk assessment’ and have processes in place to tackle such risks – see Clauses 7(8) and 9. Companies are required to examine their systems and processes to make them effective in reducing the risk of criminal content circulating, including measures to take down illegal content rapidly on notification or discovery.

BUT, as the debate over the last few days shows, there is little clarity on how the criminal law applies on social media. The background is an overstretched police service and CPS with a historic pattern of both under-enforcement and over-enthusiastic enforcement (for example, the Twitter joke trial). This in turn makes it hard for companies to assess what content is likely to be approaching the threshold for action.

So, while the obligations of the companies might be clear, the extent of the trigger for those obligations is much less so.

  2. Racism that falls short of a criminal threshold – duty to protect adults (Clauses 7(6), 7(7) and 11)

The treatment of this in the draft Bill is weak. There are two aspects, either of which would fall under the weak Clause 11 duty to protect adults’ online safety:

  • Racism can be classified by the Secretary of State as ‘priority content that is harmful to adults’ (subject to approval by Parliament). In contrast to the position on specifying priority illegal content, the Secretary of State has to consult OFCOM before making regulations about priority content that is harmful to adults (Clause 47), but is not bound to follow OFCOM’s views.

The Secretary of State has been reluctant even to give an indicative view on what the priority content would be. We understand that much work is underway in government to gather evidence on priority content, but we have no indication of when the Secretary of State might set out his thinking. There is little to stop him, even at this early stage, from indicating, for example in a speech to Parliament, some of the areas that are of particular importance. It would make scrutiny much easier.

  • The ‘adults’ risk assessment’ could also reveal racist content that is harmful to adults.

In either case, the weak Clause 11 merely requires a company to specify in its terms of service how priority content that is harmful to adults is to be ‘dealt with’ by the service, and that the terms of service be applied ‘consistently’. It applies only to Category 1 services, which are likely to be just the largest platforms.

There are several issues with the way harm to adults is handled in the Bill.

‘Dealt with’ (unqualified) in Clause 11 is vague: you can ‘deal with’ something, even if it has come up in a risk assessment, by looking at it and deciding to do nothing. We think the obligation to ‘deal with’ content is intended to mean that, if the terms of service are inadequate or are not enforced such that harm to adults occurs, OFCOM can step in and request changes. But this is not drafted clearly and is in stark contrast to, say, the illegal content risk management clauses. We expect this to be improved or elaborated upon in the next draft of the Bill.

A second issue is that relying on terms of service suggests that the regulatory focus is on the point at which companies tell users what they can and cannot do – content moderation policies and take-down rules. What it does not seem to do is require companies to change their upstream systems and processes, which are more likely to be effective at scale than tighter terms of service. Such mechanisms include giving people tools to protect themselves, not algorithmically promoting racist content, not recommending groups, and so on.

Relying on ex post reactions such as take-downs not only ignores the range of possible interventions (most of which would be more proportionate from a freedom of expression perspective[2]) but also gives rise to other difficulties. The speed with which the abuse mounted up last night demonstrates how, if thousands of pieces of individual content simultaneously breach their terms and conditions, platforms using the inadequate systems they currently run will be unable to respond at the scale required to neutralise the aggregate impact. Moreover, platforms may assess the impact of each of those posts individually and fail to identify the cumulative impact of the abuse.

A more effective solution would be (as we have always suggested) to have a general duty to take reasonable steps to prevent reasonably foreseeable harms, and for the Secretary of State and Parliament to give an early steer as to their priorities.

 

[1] See https://www.un.org/en/genocideprevention/hate-speech-strategy.shtml

[2] The UN Special Rapporteur for Freedom of Expression, David Kaye, recognised that these other options were available and allowed for tailoring to the severity of the speech: Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (A/74/486), 9 October 2019, para 51. https://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/ReportOnlineHateSpeech.aspx