January 29, 2021

Freedom of Expression, Speech Rights & Modern Regulation

by William Perrin, Trustee, Carnegie UK Trust, Professor Lorna Woods, University of Essex and Maeve Walsh, Carnegie Associate

A number of social networks and web services banned President Trump, suggesting that there was a risk of further incitement to violence. Twitter was the first to act, followed by several others. Although these were American companies banning an American politician, there were reactions around the world covering a full range of emotions. Notably, some commentators claimed that the bans were a violation of freedom of expression. In Europe, Chancellor Merkel appeared not to agree with banning President Trump. In the United Kingdom, two Cabinet Ministers expressed concerns, including (in an article in The Times) DCMS Secretary of State Oliver Dowden, who is responsible for online harms regulation:

“The idea that free speech can be switched off with the click of a button in California is unsettling even for the people with their hands on the mouse. Just this week, Twitter’s chief executive, Jack Dorsey, said that while he felt that it was right for his platform to ban Mr Trump, leaving platforms to take these decisions “fragments” the public conversation and sets a dangerous precedent.”  

Against this background, we examine the legal position from the perspective of the European Court of Human Rights and consider the implications of that jurisprudence for government policy. The case law does not give us one clear answer to when and how speech rights are engaged, as the following eight propositions show.

  • Article 10 does not grant an automatic right to a forum, especially where the forum is not a public sector space.  In Appleby, the European Court of Human Rights found that peaceful protesters had no right to hand out leaflets in a shopping centre against the wishes of the owner.  In such cases the Court engages in a balancing exercise, in which the nature of the speech (ie whether it is political speech) is one factor.
  • A key factor in determining whether a right exists in these circumstances is whether there is an alternative mechanism for exercising the right to freedom of expression. The alternative need not be identical in features or reach: there is no right to use the shopping centre merely because it is the easiest and most effective venue when the high street is available (Appleby), and a television company applying for a licence cannot demand a terrestrial TV licence providing access to the airwaves in preference to a cable licence (Tele 1 Privatfernsehgesellschaft).
  • Even as regards public spaces, people accessing them can be expected to follow rules relevant to the space (Pussy Riot case).
  • Access to a forum, and application of any rules, should be assessed in a non-discriminatory manner.
  • Political speech is regarded as particularly important and deserving of a high level of protection.  In Castells, the Court said “while freedom of expression is important for everybody, it is especially so for an elected representative of the people. He represents his electorate, draws attention to their preoccupations and defends their interests” (para 42).  The Court has reiterated this point in a number of cases (eg Lombardo v Malta; Piermont v France).
  • But the right to freedom of expression is not unlimited, even for politicians (or journalists), and the Court has recognised that politicians also bear greater responsibility. This is particularly the case when the speech constitutes hate speech or incitement to violence, as can be seen, for example, in the views expressed in the cases of Le Pen and Féret.
  • Any sanctions in relation to speech should be proportionate. So, in Medya FM Reha Radyo ve İletişim Hizmetleri A.Ş. v. Turkey a year-long suspension of a broadcasting licence was found to be justified: the broadcaster had repeatedly broken broadcasting rules and in this instance had broadcast comments considered capable of inciting people to violence, terrorism or racial discrimination, or of provoking feelings of hatred. By contrast, in Nur Radyo Ve Televizyon Yayıncılığı A.Ş. v. Turkey a shorter suspension (180 days) was not justified. The broadcaster had described an earthquake in which thousands of people had died in the Izmit region of Turkey in August 1999 as a “warning from Allah” against the “enemies of Allah”, who had decided on their “death”.  Although the comments might have been shocking and offensive, they did not in any way incite violence and were not liable to stir up hatred against people who were not believers. In another case, a prohibition on advertising on a billboard was acceptable when the information could be made known over the internet (Mouvement Raëlien Suisse); this is part of a line of cases where the Court has found no violation where there are restrictions on manner and form but the expression itself has not been prohibited from taking place.
  • Hate speech and incitement to violence may even fall outside the protection of freedom of speech, as can be seen in a number of cases: see eg Norwood v UK; M’Bala M’Bala v France.

We might say from this that, if a speaker has access to a number of platforms, then the balance of rights probably comes down in favour of the (private) platform, especially in respect of speech that does not comply with platform rules. Conversely, we would express concern if the rules had been applied in an unequal way, though this may be a criticism about under-enforcement as much as over-enforcement. Finally, the nature of some of the speech might be such as to take it outside speech protections in any event.

The law outlines what is possible but does not address what choices politicians and Parliament might make within it. None of this prevents States from regulating access to private platforms (similar to the universal service obligation found in the postal and telecommunications sectors and ‘must carry’ obligations in broadcasting); in some instances they might even be obliged to intervene. There are three main questions to consider:

  • Is all speech the same: is there a difference between (non-violent) speech on a matter of political importance; the sharing of information about the stock market; and chatter about the colour of a dress?
  • Is access to a platform the same as prominence/reach – and how should a platform distinguish between speakers who, presumably, have an equal right to speak?
  • Are all platforms the same (contrast, for example, Twitter, Gab and individual instances of Mastodon – these are all micro-blogging sites, but different in size and motivation); might they be said to have speech rights too?

The UK government’s online harms proposals are notable for steering clear of new rules that might apply to politicians and political discourse. The government had an opportunity to set rules in this area in September 2020 when responding to the report of the Lords Committee on Democracy and Digital Technologies, chaired by Lord Puttnam. But it did not commit to reforms that might have addressed the Trump issue, referring instead to the generality of the then draft online harms work.

‘The government’s Online Harms White Paper consultation response… fails to make any mention of a “duty of care” towards democracy itself. Technology is not a force of nature and online platforms are not inherently ungovernable. They can and should be bound by the restraints that we apply to the rest of society.’

Lord Puttnam, House Magazine, 11 January 2021

The UK government’s proposals are intended to prevent reasonably foreseeable physical and psychological harm to individuals; this includes hate speech. The government chose not to tackle political disinformation or harms to society. It is not clear that even action by a British politician similar to President Trump’s would trigger obligations on a platform under the duty of care that the government proposes. The proposed overarching framework might engage with the problem where harm to individuals is evident, such as threats to injure a particular politician, or hate speech.  Given that the current proposals seek to exclude political disinformation and harms to society, attempts to overturn a democratic election would only fall within the regime to the extent that they result in significant physical violence to someone.

Even assuming that this type of harm falls within the regime, there are some tensions in the position taken in the full response and, especially in the light of the events of 6 January, these should be clarified by the government.  In the final response to the consultation on the Online Harms White Paper, the government brings forward three proposals that are relevant to the Trump case: 

Firstly, the government makes the high-level proposal that:

‘Companies will not be able to arbitrarily remove controversial viewpoints and users will be able to seek redress if they feel content has been removed unfairly.’ (Full Government Response to the Online Harms White Paper: CP 354, page 33, para 2.34)

The government does not expand on this potentially significant and powerful phrase. It stands in isolation in the text, so we do not know at this stage how it is intended to work. It seems similar to the non-discrimination point in the Convention free speech jurisprudence. We note that in the USA former Attorney General Barr proposed reform of Section 230 of the Communications Decency Act to prevent ‘arbitrary content moderation decisions’, which would also seem to fit with concerns about non-discrimination in the ECHR jurisprudence. We note that Mr Zuckerberg said on January 27th:

‘we plan to keep civic and political groups out of recommendations for the long term, and we plan to expand that policy globally…. we’re also currently considering steps we could take to reduce the amount of political content in News Feed as well. We’re still working through exactly the best ways to do this’

The second proposal is that Ofcom will be responsible for ensuring that platforms have systems in place to enforce their own terms and conditions as part of overall harm reduction:

‘Regulation will ensure transparent and consistent application of companies’ terms and conditions relating to harmful content. This will both empower adult users to keep themselves safe online, and protect freedom of expression by preventing companies from arbitrarily removing content.’ (Full Government Response to the Online Harms White Paper: CP 354, page 27, para 2.10)

Again, this is less about what will be removed and more about consistency; in this, the proposal seems to recognise the interest of each platform in setting the terms on which users engage in its space.  Some platforms may introduce rules in relation to politicians’ speech generally. Some Category One platforms may already have their own codes of practice for some UK elections.  This means that Ofcom, should it have jurisdiction, would in fact oversee the processes of, say, Twitter, were it to make a decision about a British politician similar to that taken on President Trump.  Again, this would likely be acceptable, if not desirable, from the perspective of the current Strasbourg jurisprudence.

However, the third proposal relevant to the Trump affair is the somewhat standalone statement that: 

 ‘OFCOM should not be involved in political opinions or campaigning shared by domestic actors within the law.’ (Full Government Response to the Online Harms White Paper: CP 354, page 33, para 2.81)

On a first reading, this seems to deny Ofcom the oversight role over community standards envisaged for Category One platforms as far as political speech is concerned; this would allow greater space for a platform to develop an editorial (and not necessarily politically neutral) line on enforcement. One way to reconcile the statements is to read this one as relating to the scope of the duty of care rather than its implementation. On that basis, the proposal seems intended to reinforce the point that politics in general is a societal issue, out of scope of the regime, and that it is not intended to override steps to prevent physical or psychological harm to individuals caused by domestic political actors.

The proposal reinforces that only harm to individuals is in scope, not the disinformation that created an environment in which such harm could occur.  However, questions remain. It is unclear whether Ofcom’s oversight of the implementation of a platform’s community rules extends to all of those rules, or just to the rules that pertain to content resulting in relevant harms. A narrower interpretation of Ofcom’s role leaves greater space for even Category One platforms to follow their own editorial line in implementing their rules. Does it mean that Ofcom cannot act if platforms systemically and ‘arbitrarily’ remove controversial viewpoints of domestic political actors? At face value, it suggests that Ofcom would not supervise the platforms’ election codes, leaving them as unregulated in that regard as in the USA. Where a platform is a unique form of communication, this might give rise to issues under Article 10 case law (eg Appleby), though as noted the Court has historically given a limited scope to the circumstances in which this might arise (Tele 1). Whether arbitrary enforcement against political speech, even by a private actor, is acceptable from a Convention perspective when that actor is a major platform is, however, far from certain.

Many voices have called for a more comprehensive approach to rules for political speech online, including during elections: not least the Lords Committee on Democracy and Digital Technologies, the Committee on Standards in Public Life in its Inquiry into Intimidation in Public Life, and the Advertising Standards Authority, which said in 2020 that political advertising should be regulated. Ofcom, of course, has experience of regulating broadcasters for impartiality and also in respect of broadcast political advertising rules. Working with Lord McNally, we made the modest suggestion that, during the particularly high-risk period around elections, the duty of care on platforms should address

‘threats that impede or prejudice the electoral process’

– the basis on which the Crown Prosecution Service considers electoral crime.

The Trump affair illustrates the issues raised by the many groups calling for modern regulation in this area. The latest online harms proposals need clarification in this respect. The government will come under sustained pressure to bring rules up to date.