March 22, 2018

Harm Reduction In Social Media – A Proposal

by William Perrin, a trustee of Good Things Foundation, Indigo Trust and 360 Giving and a former senior civil servant in the UK government, and Professor Lorna Woods, University of Essex.

This blog is the second in a programme of work on a proposed new regulatory framework to reduce the harm occurring on and facilitated by social media platforms. The authors, William Perrin and Lorna Woods, have extensive experience in regulation and free speech issues. William has worked on technology policy since the 1990s, was a driving force behind the creation of OFCOM and worked on regulatory regimes in many economic and social sectors while at the UK government’s Cabinet Office. Lorna is Professor of Internet Law at the University of Essex, an EU national expert on free speech and data regulation and was a solicitor in private practice specialising in telecoms, media and technology law.

The UK government’s Internet Safety Strategy Green Paper set out some of the harms to individuals and society caused by users of social networks. As we set out in our opening blog post, we shall describe in detail a proposed regulatory approach to reduce these and other harms and preserve free speech. Our work is based in UK and European law and policy, drawing as much as possible upon proven social and economic regulatory approaches from other sectors. Our aim is to describe a functioning, common sense co-regulatory model. The new model would require new legislation in the UK but we believe it to be compatible with European legislation. The approach could be adopted at a European level, but would then require European legislation.  Much of the broad thrust of the model could be employed voluntarily by companies without legislation.

Freedom of expression is at the heart of our work, but (for American readers in particular) it is important to note that in the UK and Europe, freedom of expression is a qualified right. Not all speech is equally protected, especially when there is an abuse of rights.  And speech may be limited in pursuit of other legitimate interests, including the protection of the rights of others. There is also a positive obligation on the state to regulate or safeguard everyone’s right to freedom of expression and protect diversity of speech. This means that regulation, carefully crafted and proportionate, is not only permissible from a freedom of expression perspective but may be desirable.

Much existing debate about new regulatory models for social media has been framed by the question of whether social media platforms fall within the exception provided for neutral hosts or are instead publishers. This debate goes nowhere because both models are an ill fit for current practice – a new approach is needed.

British and European countries have adopted successful regulatory approaches across large swathes of economic and social activity.  We judge that a regulatory regime for reducing harm on social media can draw from tried and tested techniques in the regulation of broadcasting, telecommunications, data, health and safety, medicine and employment.

One approach we are considering is the creation of a new regulatory function responsible for harm reduction within social media. In such a model, all providers of a social network service would have to notify the regulator of their work and comply with basic harm reduction standards. The largest operators – as they are likely to be responsible for greater risk – would be required to take more steps to limit harm. Our early thinking is that the best route to harm reduction would be to create a positive statutory duty of care owed by social network operators to the users of their services. This duty of care would relate to the technology design and the operation of the platform by its owners, including the harm-reducing tools available to protect users and the platform’s enforcement of its own terms and conditions. Parliament and the regulator would set out a taxonomy of harms that the duty of care was intended to reduce or prevent. This could contain harms such as the bullying of children by other children or misogynistic abuse, which are harmful but not necessarily illegal.

Oversight would be at a system level, not regulation of specific content. The regulator would have powers to inspect and survey the networks to ensure that platform operators had adequate, enforced policies in place. The regulator, in consultation with industry, civil society and network users, would set out a model process for identifying and measuring harms in a transparent, consultative way. The regulator would then work with the largest companies to ensure that they had measured harm effectively and published harm reduction strategies.

The duty of care would require the larger platforms to identify the harm (by reference to the taxonomy) to their users (either on an individual or on a platform-wide basis) and then take appropriate measures to counter that harm. New law might identify the framework for understanding harm but not detailed approaches or rules, instead allowing the platforms to determine, refine and improve their responses to reducing harm. These could include technology-based responses or changes to their terms of service. Specifically, and crucially, new law would not require general monitoring of content nor outlaw particular content. The social media platforms would establish a cycle of identifying harm, measuring it, taking action to reduce it, assessing the impact of that action and taking further action. If the quantum of harm does not fall, the regulator would work with the largest companies to improve their strategies. This is similar to the European Commission’s approach to reducing hate speech. If action is not effective after a reasonable interval, the regulator would have penalties to apply.

Our view is that this process would be more transparent, consistent and accountable, and less one-sided, than the ex cathedra statements currently used by the platform operators to explain their harm reduction approach.

This approach would be implemented in a proportionate way, applying only to the largest platforms, where there is the most risk – broadly speaking Facebook, Twitter, YouTube, LinkedIn, Instagram, Snapchat and Twitch. Our general view is that smaller platforms, such as those with fewer than 1 million members, should not be covered by the above but should be required to survey harms on their platform annually and publish a harm reduction plan. People would be able to bring networks that they felt were harmful to the regulator’s attention.

Sadly, there is a broad spectrum of harm, and there may be a need for specific harm reduction mechanisms aimed at particular targets (whether vulnerable groups or problematic behaviours). In our work, we are also considering whether a regulator should become the prosecutor or investigating agency for existing crimes in this area that the police have struggled to pursue – stalking, harassment, hate speech and so on. In relation to speech that is not obviously criminal, we are also considering whether an ombudsman function would help individuals better resolve disputes – perhaps helping to avoid over-criminalisation of ‘robust’ speech (e.g. in the context of s.127 of the Communications Act 2003). The UK Government has asked the Law Commission to examine this area and we shall submit our work to them.

This represents our early views, which we shall be working up over the coming weeks. We shall blog further about the work for the Carnegie UK Trust as we progress towards a publication for the Trust in the spring. We should be grateful for views sent to [email protected]


Any views or opinions represented in this blog are those of the authors, and do not represent those of the people, institutions or organisations with which they are affiliated.