June 18, 2019

The Online Harms White Paper: a summary response from the Carnegie UK Trust

by Professor Lorna Woods, William Perrin and Maeve Walsh

The Online Harms White Paper published by the UK Government in April is a significant step in attempts to improve the online environment. Given the White Paper’s breadth of scope, the complexity of the issues tackled and the challenging political environment within which it was drafted, it was hardly to be expected that the White Paper would be perfect. Indeed, it is not a typical White Paper; it expressly raises questions for consultation, so that in some respects it reads more like a Green Paper. We will be submitting our full response on all these questions to the Government by the end of the consultation period (1st July).

This blog focusses on the issues that we feel need further consideration, rather than on those that we think do not. Our principal concern relates to the meaning of the statutory duty of care put forward by the Government. There appears to be an emphasis on detail in the codes of practice – perhaps to satisfy strong voices – without basing them on a sufficiently clear model of the duty of care.

 

Spelling out a Systemic Duty of Care: the model of platform responsibility

The White Paper says in paragraph 3.1 that:

The government will establish a new statutory duty of care on relevant companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services … This statutory duty of care will require companies to take reasonable steps to keep users safe and prevent other persons coming to harm as a direct consequence of activity on their services.

To some extent, this may not appear that different from what we proposed in our work for Carnegie UK Trust, borrowing from the language of the Health and Safety at Work Act 1974. What is less clear in the White Paper is the reason for the platforms’ responsibility in this context and, consequently, the sorts of steps that they might be required to take. The design choices made by the companies in constructing these platforms are not neutral; they have an impact on content and how it is shared. By contrast, older models (e.g. the e-Commerce Directive) have not expressly recognised this role in contributing to the creation of the problem but have instead limited the role of the platforms to take-down and other ex post content control mechanisms (e.g. moderation). It is here that the White Paper is not clear about the types of steps being required of companies.

Based on the evidence of the Secretary of State to the DCMS Select Committee on 8 May 2019, we accept that the White Paper intended to move to a more systemic approach, in that he referred to the responsibility of the platforms. It is also possible to point to elements in the White Paper that reflect a recognition that the companies contribute to the development of the problem and therefore need to take steps earlier on, in the system design process. For example, the White Paper at paragraph 3.16 notes that the aim of increased transparency is to get companies to ‘take responsibility for the impacts of their platforms and products on their users’ – though this could be referring to the negative impacts from the content of others, rather than the impact of platform design on the communications choices users make. Perhaps more tellingly, the White Paper refers to safety by design, which is described (but not elaborated) in paragraph 8.1. A number of the codes refer to this principle, as well as to other requirements that could mitigate the creation of the problem in the first place. They are not, however, linked to a clear description of the responsibility model, and so the point rather gets lost.

 

The Codes of Practice

The White Paper envisages the existence of codes of practice. In itself, this is not a problem. Codes can be a useful mechanism for dealing with technical issues and allow the system to keep up to date in a swiftly changing environment. The difficulties arise from the way the proposed codes are drafted and structured. The Government has placed heavy textual emphasis on codes of practice to implement the duty of care. These are organised by reference to 11 different types of content, each with different specified actions that must be taken into account by the relevant operators. It may be that this approach was adopted as a vehicle to demonstrate to lobby groups that particular concerns would be met. In the codes that the Government chose to elaborate, there is undue emphasis on notice and take-down processes, with the unfortunate consequence that the Government appears to prioritise these over the safety-by-design features inherent in a systemic statutory duty of care. Even if compliance with these codes is not in itself enough to satisfy the statutory duty of care, it is clear that, in the view of the Government, the codes set out important steps. The Secretary of State spelled this out in his evidence to the DCMS Select Committee:

We are not saying to online companies, “You have a duty to comply with the codes of practice”. We are saying, “You have a duty to comply with that duty of care. The regulator will hold you to account for whether you do so….” … But it is the duty of care that is the significant case, and that has to be there because the guiding principle for me is that… [q. 367]

The focus in the draft codes on different types of content, while it allows for differentiation in the intensity of action required of the operators, has the unfortunate side effect that platform operators will need to understand the boundaries between these different types of content in order to apply the appropriate code. In our view, cross-cutting codes which focus on process and the routes to likely harm would be more appropriate; such an approach also appears to be well within the bounds of what the Secretary of State envisages.

In terms of the actions that the White Paper envisages the operators taking, as noted, these tend towards take-down and moderation rather than towards the mechanisms that incentivise certain forms of content. Worryingly, there are references to proactive action in relation to a number of forms of content (and not just the very severe child sexual abuse and exploitation and terrorist content), which in the light of the emphasis in the codes could be taken to mean a requirement for upload filtering, and general monitoring to support it. While the White Paper acknowledges that Article 15 of the e-Commerce Directive prohibits general monitoring, it is weak in its explanation of how it resolves the conflict between these two positions.

The Government proposes to draft some codes itself. We cannot support the prospect of a Home Secretary drafting and approving codes of practice in relation to speech, even the most extreme and harmful speech. This is not a role that should be fulfilled by the executive. At most, it could be the responsibility of Parliament or, more likely, an independent regulator, after consultation with relevant bodies including, for example, the police and security services, the Crown Prosecution Service and perhaps also the Home Secretary.

The level of detail in the draft codes, together with the possibility of executive control in this area, raises questions about the independence of the regulator. A better route would be for the Government to appoint a (shadow) regulator to take this aspect of the work forward as soon as possible, working in consultation with the relevant parties: the Government, but also civil society and the companies involved in the sector. Such an approach would give the parties a sense of practical and emotional investment in a long-term work programme, as well as supporting the independence of that process. The outcome would be likely to be more workable in practice too. In sum, we are concerned that the Government’s framing of its proposals in the White Paper has significantly reduced the space for public debate and consensus building – led by the appointed regulator and at arm’s length from the legislature and the executive – and will instead delay the introduction of the enabling primary legislation as various groups fight over second-order detail.

 

The Regulator

The Government has not given sufficient reasons to justify consulting over whether the regulator should be a body other than OFCOM. If the Government is serious about the urgency of tackling harms, it should allocate the role, and the resources to support it, to an existing regulator immediately. The Commons Science and Technology Committee recommended that OFCOM be in place, with powers, by October 2019. While this suggestion may have been ambitious in terms of timescale, OFCOM has a track record of effective engagement with some of the world’s biggest media groups; a new body, with no track record, would take years to earn a sufficient reputation to be taken seriously. A ‘shadow’ process led by a nominated regulator – one that informs legislation and engages the parties in solving problems outside of Government – is vital to the regime working, and would start to tackle problems now.

The failure to name the preferred regulator, or to be clear on the timescale in which the legislation will be brought forward and the identified regulator set to work, is a significant weakness.

 

Scope of Coverage

The White Paper would have been ‘more white’ and ‘less green’ had it been clearer about which services are in scope and which are out. Although paragraph 4.1 contains a broad definition of the services in scope (backed up by a non-exhaustive list in paragraph 4.3), there are still areas of concern and confusion. We note that the very breadth of the proposed regime may give rise to issues in understanding how the duty of care applies in each case (a problem compounded, as noted, by the content-focussed approach of the draft codes).

We welcome the explicit inclusion of messaging, given that some private messaging groups may run into the hundreds if not thousands of users, and we acknowledge the Government’s willingness to engage on the difficult issue of where the boundary with private messaging lies. It is important to remember, however, that one-to-one communications have traditionally formed part of constitutional and international law-based privacy guarantees, and any state intrusion into that space must be limited, clearly justified and subject to safeguards.

There are a number of areas that we explicitly raised in our work that are not drawn out in the document. As we understand it, the search industry (i.e. Google) is in scope, as are messaging and user-generated content in games. Whilst there are some issues with harm in these areas, we feel that the Government should take an overall risk-managed approach and focus on the bigger risks of harm in new social media. We also reiterate that the existing media, including their below-the-line comments, should not be dealt with by this mechanism.

The scope of harm covered is not clear. We appreciate that a non-exhaustive approach may be necessary in a changing field. There are, however, some internal tensions regarding the nature of the harms that are definitely in scope (e.g. the White Paper is not clear on the nature of public or societal harms, such as those caused by disinformation and misinformation). In our work we suggested that harms go beyond those that attract great media attention. In particular, economic harm over the internet (fraud and scams) is Britain’s biggest category of property crime, and enforcement is ineffective. The Government explicitly rules out including economic harm, and we think this is a mistake. The White Paper also avoids any mention of misogyny as a hate crime, despite the Government having committed to Stella Creasy MP and others that it would begin a process of making misogyny a hate crime.

The distinction between clearly defined and less clearly defined harms is not helpful. Even assuming the well-defined harms to be so, there will always be boundary cases needing to be assessed. In our view, many of the difficulties around harm can be ameliorated by focusing on the means by which types of harm are likely to arise. Taking a real-world analogy: if an employer sees a floorboard sticking up, that person would not think “will someone break their leg or just twist their ankle?” but would simply ensure the floorboard is fixed. The process is about the identification of something causing a risk of relevant harm; the precise nature of that harm does not need to be identified or quantified in advance.

 

Conclusions

We reiterate that we welcome the White Paper, a policy document that broke new ground and contains a substantial number of ‘green’ consultation points. However, without urgent clarification on the points above, the Government has opened itself up to (legitimate) criticism from free speech campaigners and other groups that this is a regime about moderation, censorship and take-down, rather than about designing services that hold in balance the rights of all users, reduce the risk of reasonably foreseeable harms to individuals and mitigate the cumulative impact on society. The Secretary of State has already begun to clarify some aspects of the White Paper before the Select Committee, and we look forward to his further thoughts as the consultation period draws to a close, rather than waiting many months for the traditional, formal Government response.

 

Next steps

We continue to welcome feedback and to work collaboratively with other organisations seeking to achieve similar outcomes in this area. Contact us via: [email protected]

For further information, please see the resources on the project page.