The draft Bill does not impose a systemic duty to protect national security but gives the Secretary of State and the regulator powers in the curious form of a ‘public statement notice’ (clause 112). The public statement notice appears to allow the Secretary of State to direct OFCOM to ask regulated services to make a statement on how they are responding to a threat to public safety, public health or national security set out in the notice. This feels like an emergency procedure, rather than a rolling process of risk assessment drawing on the UK government’s extensive risk assessment processes for these harms. Nor does it appear to set a minimum standard for the operators’ respective responses. The public statement notice addresses collective harms, and perhaps for this reason is connected to OFCOM’s media literacy powers in the Communications Act rather than to online safety, which is focussed on harm to the individual. There is no indication of how an inadequate response to a public statement notice might be enforced.

The government said in the December policy document that:

Where disinformation and misinformation presents a significant threat to public safety, public health or national security, the regulator will have the power to act. (para 2.84) [1]

While, as we have noted, some aspects of misinformation relating to health and public safety might be caught, the draft Bill does not deliver a systemic response to that policy intention, nor does it fulfil the Prime Minister’s subsequent commitment in Parliament to the Chair of the APPG on Technology and National Security.

Darren Jones MP: … can the Prime Minister confirm that the online safety Bill that will be presented to the House this year will contain sufficient powers to tackle collective online harms, including threats to our democracy?

Prime Minister: Yes, I can.[2]

The Government’s consultation on Legislation to Counter State Threats[3] (contemporaneous with the draft Bill) says that:

Disinformation and information operations – increasingly, these have become core tools for state and non-state actors alike to sow discord, attempt to interfere in UK democracy, and disrupt the fabric of UK society through division and polarisation

US intelligence reports[4] on Russian interference in the 2020 election demonstrated how social media services are used as attack vectors by the UK’s adversaries. Media and analytical reports[5] suggest there may have been a disinformation attack on UK elections.

The new Atlantic Charter[6] commits the UK and the USA to:

oppose interference through disinformation or other malign influences, including in elections.

The accompanying UK/USA joint statement says that:

Building on the U.K. G7 Presidency’s focus on open societies, and looking ahead to the U.S.-hosted Summit for Democracy, the U.S. and U.K. will continue to make practical efforts to support open societies and democracy across the globe. We will do this by defending media freedom, advancing a free and open internet, combatting corruption, tackling disinformation, protecting civic space, advancing women’s political empowerment, protecting freedom of religion or belief, and promoting human rights of all people.

Civil regulation has an important role to play in national security by requiring proper security of infrastructure – for instance, the physical security of power stations and airports. There is a gap between traditional cyber security, which defends services and networks themselves (such as the NIS regulations[7]), and defending against a disinformation attack, where an adversary exploits the way a service works without having to attack its underlying software. A draft Bill on online safety would be an ideal place to address this vulnerability.

The Government suggests that attacks on elections are not within scope, pointing to the Defending Democracy programme[8] and the work of the counter-disinformation unit – a unit which operates without formal oversight – as its means of addressing this risk. We do not feel that either of these is sufficient for the scale of the threat. Nor do we understand why the draft Online Safety Bill cannot be used to address this clear harm. Instead, a perhaps unintended side effect of clause 13 on democratic speech is that platforms might in fact have to protect political disinformation, even if it has been catalysed by a foreign adversary. We suggest that it should be possible to separate disinformation attacks by state actors or their proxies from lower-level disinformation. The UK has a high-powered process for determining such threats, and the draft Bill contains a potentially strong regime for addressing online harm; it seems wasteful to keep the two apart. National security is also a matter that the state has unique competence to assess – the current informal mechanisms for sharing national security information with major social networks are overdue for formalisation.



[2] HC Deb, 16 March 2021, c175

[3] See

[4] Ref ‘Foreign Threats to the 2020 U.S. Federal Elections’

[5] FT coverage of report by Graphika

[6] Atlantic Charter

[7] The Security of Network and Information Systems Regulations (SI 2018/506) (NIS Regulations)

[8] See e.g. the Government response to the ISC Russia Report, available: