The current video sharing platform provisions impose two sets of obligations on platforms: one in relation to content that the platform controls and one in relation to adverts under the control of others. This implies that some providers should already have systems in place governing how advertising is handled across their platform (without imposing direct liability for the content of the ads). OFCOM currently has responsibility for this regime and will continue to do so until the Online Safety Act comes into force, at which point that responsibility will end. The ASA's self-regulatory regime for the content of ads will then apply. This leads to three points:

the regime currently found in the Communications Act imposes stringent rules on ads targeted at children and includes requirements on harmful ads. Clause 39(2), in conjunction with clause 39(7), excludes paid-for advertisements from the scope of “regulated content”. Will the current standards be maintained following the entry into force of the Online Safety Act? This is important, as some forms of advertising could trigger relevant (individual) harm – e.g. the sale of skin-lightening products. Furthermore, the boundary between ads and non-commercial content is, in some contexts (e.g. influencer content), a fine one.

It is unclear what will happen to the control element of the regime. Having systems in place governing, for example, the targeting of advertising at children is part of the systems approach and should remain part of this regime.

The ASA’s ability to impose penalties is weak, especially by comparison with the enforcement regime proposed by the draft Bill (e.g. business disruption measures, as well as fines). While the compliance of an ad’s content with the relevant advertising standards may fall within the ASA’s remit, OFCOM should remain the backstop regulator and, given the draft Online Safety Bill’s focus on systems and business models, should have clear responsibility under the Bill for the placement and targeting of adverts.