Overview

For the year ending June 2022, there were 3.8 million fraud offences in England and Wales, making fraud by far the most commonly experienced type of crime. More than 60% of these offences took place online. In addition to the life-changing sums of money lost, being scammed can have a lasting impact on victims’ mental and even physical health, undermining their confidence in digital services and their ability to trust others. The Government brought fraud into the scope of the Bill after the pre-legislative scrutiny stage, adding it to the list of priority illegal content (schedule 7). After further campaigning from civil society groups, including Carnegie UK, and from financial sector representatives, it also introduced a new duty (chapter 5) on category 1 platforms and category 2A search engines to have “proportionate systems and processes” to prevent individuals from encountering fraudulent ads and to take such ads down when notified.

Analysis

We agree with the concerns raised by Which? that the fraud protections introduced into the Bill could be strengthened further. In particular, we support the principle behind their call for the Bill to give OFCOM more discretion to determine how platforms should identify illegal content (including fraud). The Bill will confer a significant amount of power on OFCOM as the online safety regulator, so it is critical that the regulator is equipped with the appropriate powers to determine how tech platforms identify and deal with illegal content. As it stands, the drafting of the Bill creates a loophole that could allow platforms to provide second-rate protections against illegal content, including fraudulent content.

As drafted, Clause 170 seeks to establish how platforms should identify illegal content. The clause is overly prescriptive and has the potential to undermine the Bill’s intention of tackling online fraud: it specifies that the standard for gathering information to determine whether content is illegal, and therefore fraudulent, should differ depending on whether a platform uses human moderators alone, automated systems alone, or a combination of the two. We agree with Which? that platforms should use all reasonably available information to determine whether content is illegal, whether the process is manual or automated; this is critical to fraud prevention. Reasonably available information must include consumer complaints and data shared by regulators and other relevant organisations. As drafted, the clause would allow platforms that rely only on human moderators or only on automated systems to be held to a lower standard of detection, and therefore of prevention, omitting reasonably available information simply because of the way they process it. It also risks disadvantaging platforms that follow the current best practice of combining human and automated moderation, as they could be held to a higher standard than competitors who rely on only one method.

Amendment

The Bill must be future-proofed for technological advances and give OFCOM the power to establish how illegal content should be identified through guidance published once the Bill has passed into law, rather than enshrining these details in the legislation now. By setting out detection methods for identifying illegal content in the Bill itself, the Government risks inadvertently stifling innovation: firms that develop new methods to identify online fraud would be held to higher standards than their competitors, which is not just time-consuming but costly, disincentivising them from doing so. We recommend this is rectified by allowing OFCOM to determine the standards required of platforms. Clause 171, which relates directly to Clause 170, should also be improved to ensure there is proper consultation on that guidance.
