June 20, 2023

The Metaverse and Machine Generated Content

by Professor Lorna Woods, Professor of Internet Law, University of Essex and Dr Alexandros Antoniou, Lecturer in Media Law, University of Essex

As the Online Safety Bill reaches the end of its Lords’ Committee scrutiny and Peers prepare to return to some of the priority issues during Report stage in July, Professor Lorna Woods and Dr Alexandros Antoniou consider whether the Bill – as currently drafted – adequately covers the common elements associated with “metaverse”-type environments.

The metaverse will include visual (and perhaps auditory) elements of the environment with which users engage. The same is true of online games and other virtual environments. These visual and/or auditory elements could be the environment’s landscape and architecture, or avatars produced by the service (e.g., as a moderation feature, an advertising feature or a search function – or simply as part of the environment’s ‘vibe’, not under the control of any user). The issue considered here is whether these elements – if harmful or criminal – are likely to be caught by the Online Safety regime.

The Government has confirmed, most recently in the House of Lords Committee stage debate on the Bill on 25 May, that the metaverse is “in scope” of the regime. Here we break down what “in scope” means with reference to three questions which must all be answered affirmatively for the regime to apply:

  • is the service regulated under the regime?
  • is the content or behaviour concerned within the regime?
  • does it trigger an obligation on the part of the provider to do something about it?

Is the service regulated?

Lord Parkinson, responding to a group of amendments on future-proofing in the House of Lords Committee stage debate on 25 May, repeated the position the Government had already taken on this (see also the statement of Nadine Dorries when she was Secretary of State (col 96)):

“The Bill is designed to regulate providers of user-to-user services, regardless of the specific technologies they use to deliver their service, including virtual reality and augmented reality content. This is because any service that allows its users to encounter content generated, uploaded or shared by other users is in scope unless exempt.”

It is clear that a metaverse (or similar VR environment) would fall within the definition of user-to-user service (subject to there being the requisite connection with the UK). For analysis, see the Carnegie UK website here.

The fact that some content on the service may not be “regulated user-generated content” (cl 49(2) and (3))[1] does not take the service out of the regime, provided there is some regulated user-generated content present (save for certain noted exceptions in the Bill – see Schedule 1).

Nonetheless, some of the safety duties (e.g., those that focus on takedown) may not be well suited to a VR environment, which seems to involve real-time streaming of content. Such interventions seem better suited to services with content archives, as social media services such as Twitter, Facebook, Instagram and TikTok have (even if they also allow live streaming). “By design” interventions (including but not limited to user empowerment tools) seem more important here.

Does the Content or Behaviour Fall within the Regime?

As regards content relevant for the regime, the drafting is not perfect on this point because the duties and other definitions do not refer to “regulated user-generated content” but just to “content”. If we assume that this is a drafting glitch (because otherwise cl 49 would have no purpose and certain types of content seemingly intended to lie outside the regime would suddenly fall within it), then the scope of “regulated user-generated content” is of central importance.

On this point, Lord Parkinson said in the House of Lords debate:

“The Bill does not distinguish between the format of content present on a service. Any service that allows its users to encounter content generated, uploaded or shared by other users is in scope unless exempt, regardless of the format of that content. This includes virtual and augmented reality material. Platforms that allow such content, such as the metaverse, are firmly in scope of the Bill and must take the required steps to protect their users from harm.”

Lord Parkinson’s comments seem to refer to the text of cl 49, which defines regulated content as “user-generated content” but excludes a list of content types in cl 49(2). “User-generated content” is then defined in cl 49(3) as:

  • content
  • “generated directly” on or uploaded to or shared on the service by a user
  • “encountered by another user”.

“Content” is a defined term, drafted broadly (cl 207(1)) as “anything communicated by means of an internet service, whether publicly or privately, including written material or messages, oral communications, photographs, videos, visual images, music and data of any description”. In terms of format, it is unlikely that anything a user does online would fall outside this definition; content could potentially also cover material sent across the service by the service provider itself (though presumably it excludes data that constitutes the operation of the service itself).

Lord Parkinson elaborated on this point in the debate:

“Content” is defined very broadly in Clause 207(1) as “anything communicated by means of an internet service”. This includes virtual or augmented reality. The Bill’s duties therefore cover all user-generated content present on the service, regardless of the form this content takes, including virtual reality and augmented reality content. To state it plainly: platforms that allow such content — for example, the metaverse — are firmly in scope of the Bill.

The key point for the definition is that the user is the motivating force for the introduction of the content, whether directly or through tools, including automated tools such as bots. Content generated by the service without the user is not regulated content for the purposes of the regime. The boundary of “generated directly” (cl 49(3)(a)) is, however, a little uncertain. If a user has an avatar, the actions of that avatar are probably user-generated content (given the width of the definition of content (cl 207(1)) and the fact that the user will be controlling the avatar); scenery and architecture are probably not user-generated, as they would be there irrespective of the user’s presence or actions (seemingly similar to “provider content” – see Sch 1 para 4(2), though the definition is linked to para 4(1)). The middle case is unclear: if (for example, in a game situation) the user’s actions trigger a predetermined response, is that the user generating the content (and does it matter if the user has only a limited menu of choices)?

Note that the definition of “encounter” – “read, view, hear or otherwise experience content” (cl 207(1)) – seems broad enough to be reasonably future-proof.

Of course, if the content produced by the platform is itself contrary to the criminal law, then the service provider would be open to prosecution (though it is easy to imagine that there might be difficulties with mental elements where content is automatically generated). It is less clear how responsibility could be attributed to a service provider in relation to non-criminal behaviour. It might be argued that architecture would not be harmful, but what, for example, of a virtual environment that placed a strip joint (with an explicit window display, and assuming that pornography has been designated as primary priority content) as the backdrop to a school playground aimed at child users?

Additionally, the material must be capable of being encountered by another user. So, for example, online games in which users do not interact would not trigger the application of the regime, but games where users can interact would seem to be caught. Presumably the condition is satisfied if users could encounter the content, rather than turning on whether another user has actually encountered it in a particular case; the obligations relate to systems, not to individual items of content or activity. So, looking at the examples given by Baroness Harding regarding avatars of children (Col 990-1), these might not be caught by the regime (no matter what the criminal law says, or whether they are deemed to be user-generated or provider content) if the content produced is not encountered, or capable of being encountered, by another user. A further issue is whether the encountering must take place by virtue of the one service: the Oculus headset example she mentioned seems to have three elements – the headset, the app for searching and the mobile phone – so how much of this would be a regulated service? Presumably the key aspect here is the element that allows the sharing of the material, satisfying the ‘encountering’ aspect of the definition.

Harmful and Criminal Content

Service providers need to take action only in respect of “content” that is illegal or harmful to children (where the service is likely to be accessed by children). The latter category may give rise to questions about what is harmful, but these questions probably do not differ much between other online environments and VR – the harm is ultimately felt by the same person (the user), whether through viewing harmful material or experiencing harmful encounters.

There is a question as to whether all criminal offences translate into the VR environment. The rape of an avatar by another (by contrast to threatening messages on standard social media) may not trigger the criminal law, because the avatar is not a person and the physical elements of the crime may not be satisfied (though note that some offences specifically include representations). This, however, concerns the criminal offence for which a user may be prosecuted; the existence of such an offence does not necessarily mean it is a “relevant offence” (cl 53(4)) for the regime. Whether the criminal law adequately covers users’ online behaviours is a separate question (as discussed by Baroness Berridge, col 996) and one to which the Minister, in the Lords debate, committed to respond.

Note that – other than the specified priority offences (cl 53(7)) – relevant offences must satisfy cl 53(3) and (5). Notably, cl 53(3) requires that the use, possession, viewing, publication or dissemination of the content in issue must itself be an offence for it to be a relevant offence for the regime. Thus, even in non-VR online environments, videos of people being assaulted would not necessarily trigger the criminal content regime, even when the subject matter of the video was itself a criminal act. This point does not seem to vary between the VR and non-VR environments, notwithstanding the issues about avatars already noted.

Of course, the video or other content could still fall foul of other criminal offences that do relate to the use, possession etc. of content: for example (depending on the content), the Obscene Publications Act or s 127 of the Communications Act 2003, amongst others. Neither of these is listed as a priority offence under Sch. 7. Although Sch. 6 includes s 2 of the Obscene Publications Act, the drafting in Sch. 6 limits the offence to child sexual offences. Also, although extreme pornography (listed in Sch. 7 as a priority offence) would trigger the regime and covers ‘explicit and realistic’ representations, some types of sexual conduct covered by that legislation (e.g., acts which are ‘life-threatening’ or likely to result in ‘serious injury’) are limited in scope and have attracted criticism for being vague.

As far as child sexual images are concerned, virtual child pornography (including computer-manipulated and computer-generated images) would trigger the regime, but ‘prohibited images’ (Sch. 6, para. 7), which extend to imaginary children, require a higher obscenity threshold than the lower standard of indecency (needed for photograph-based offences) and are limited to specific types of sexual conduct. Perhaps this point is illustrated by the footage of the recent Annecy stabbings: in the absence of terrorism, hate or glorification, it is unclear whether this footage would contravene the criminal law.

The existence of a relevant offence brings the regime into play, but the choice of criminal offence may have an impact on whether the priority rules apply, or on the proportionality of action against that sort of content. There remains the question of how activities in VR contexts are understood. As noted, “content” for the purposes of the Online Safety Bill is broadly drafted: it is not limited to static text or images (cl 207(1)) and is probably broad enough to catch user activity – assuming a relevant offence is available.

Conclusion and recommendations

From this analysis, we make the following recommendations to ensure that the protections in the Online Safety Bill are sufficient for the types of activity and content likely to occur within VR environments, and to ensure as much parity as possible with the non-VR world.

  • The nature of the offences that are likely to occur within VR environments needs to be reviewed: are there sufficiently serious criminal offences, relevant for the Bill’s purposes, to trigger the application of the regime?
  • The commitments made at the despatch box by Lord Parkinson to continue discussing the issue of the current extent of the criminal law – and whether it adequately covers online activity – need to be expedited as the Bill completes its progress through Parliament.
  • The extent to which ‘provider content’ – whether generated automatically by a machine or not – is adequately covered (beyond the Part 5 regime for pornography) needs to be considered.
  • The Government needs to ensure that OFCOM – in developing the necessary risk profiles and relevant guidance (for example on risk assessments and criminal offences) and codes of practice to support the implementation of the Bill – considers the nuances of virtual environments. These products are the route by which the Government’s broad assurances that the metaverse and other VR environments are “in scope” of the legislation will be understood in practical terms by regulated services. If there is a gap, then OFCOM’s ability to enforce the regime effectively with respect to harms occurring in virtual environments will be compromised.
  • The Government should accept the amendments to introduce a future-looking risk assessment, tabled by the Lord Bishop of Oxford and presented by his colleague the Lord Bishop of Chelmsford in the Committee debate (see amendment numbers 195, 239 and 263), to ensure that future risks emerging from the evolution of VR environments, and the development of new services and functionalities related to them, are adequately identified.

[1] All clause numbers refer to the version of the Bill sent from the Commons to the Lords in January 2023 and are likely to change ahead of Lords’ Report stage: https://bills.parliament.uk/publications/49376/documents/2822