Companies will be obliged to assess whether it is possible for children to access their service and, if so, whether it is likely; they will only be able to conclude "no" if systems and processes are in place to prevent children's access. OFCOM will provide guidance to assist platform operators with this assessment (cl 28), but the obligation on platforms to carry it out will not take effect until OFCOM has published that guidance. As guidance, this is not enforceable; the requirement to make the assessment and to keep it up to date is.

While there is little prescription on the face of the draft Bill regarding the precise measures required to protect children, we are content that the overall approach is systemic. The risk assessment (cl 61(2)(ii)) and the Online Safety Objectives (cl 30(2)(a)(vi)) note the different developmental stages of children. We welcome the indication that some measure of age verification or age assurance will be required before platforms can "be entitled to conclude" that it is not possible for children to access a service (cl 26(3)), but further clarification is required here, in particular on whether standards for age verification will be set – it is not clear whether the clause 28 guidance would go this far.

While the risk assessment specifically flags the risk of adults contacting children, it assumes that such contact will happen within a single platform and fails to take into account the interaction between platforms: for example, where people intending to abuse or groom children use one platform to flag that activity and another to carry it out. There is no cross-platform duty to collaborate, even on illegal risks such as CSEA, or to risk assess on that basis. How will issues of cross-platform harm and cross-platform co-operation be addressed?