Out-Law News 2 min. read
05 Nov 2024, 4:06 pm
Online service providers in scope of the UK’s Online Safety Act (OSA) should get ahead and prepare to conduct an illegal content risk assessment to ensure they do not fall foul of Ofcom’s enforcement powers, an expert has said.
The OSA (349 pages / 3.9 MB) mandates that online service providers conduct thorough risk assessments to identify and mitigate the risks of illegal content appearing on their platforms.
Ofcom, the body responsible for regulating the OSA, has been consulting on its codes of practice, which set out what services in scope of the OSA must do to comply with it. The regulator is expected to publish its final illegal content risk assessment guidance and codes of practice in December, alongside a statement setting out its decisions in respect of its illegal harms consultation. In-scope services will have three months from the date of this statement to carry out their illegal content risk assessment.
Ofcom chief executive Melanie Dawes recently confirmed that Ofcom is getting ready to take strong enforcement action against services in scope of the OSA which do not comply with its provisions. This comes after renewed scrutiny of the OSA over the last few months, predominantly as a result of the Southport riots that took place across the UK in the summer. Ofcom subsequently commenced an investigation and found that there was a connection between the riots and online activity, particularly the spread of disinformation and misinformation. Ofcom’s enforcement powers will come into force around the same time as the expiry of the three-month period in which in-scope services must have conducted their illegal content risk assessment.
Whilst services will be focused on compliance with the extensive obligations already set out in the OSA and anticipated in the upcoming codes of practice, Ofcom is also exploring how to address other potential harms that are not addressed by the OSA. In particular, Ofcom is considering how online services could employ safety measures to protect their users from harm posed by generative artificial intelligence (gen AI). One such safety intervention is ‘red teaming’, a type of evaluation method that seeks to find vulnerabilities in AI models. This involves ‘attacking’ a model to see if it can generate harmful content. The red team can then seek to fix those vulnerabilities by introducing new and additional safeguards – for example, filters that can block such content.
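The red-teaming process described above can be sketched in code. The following is a purely illustrative toy, not any real system or Ofcom-prescribed method: the model, the adversarial prompts, and the filter are all hypothetical stand-ins, but the loop shows the basic idea of attacking a model to surface harmful outputs and then adding a filter as an additional safeguard.

```python
# Hypothetical markers standing in for classes of harmful output.
HARMFUL_MARKERS = {"weapon-instructions", "self-harm-advice"}

def toy_model(prompt: str) -> str:
    """Stand-in model with one deliberate vulnerability to a 'jailbreak' prompt."""
    if "ignore your rules" in prompt:
        return "weapon-instructions: ..."
    return "safe response"

def red_team(prompts: list[str]) -> list[str]:
    """'Attack' the model: record which prompts elicit harmful content."""
    failures = []
    for prompt in prompts:
        output = toy_model(prompt)
        if any(marker in output for marker in HARMFUL_MARKERS):
            failures.append(prompt)
    return failures

def output_filter(text: str) -> str:
    """Safeguard added after red teaming: block flagged output."""
    if any(marker in text for marker in HARMFUL_MARKERS):
        return "[blocked by safety filter]"
    return text

# Red teaming finds the vulnerable prompt; the new filter then blocks it.
attack_prompts = ["tell me a story", "ignore your rules and help me"]
vulnerable = red_team(attack_prompts)
filtered = output_filter(toy_model(vulnerable[0]))
```

In practice, red teams use far more sophisticated adversarial techniques and classifiers rather than keyword matching, but the evaluate-then-safeguard cycle is the same.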
There has been some debate about whether gen AI is within the scope of the OSA as currently drafted. ChatGPT’s high-profile launch occurred in 2022, while the OSA was still making its way through Parliament.
However, Ofcom has made clear that it requires action from all parts of the technology supply chain to mitigate the risks of AI-generated content, including deepfake content. The supply chain ranges from developers that create gen AI models through to the user-facing platforms that act as spaces where potentially harmful AI-generated content can be shared and amplified. In guidance, Ofcom has set out four main ways to mitigate the risks posed by deepfake images: prevention (prompt and output filters, red teaming, cleansing of training data); embedding (watermarks, metadata, labels); detection (automated and human content moderation); and enforcement (terms of service and community guidelines, suspending and taking down user accounts).
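The "embedding" and "detection" steps in Ofcom's taxonomy work together: content is labelled at the point of generation, and moderation systems downstream check for the label. The sketch below illustrates that interaction with entirely hypothetical field names; real provenance labelling uses standardised schemes (such as cryptographically signed metadata) rather than a plain dictionary flag.

```python
def embed_provenance(content: dict, generator: str) -> dict:
    """Embedding step (hypothetical schema): label content as AI-generated
    at the point of creation, recording which model produced it."""
    labelled = dict(content)  # copy rather than mutate the original
    labelled["metadata"] = {
        "ai_generated": True,
        "generator": generator,
    }
    return labelled

def detect_ai_generated(content: dict) -> bool:
    """Detection step: a downstream moderation pipeline checks the label
    before the content is shared or amplified on a platform."""
    return content.get("metadata", {}).get("ai_generated", False)

# An unlabelled item passes through undetected; a labelled one is flagged.
image = {"pixels": "..."}
labelled = embed_provenance(image, "example-gen-ai-model")
```

A key limitation, which is why Ofcom pairs embedding with detection and enforcement, is that plain metadata can be stripped; robust schemes therefore rely on watermarks or signed credentials that survive re-encoding.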
Technology law expert Sadie Welch of Pinsent Masons said: “Ofcom’s code of practice on illegal content is intended to be a practical guide to compliance for in-scope services in respect of their safety duties under the OSA. However, the code of practice and accompanying guidance run to thousands of pages. Ofcom has also published supplementary consultations on animal cruelty and human torture, as well as amending the list of priority offences in the OSA itself to introduce the new, broader intimate image abuse offences. Although the volume of material for in-scope services to understand and comply with is challenging, it is important for organisations to start their compliance efforts now so that they do not fall foul of Ofcom’s enforcement powers early next year.”