The Online Safety Act 2023 (the Act) attracted a great deal of attention during its passage into law, but now that it is law, what do you need to do?
The law’s aim was always laudable, but it sits in real tension with free speech and the protection of whistle-blowers, and there was debate over whether existing laws already provided the protection we needed and were simply not being used properly. The free-speech concern was perhaps the source of the most strident criticism: the free exchange of ideas is the foundation of all freedom, and it must be protected from risk.
Nonetheless the bill became an Act of Parliament, aimed at protecting us online, and 2025 is when we will see it largely enforced. The Act places new obligations and duties on user-to-user services (including social media) and online search services, including a requirement to implement systems and processes that reduce the risk of those services being used in ways that are illegal or harmful to users.
The Act, heralded as a progressive step toward internet regulation, was passed less than a year after the EU Digital Services Act. Its potential impact on user privacy and online security is why you should think about it in 2025.
What does the Act cover?
The Act covers a wide range of illegal content that platforms are mandated to address. It imposes a duty of care on platforms, especially concerning content for children. A major concern was the requirement for messaging platforms to scan users’ messages for illegal material. This provision sparked objections from tech companies and privacy advocates, who view it as an unwarranted assault on encryption. On top of understanding the Act itself, businesses also have to understand what the regulator, Ofcom, expects of them. By early 2024, Ofcom had already issued over 1,700 pages of guidance in its ‘Consultation: Protecting People from Illegal Harms Online’, and it continues to publish more.
Various companies, from major corporations to smaller platforms and messaging apps, are already required to adhere to comprehensive regulations. For example, the regulations mandate age-verification processes for users. Notably, when that requirement emerged, Wikipedia declared that it could not comply with the Act because doing so would violate the Wikimedia Foundation’s principles on collecting data about its users. Platforms are also obliged to produce risk assessments concerning potential threats to children on their services, and they are expected to establish user-friendly channels through which parents can report concerns. Companies must promptly remove inappropriate content from their platforms, along with fraudulent advertisements. These regulations aim to create a safer online environment, particularly for vulnerable users, but some see compliance as onerous.
One of the most contentious aspects of the Act is section 122, regarding scanning users’ messages for illegal content. To comply, organisations have considered so-called ‘client-side scanning’ software on users’ devices, which would examine messages before they are sent, undermining the protections of end-to-end encryption. Despite objections from experts and tech companies, the section remains part of the law, although the Government has suggested it will not be used while the technology needed to implement it is not yet available. It nonetheless raises serious concerns among privacy advocates and champions of free speech, as well as those concerned with cybersecurity.
At the end of 2024, Ofcom published a roadmap on implementation. The first major policy statement was released on 16 December 2024 and focuses on illegal harms. Businesses therefore need to understand the risks that illegal harms pose to users and to record their risk assessments if they want to be able to show that they gave these risks due thought.
Businesses should also keep in mind Ofcom’s definitions, such as “illegal content”, which is defined as “content that amounts to a relevant offence”; this means you have to identify and understand the offences themselves. The Act categorises them as priority offences and non-priority offences.
What are priority offences?
Priority offences are the most serious offences and include offences relating to: (a) terrorism, (b) harassment, stalking, threats and abuse, (c) coercive and controlling behaviour, (d) hate offences (e.g. hatred based on race, religion, sexual orientation etc.), (e) intimate image abuse (so-called “revenge porn”), (f) extreme pornography, (g) child sexual exploitation and abuse, (h) sexual exploitation of adults, (i) unlawful immigration, (j) human trafficking, (k) fraud and financial offences, (l) proceeds of crime, (m) assisting or encouraging suicide, (n) drugs and psychoactive substances, (o) weapons offences, (p) foreign interference, and (q) animal welfare. All service providers will need to take reasonable steps to prevent users encountering content that amounts to any of these offences.
Ofcom has made clear that it will not hesitate to use its enforcement powers (which include the ability to prosecute criminally) in the event of non-compliance. Enforcement powers also include the power to fine up to the greater of £18m, or 10% of worldwide turnover, and the power to apply to courts for orders that could limit access to platforms in the UK.
The Act and Ofcom place great weight on record-keeping, so consider Ofcom’s Record-Keeping and Review Guidance and its Online Safety Enforcement Guidance.
On 16 January 2025, Ofcom published industry guidance on effective age checks to prevent children from encountering online pornography and to protect them from other harmful content. Pornography services must now introduce age checks by July 2025 at the latest.
The Act divides online services into different categories with distinct routes to implementing age checks. Here is a summary of what Ofcom expects:
- Requirement to carry out a children’s access assessment. All user-to-user and search services must carry out an assessment to establish whether their service is likely to be accessed by children. The deadline for completing this assessment is 16 April 2025.
- Services that allow pornography must introduce processes to check the age of users. The Act imposes different deadlines on different types of provider. Services that publish their own pornographic content, including certain generative AI tools, must begin taking steps immediately to introduce age checks, and all providers must comply by July 2025.
More is still to come: Ofcom is also issuing children’s access and risk assessment guidance in April 2025. Services likely to be accessed by children must carry out a children’s risk assessment within three months of Ofcom publishing its Protection of Children Codes and risk assessment guidance.
If you have questions or concerns about the Online Safety Act, please contact James Tumbridge and Robert Peake.
This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.