Digital Explainer – Online intermediaries: when does knowledge give rise to liability?

06 Mar 2026

‘Online intermediaries’ is often used as a shorthand for the digital platforms that are a feature of our everyday lives, both personal and professional: social media platforms, video and image sharing sites and apps, and digital marketplaces. While they may be given different names in law, such as ‘information society services’ or ‘user-to-user services’, they all depend on content contributed (though not always created) by users. That raises the question: if a user posts something unlawful, can the platform itself be held responsible?

The law in the UK does not give platforms complete immunity for what is posted by their users; instead, whether a platform can be held liable will typically depend on whether it has acted responsibly. Problems most often arise when a platform fails to take steps to address an issue that has come to its attention.

The Electronic Commerce (EC Directive) Regulations 2002 set out ‘safe harbours’ for online intermediaries. They distinguish between three types of activity:

  • Mere conduit – simply transmitting information, such as providing internet access.
  • Caching – temporary storage to improve technical efficiency.
  • Hosting – storing content uploaded by users.

For social media and similar platforms, the hosting defence is the most relevant. Essentially, the platform will not be liable for unlawful user content provided it does not have actual knowledge of the illegality. Once it becomes aware, however, it must act quickly to remove or disable access to the material.

This is the basis of the ‘notice and takedown’ system. Platforms are not under a general obligation to monitor all of the content that users post, but once they become aware of specific unlawful material – for example, defamatory statements, illicit sexual imagery, or copyright infringement – they must respond promptly in order to mitigate their legal risks.

Defamation

The Defamation Act 2013 provides website operators with a specific defence where they can show that a defamatory statement was posted by a user and not by the operator itself. That defence can be lost if the operator fails to follow the required complaints procedure. Once a platform is on notice of potentially defamatory content, it is expected to take the complaint seriously and, where the content appears to be defamatory, to remove it promptly.

Copyright

Copyright claims are a significant risk for platforms where users upload films, music, images, articles, or other protected works without permission from the rights holder. If a platform has no actual knowledge that user content infringes copyright, it will generally not be liable. However, once it is put on notice, such as through a complaint from the rights holder, it must take action to remove the content in order to avail itself of a defence to liability.

Disputes around copyright infringement will often turn on how involved the platform is in the infringing activity. A service that simply provides neutral hosting is more likely to benefit from a defence. However, where a platform selects, promotes, or monetises material that infringes copyright, it risks being seen as an active participant or facilitator, and is more exposed to potential liability.

In cases of widespread infringement, such as where many websites or services seek to distribute unlicensed content like film or television broadcasts, courts can grant orders against internet service providers to block access to those websites. These can include ‘dynamic’ blocking injunctions, where the copyright owners can return to court periodically to have the injunction adjusted to cover further infringing sites that take the place of those which have been taken down or blocked.

Online safety and data protection

Platforms are subject to overlapping regulatory obligations under the Online Safety Act 2023 (the Act), the UK GDPR and the Data Protection Act 2018. The Act introduced proactive duties on providers to guard against illegal content on their platforms, and to seek to ensure that children are not exposed to inappropriate content. Platforms whose services are covered by the Act – for example, those offering user-to-user chat functionality or online search services – must assess the risk of illegal content appearing on their platforms and take proportionate steps to reduce those risks. Where children are likely to access a service, additional safeguards are required.

In the UK, the regulator Ofcom is responsible for overseeing compliance and has the power to impose substantial financial penalties. One recent fine of £1.35M was issued against the provider of a pornographic website that had failed to implement proper age-verification checks to ensure that visitors to the site are at least 18 years old.

The UK Information Commissioner’s Office (ICO) also has enforcement powers to protect children online, and recently fined the platform Reddit over £14M for the improper processing of children’s personal data. The platform failed to have in place age-verification tools to prevent those under the age of 13 from accessing the service.

In the most serious cases – particularly those involving terrorist content or child sexual abuse material – a failure by a platform to act after acquiring knowledge of illegal content may expose it to criminal liability. The UK government has also recently added an amendment to a bill before Parliament which would require platforms to remove non-consensual intimate images flagged to them within 48 hours; a platform that failed to do so would risk a fine of up to 10% of worldwide turnover and, potentially, being blocked from access in the UK.

The approach in the United States tends to differ from that in the UK. Section 230 of the US Communications Decency Act provides that platforms are not to be treated as the publisher of user-generated content, and the US courts have interpreted this protection broadly, so platforms tend to benefit from greater immunity from claims based on such content than they do in the UK. With the recent growth of agentic AI, such as that used in chatbots, the limits of that immunity are being tested, with claims being brought against the providers of those agentic tools for harms suffered by users.

Practical implications

For online intermediaries operating in the UK, managing liability risk begins with understanding the applicable legal framework and the duties it imposes. From there, businesses can adopt measures including:

  • Compliant procedures for handling users’ personal data and ensuring that children are not able to access harmful material;
  • Clear and enforceable terms of service, so that users know they may be liable if they contribute prohibited content or behave inappropriately;
  • Effective policies and procedures for handling complaints and illegal content.

Common mistakes for businesses include adopting inappropriate ‘boilerplate’ policies and procedures, and failing to keep processes under review as the risk landscape evolves. A ‘set it and forget it’ approach does little to mitigate these risks.

If you have questions or concerns about online intermediaries, please contact James Tumbridge and Robert Peake.

For further information please contact:

James Tumbridge

Partner

020 3319 3700

james.tumbridge@keystonelaw.co.uk

Robert Peake

Partner

020 3319 3700

robert.peake@keystonelaw.co.uk