In our previous article we outlined what AI is and the risks it poses. In this article, our technology partners James Tumbridge and Robert Peake expand on what is happening, and why Europe is seen as regulation-heavy on AI, by looking at what the EU AI Act prohibits.

In February 2025, the prohibitions in Article 5 of the EU AI Act took effect, setting out ‘prohibited practices’. The EU Commission has published 140 pages of guidelines clarifying these prohibited practices. Breach of the legislation could result in a fine of up to 35 million euros or 7% of worldwide annual turnover, whichever is higher, so mistakes are potentially costly. You should also keep in mind that even if an AI system is not prohibited by the EU AI Act, its use might still be unlawful under other laws. An obvious risk is processing personal data without a lawful basis under the GDPR, or using a system in a way that infringes someone’s copyright.

Using AI for marketing

If you are using AI in marketing, you need to be aware of the restrictions on systems that could be said to use subliminal techniques beyond a person’s consciousness. A system that is manipulative or deceptive can also breach the law, and in the EU your intent is largely irrelevant to the analysis. If an AI system learns and uses subliminal, manipulative and/or deceptive techniques – even if not instructed to do so – liability can arise where the effect is to impair a person’s ability to make decisions, causing them harm.

The law is trying to stop psychological and financial harms. An AI system seen to exploit vulnerabilities in people is prohibited. This can include exploiting the fact that younger people are seen as more susceptible to online content, and more readily addicted to engaging with it. There is particular concern to stop AI producing human-like emotional responses that exploit children’s vulnerabilities.

There are also prohibitions on detrimental or unfavourable treatment of groups in social scoring contexts. For example, if a credit agency uses an AI system to determine people’s creditworthiness, or a public authority uses one to make decisions on public housing, the basis of the decision must be justified and lawful. The same point arises for risk assessments and profiling of people, save where used for the prevention of crime.

AI and personal data

There are specific prohibitions on facial recognition. For example, you cannot use AI to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. However, generative AI systems that use facial images from the internet to train models capable of generating new images of fictitious persons are acceptable.

There is also a prohibition on using AI to infer emotions in a workplace or educational setting, except where the use is intended for medical or safety reasons. The prohibition does not extend to AI systems inferring physical states such as pain or fatigue.

Using AI to categorise individuals on the basis of biometric data in order to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (the ‘special categories’ of personal data under the GDPR) is prohibited. There is, though, some acceptance of such use for law enforcement and/or medical reasons, recognising that AI used in this way could, for example, assist with a medical diagnosis.

What’s next?

Both the EU and the UK are looking closely at the interface between copyright and AI.

Inside the EU, there is growing concern about the risk AI poses to copyright holders. The European Copyright Society has written an Opinion on Copyright and Generative AI, raising concerns about AI’s risk to the creative industries. The Opinion explores key tensions around model training, rights reservation, transparency, and fair remuneration – the very same issues under debate in the UK in the Copyright & AI consultation, the UK’s 50-point AI plan, and the passage of the Data (Use and Access) Bill.

There is growing tension between the tech companies developing AI models and the rights of creatives. This is prompting closer review of the Directive on Copyright in the Digital Single Market (2019) and the EU AI Act (2024), and questions around how best to regulate generative AI.

The US, too, is grappling with AI and copyright disputes, and new markets are opening before questions of regulation and law have been settled. For example, Christie’s auction house in New York is selling AI art. The sale drew complaints from over 3,000 artists and a demand to cancel what was billed as the first-of-its-kind sale of artworks entirely produced using AI. Christie’s New York said its Augmented Intelligence sale would feature AI artworks spanning five decades, which puts several lots in the pre-generative AI era – an example being Charles Csuri’s 1966 piece Bspline Men. That art was computer-enabled; we should not forget, therefore, that the use of computing techniques to generate artistic works was around long before names like ChatGPT and DeepSeek-R1 dominated the headlines.

If you have questions or concerns about AI, please contact James Tumbridge and Robert Peake.


This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.