Following on from our article outlining what AI is and its risks, our technology partners James Tumbridge and Robert Peake explain what laws and regulations on AI already exist in Europe and what further legal regulation we can expect.

The European Union is the first to bring forward primary legislation on AI, in the form of the EU AI Act, and the Council of Europe has brought forward an AI Convention/Treaty.

The EU AI Act defines an AI system as: ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.

The EU AI Act prohibits certain practices, including:

  • deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
  • exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
  • using biometric categorisation systems to infer sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for the labelling or filtering of lawfully acquired biometric datasets or where law enforcement categorises biometric data.
  • social scoring: evaluating or classifying individuals or groups based on their social behaviour or personal traits, causing detrimental or unfavourable treatment of natural persons.
  • assessing the risk of an individual committing criminal offences solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
  • compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
  • inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
  • ‘real-time’ remote biometric identification (RBI), unless used for law enforcement and covered by narrow exceptions.

The Act classifies AI systems according to their risk. Unacceptable-risk AI is prohibited (e.g. social scoring systems and manipulative AI). High-risk AI systems are regulated. So-called limited-risk AI systems are subject to transparency obligations, so that users know when they are dealing with AI, such as chatbots. The final group, ‘minimal risk’ AI, is unregulated (for example, AI-enabled video games and spam filters). The majority of obligations fall on providers (developers) of high-risk AI systems.

Those that intend to place high-risk AI systems on the EU market or put them into service in the EU, or whose AI outputs are used in the EU, must comply with the Act, regardless of whether they are based in the EU or a third country. Users (deployers) of high-risk AI systems also have obligations, though fewer than providers (developers).

General purpose AI

General purpose AI (GPAI) is an important category of AI that was debated hastily before the Act was passed. The EU offers guidance on what GPAI models are, using ChatGPT as an example. The European Parliamentary Research Service gives a sense of the fears EU lawmakers had about GPAI in its report:

“The key characteristics identified in general-purpose AI models – their large size, opacity and potential to develop unexpected capabilities beyond those intended by their producers – raise a host of questions. Studies have documented that large language models (LLMs), such as ChatGPT, present ethical and social risks. They can discriminate unfairly and perpetuate stereotypes and social biases, use toxic language (for instance inciting hate or violence), present a risk for personal and sensitive information, provide false or misleading information, increase the efficacy of disinformation campaigns, and cause a range of human computer interaction harms (such as leading users to overestimate the capabilities of AI and use it in unsafe ways). Despite engineers’ attempts to mitigate those risks, LLMs, such as GPT-4, still pose challenges to users’ safety and fundamental rights (for instance by producing convincing text that is subtly false, or showing increased adeptness at providing illicit advice), and can generate harmful and criminal content. Since general-purpose AI models are trained by scraping, analysing and processing publicly available data from the internet, privacy experts stress that privacy issues arise around plagiarism, transparency, consent and lawful grounds for data processing. These models represent a challenge …”

GPAI systems have a wide range of possible uses, both intended and unintended by their developers. They can be applied to many different tasks in various fields, but lawmakers are struggling to understand them and provide guidance on their acceptable use. GPAI systems are becoming increasingly useful commercially, and many businesses feel pressure to use them, sometimes without fully considering the use case for adopting GPAI. One important feature that can drive cost savings is transfer learning (applying knowledge from one task to another). These systems are sometimes referred to as ‘foundation models’ and are characterised by their widespread use as pre-trained models – the important questions for users to ask are: what data were they trained on, and is it safe to use them for your purpose?

A single general purpose AI system for language processing can be used as the foundation for many other applied models and the speed of development is very attractive. GPAI systems are increasingly used in applications in medicine and healthcare, finance, life sciences and chemistry, and offer speed and reduced cost in the development of new applications and even inventions.

The Act requires all GPAI model providers to provide technical documentation and instructions for use, to comply with the Copyright Directive, and to publish a summary of the content used for training. Free and open-licence GPAI model providers generally need only comply with the copyright requirements and publish the training data summary.

Providers of GPAI models must consider whether their models present a systemic risk and, if so, they must conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections. The cost of this to business is not yet clear.

In addition to the Act, the European Union, USA, and UK all announced in September 2024 that they would be participating in the Artificial Intelligence (AI) convention/treaty proposed by the Council of Europe. The AI Convention’s stated aims are to address the risks AI could pose while promoting responsible global innovation. It promotes protection for the human rights of people impacted by AI systems, and it sets out a legal framework covering the entire life cycle of AI systems. Amongst other things, the framework says that AI must not undermine democratic institutions or compromise the rule of law, that there must be transparent oversight mechanisms and accountability, and that equality must be promoted.

Some of the key treaty aims are to:

  • Protect personal data
  • Emphasise non-discrimination
  • Promote human dignity
  • Ensure safe AI development

What should guide you?

You may find it helpful to adopt guidelines, or questions to ask every time before you use AI, and we offer some examples here:

  1. Is the use lawful? How will you ensure the use of AI will comply with applicable laws, standards, and regulations?
  2. Are you comfortable with the level of transparency? What do your staff and customers understand about your use of AI? Are you in a regulated space, and can you justify what you use AI for? How readily can you explain it, and justify its decisions, if required to satisfy an enquiry, especially one from a regulator or court? What levels of cyber security are in place?
  3. Who is responsible in your organisation for the safe use of AI? Is there sufficient human oversight? Is there a risk of automated decision-making that creates a data protection problem?
  4. Can you trust the inputs and the consequent outputs? What data was/will be used to train the AI, and what data will be analysed by the AI? Will the AI be robust and reliable enough for its intended purpose? How will you assess the AI’s performance? Do you have the ability to track and report on the quality of outputs?
  5. Can you trace all the data used? Do you know the source of all the data used by the AI? Can you interrogate the supply chain to ensure legal compliance?

If you have questions or concerns about the use of AI, please contact James Tumbridge and Robert Peake.


This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.