16 Oct 2024
Following on from our article outlining what AI is and its risks, our technology partners James Tumbridge and Robert Peake explain what laws and regulations on AI already exist in Europe, and what further legal regulation we can expect.
The European Union is the first to bring forward primary legislation in the form of the EU AI Act, and the Council of Europe has brought forward an AI Convention/Treaty.
The EU AI Act defines AI as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
A risk-based approach
The Act classifies AI according to its risk. AI posing an unacceptable risk is prohibited (e.g. social scoring systems and manipulative AI). High-risk AI systems are regulated. So-called limited-risk AI systems are subject to transparency obligations, so that users know when they are dealing with AI such as chatbots. The final group, 'minimal-risk' AI, is unregulated (for example AI-enabled video games and spam filters). The majority of obligations fall on providers (developers) of high-risk AI systems.
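The four-tier scheme described above can be written out as a simple lookup. This is only an illustrative sketch based on the summary in this article (the tier names and examples come from the text; the structure is ours), not legal advice or an official classification tool:

```python
# Illustrative sketch of the EU AI Act's four risk tiers, as summarised above.
# Tier names and examples follow the article; this is not legal advice.
RISK_TIERS = {
    "unacceptable": {
        "treatment": "prohibited",
        "examples": ["social scoring systems", "manipulative AI"],
    },
    "high": {
        "treatment": "regulated; most obligations fall on providers",
        "examples": [],  # the Act itself lists the high-risk use cases
    },
    "limited": {
        "treatment": "transparency obligations, so users know it is AI",
        "examples": ["chatbots"],
    },
    "minimal": {
        "treatment": "unregulated",
        "examples": ["AI-enabled video games", "spam filters"],
    },
}

def treatment(tier: str) -> str:
    """Return the regulatory treatment for a given risk tier."""
    return RISK_TIERS[tier]["treatment"]
```

A real classification exercise turns on the Act's detailed definitions, so a lookup like this can only ever record a conclusion already reached with legal advice.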
The Act applies to those who place high-risk AI systems on the EU market or put them into service in the EU, or whose AI outputs are used in the EU, regardless of whether they are based in the EU or a third country. Users (deployers) of high-risk AI systems also have obligations, though fewer than providers (developers).
General purpose AI
General purpose AI (GPAI) is an important category of AI that was hastily debated before the Act was passed. The EU offers guidance on what GPAI models are, using ChatGPT as an example. A report by the European Parliamentary Research Service gives a sense of the fears EU law makers had about GPAI:
“The key characteristics identified in general-purpose AI models – their large size, opacity and potential to develop unexpected capabilities beyond those intended by their producers – raise a host of questions. Studies have documented that large language models (LLMs), such as ChatGPT, present ethical and social risks. They can discriminate unfairly and perpetuate stereotypes and social biases, use toxic language (for instance inciting hate or violence), present a risk for personal and sensitive information, provide false or misleading information, increase the efficacy of disinformation campaigns, and cause a range of human computer interaction harms (such as leading users to overestimate the capabilities of AI and use it in unsafe ways). Despite engineers’ attempts to mitigate those risks, LLMs, such as GPT-4, still pose challenges to users’ safety and fundamental rights (for instance by producing convincing text that is subtly false, or showing increased adeptness at providing illicit advice), and can generate harmful and criminal content. Since general-purpose AI models are trained by scraping, analysing and processing publicly available data from the internet, privacy experts stress that privacy issues arise around plagiarism, transparency, consent and lawful grounds for data processing. These models represent a challenge …”
GPAI systems have a wide range of possible uses, both intended and unintended by their developers. They can be applied to many different tasks in various fields, but law makers are struggling to understand them and to provide guidance on their acceptable use. GPAI systems are becoming increasingly useful commercially, and many businesses feel pressure to adopt them, sometimes without fully considering the use case. One important feature that can drive cost savings is transfer learning (applying knowledge from one task to another). These systems are sometimes referred to as 'foundation models' and are characterised by their widespread use as pre-trained models. The important questions for users are: what data were they trained on, and is it safe to use them for your purpose?
A single general purpose AI system for language processing can be used as the foundation for many other applied models and the speed of development is very attractive. GPAI systems are increasingly used in applications in medicine and healthcare, finance, life sciences and chemistry, and offer speed and reduced cost in the development of new applications and even inventions.
The Act requires all GPAI model providers to provide technical documentation and instructions for use, to comply with the Copyright Directive, and to publish a summary of the content used for training. Providers of free and open-licence GPAI models generally only need to comply with the Copyright Directive and publish the training-content summary.
Providers of GPAI models must consider whether they present a systemic risk; if so, they must conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections. The cost of this to business is not yet clear.
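The obligations in the two paragraphs above depend on two conditions: whether the model is released under a free and open licence, and whether it presents a systemic risk. As a hedged sketch of that logic (the function and its wording are our own illustration of the article's summary, not an official compliance tool):

```python
def gpai_provider_obligations(free_and_open_licence: bool,
                              systemic_risk: bool) -> list[str]:
    """Illustrative checklist of GPAI provider duties under the EU AI Act,
    following the summary in this article; not legal advice."""
    if free_and_open_licence and not systemic_risk:
        # Free and open-licence providers have a reduced set of duties.
        obligations = [
            "comply with the Copyright Directive",
            "publish a summary of training content",
        ]
    else:
        obligations = [
            "provide technical documentation",
            "provide instructions for use",
            "comply with the Copyright Directive",
            "publish a summary of training content",
        ]
    if systemic_risk:
        # Extra duties where the model presents a systemic risk.
        obligations += [
            "conduct model evaluations",
            "perform adversarial testing",
            "track and report serious incidents",
            "ensure cybersecurity protections",
        ]
    return obligations
```

Note that on this reading the reduced open-licence regime falls away once a model presents a systemic risk: the full set of duties then applies.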
In addition to the Act, the European Union, USA and UK all announced in September 2024 that they would participate in the artificial intelligence convention/treaty proposed by the Council of Europe. The AI Convention's stated aims are to address the risks AI could pose while promoting responsible innovation globally. It promotes protection of the human rights of people impacted by AI systems, and sets out a legal framework covering the entire life cycle of AI systems. Amongst other things, the framework says AI must not undermine democratic institutions or compromise the rule of law, must have transparent oversight mechanisms, must ensure accountability, and must promote equality.
What should guide you?
You may find it helpful to adopt guidelines, or questions to always ask before you use AI: for example, what data was the system trained on, is it safe to use for your purpose, and who is accountable for its outputs?
If you have questions or concerns about the use of AI, please contact James Tumbridge and Robert Peake.