
Keynote

Digital Explainer: When is AI use acceptable for legal matters?

06 May 2026



AI is expected to disrupt legal services, but it has also raised many concerns. In this Keynote, James Tumbridge and Robert Peake outline what judges in the UK consider acceptable use of AI.

In April 2026, the Chancellor of the High Court (Sir Colin Birss LJ) gave a speech on AI. As one of the UK’s most senior judges (and in fact one of its most technically able), his speech is the clearest steer we have on the UK judicial view of AI. Sir Colin helpfully set the scene on what makes machine learning/AI so different. He said there are two central features:

“The first is that we now have machines which at least appear to operate on the basis that they understand English. That is new and significant. … The second is that unlike traditional computers, the way these systems appear to operate is probabilistic. … My point is that they will not answer the same question in the same way every time. There is a degree of variation. One cannot fully predict, in advance, what the machine will say or do in response to a given set of circumstances.”

This recognition goes to the heart of the hallucination risk, and it shows that judges understand it. Across the legal community worldwide, we have heard of AI hallucinations, where someone asks an AI for help and it provides a made-up case authority. This troubling trend has drawn criticism from many courts and tribunals and has prompted the guidance outlined below.

Now, thanks to an immigration case from November 2025, we also have a UK tribunal telling us that using AI carries a risk of losing legal professional privilege. This is a very serious concern and needs to be understood.

What is legal professional privilege?

Legal professional privilege is an essential safeguard in the legal system. It exists as a protection for the individual, entitling you to tell your lawyer anything without anyone else having a right to know what you discuss. The House of Lords confirmed in 2004: “[privilege] … attaches to all communications made in confidence between solicitors and their clients for the purpose of giving or obtaining legal advice even at a stage when litigation is not in contemplation.”

Privilege is conferred on communications between a client and their lawyers, as opposed to confidential communications generally, and is justified by reference to the special nature of the administration of justice. When it comes to privilege, you cannot ‘pick and choose’ what is and is not disclosed: where privilege in a communication is waived, the document in its entirety, including all admissions in it, will be put before the court (Somatra Ltd v Sinclair Roche & Temperley [2000] 1 WLR 2453).

What is the view of AI among judges?

Judicial AI Guidance was published in October 2025. It has three key aspects:

  • Judges are not prohibited from using AI.
  • Judges must take full personal responsibility for whatever goes out in their name.
  • A judge who wishes to use any kind of AI system should only use a system which they are sure is secure.

The Judicial Office, His Majesty’s Courts and Tribunals Service (HMCTS), and the Ministry of Justice’s AI unit have provided two AI systems which judges are satisfied are secure.

Judges hope to use AI for at least four purposes:

  • As a transcription tool.
  • To help produce anonymised judgments (a particular issue in family law, but not limited to that).
  • To identify internal inconsistencies in draft judgments during preparation.
  • For administrative tasks; using Copilot to find things in files more efficiently, for example.

Judges are thinking about privilege and whether AI helps or hinders courts

Courts all over the world have seen unrepresented litigants use AI. The Chancellor commented that AI has increased the volume of material submitted by such litigants, but he also noted that, in his view, this is a positive for access to justice. AI-generated material can be very long and is not always right, but his view is that a litigant in person’s case is presented more clearly and coherently with AI’s help.

What the Chancellor accepts, however, is that while an exchange with a lawyer is privileged, an exchange between a litigant and AI may not be. At present, the law does not recognise privilege in such exchanges, but the hint is that it might evolve. By extension, a lawyer who does not use a secure system risks losing privilege, as explained below.

Importantly, the Chancellor accepts that extending privilege to the use of AI is complicated, and he cited the case of R (Prudential) v Special Commissioners of Income Tax [2013], in which the Supreme Court said that Parliament should decide whether or not to extend privilege.

The Chancellor also noted the important decision in UK v Secretary of State for the Home Department [2026] UKUT 81 (IAC), a Hamid decision concerned with the use of AI resulting in false information being submitted to the Tribunal. The Tribunal considered the decision in Ayinde from June 2025, which led the Law Society to issue guidance on generative AI. The Tribunal said that it had seen a considerable increase during 2025 in fictitious authorities being cited. In an effort to curb that trend, the claim form has now been amended to require a legal representative to confirm, by a statement of truth, that any authority cited (a) exists; (b) may be located using the citation provided; and (c) supports the proposition of law for which it is cited. A legal representative who signs such a statement where there are false authorities should now expect to be referred to their regulatory body.

The Chancellor noted the importance of maintaining confidentiality and cited the Tribunal decision on the problems: it confirms that uploading confidential documents into a public AI tool, such as ChatGPT, is considered to place the information in the public domain, thereby breaching client confidentiality and waiving legal privilege. Everyone should think carefully about the kind of tool they use and the consequences of what they put into it. Using a tool without thought can cost you your legal privilege, meaning information you thought was confidential may become public.

If you have questions or concerns about the use of AI, please contact James Tumbridge and Robert Peake.

For further information please contact:

James Tumbridge

Partner

020 3319 3700

james.tumbridge@keystonelaw.co.uk

Robert Peake

Partner

020 3319 3700

robert.peake@keystonelaw.co.uk
