Artificial Intelligence (AI) has been promoted as the solution to almost everything over the past few years. Its use in legal disputes presents real opportunity, but problems have already emerged, prompting various guidelines and rules to be issued.
The first thing to consider is what is meant by AI, because not everything labelled AI is AI, and not all AI is generative, meaning a system that produces new content in response to a prompt. It is the generative output of AI that has caused most concern in legal proceedings.
We have already seen embarrassment in the US courts: in May 2023, a New York lawyer used an AI tool, ChatGPT, for legal research, and the results included fabricated cases. Judge Castel ordered the legal team to explain itself.
Use of AI in English courts
English courts received guidance for judges on the use of AI in December 2023. One of the key warnings to judges was that the ‘[C]urrently available LLMs appear to have been trained on material published on the internet. Their “view” of the law is often based heavily on US law, although some do purport to be able to distinguish between that and English law.’
The English courts do not ban the use of AI, but judges and lawyers alike have been told clearly that they are responsible for material produced in their name. In England, AI can be used, but the human user is responsible for its accuracy and answerable for any errors.
England has been looking to technology, and potentially AI, to help with cases for some time. By March 2024, algorithm-based digital decision-making was already working behind the scenes in the justice system. Lord Justice Birss explained then that an algorithmic formula was already solving a problem at the online money claims service, being applied where defendants accept a debt but ask for time to pay. Looking to the future, Birss LJ said: “AI used properly has the potential to enhance the work of lawyers and judges enormously.”

In October 2024, the Lord Chancellor and Secretary of State for Justice, Shabana Mahmood MP, and the Lady Chief Justice, The Right Honourable the Baroness Carr of Walton-on-the-Hill, also echoed the potential of technology for the future of the courts and justice system.

Nothing is perfect, though, and alongside accuracy there is concern about the ethics of AI use. On ethical AI and international standards, the UK promotes the Ethical AI Initiative and the international standard ISO 42001, the AI management system standard. This may be adopted as a standard in English procedure at some point.
Use of AI in other jurisdictions
Other jurisdictions are expressing more caution. In Australia, New South Wales issued a new rule in November 2024, which comes into effect in February 2025. It states that generative AI must not be used to generate the content of affidavits, witness statements, character references or other material intended to reflect a deponent’s or witness’s evidence and/or opinion, or other material tendered in evidence or used in cross-examination. The Court is keen to ensure that affidavits, witness statements and character references contain and reflect a person’s own knowledge, not AI-generated content. Further, generative AI cannot be used to alter, embellish, strengthen, dilute or otherwise rephrase a witness’s evidence when expressed in written form.
In September 2024, the Canadian courts received guidance from the Canadian Judicial Council on the use of AI. The guidance first reminds everyone that judges hold exclusive responsibility for their judicial decisions; nonetheless, judges are encouraged to make use of available support, including AI. Its stated objective is twofold: to establish a rationale for a consistent approach to the use of AI in Canadian courts, and to shed light on both the opportunities and the risks of AI’s potential incursion into court administration and judicial decision-making.
The future is clear: AI will be part of the administration of justice. Equally clear is the concern about its use. Generally, we are seeing procedural requirements to disclose its use in the preparation of materials; in some jurisdictions it cannot be used for certain tasks; and lawyers and judges must own the outcomes as their responsibility. It appears, therefore, that AI in our justice system will happen, but on the basis of human oversight and responsibility.
If you have questions or concerns about the use of AI in court, or AI in general, please contact James Tumbridge.
This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.