Andrea James, Andrew Darwin & Anna McKibbin
Keynote
21 Apr 2026
AI is all around us, and it is giving many people the confidence to comment on subjects they know only through an AI summary. What does this mean for traditional knowledge-based expertise? Do we still need experts, and where will future expertise come from if AI only pulls from the comments of giants? What does this mean for legal expertise?
Traditionally, we turn to legal specialists for the most challenging issues. While many problems start with an expert on a core issue, the best results come from a wider team drawing on different legal expertise, both strategic and black-letter law. The human ability of an expert leader to appreciate what they do not know, and to identify where to find other expert contributions, is the edge that means top-level legal advice cannot come exclusively from AI responses. Many lawyers have started to notice that their opponents' work is in fact the product of some form of generative AI. The telltale signs are points of law that are correct but not properly aligned with the facts, heavy US law content, and a pace of reply that sounds right to a casual reader but is distinctly wrong to a seasoned lawyer.
Issues with relying on AI for legal advice
Consider the example of a person who is convinced their former business partner breached a confidentiality obligation, so they ask AI for help with a letter. The AI suggests referring to the 1969 case of Coco v A.N. Clark (Engineers) Ltd, a landmark English intellectual property case that established three key requirements for a breach of confidence claim. However, that case only helps if you understand the three-part test and correctly apply it to the facts. The equitable doctrine of breach of confidence operates to prevent the recipient of confidential information from taking unfair advantage of it.
In order to succeed in a claim, the three-stage test first set out in Coco requires the information to:
- have the necessary quality of confidence about it;
- have been imparted in circumstances importing an obligation of confidence; and
- have been used in an unauthorised way, to the detriment of the party who communicated it.
If any of these conditions is not met, the AI reply may look good to a non-lawyer, but it has little real effect. This shows understanding and experience still have a place in the provision of expert advice.
According to Dr Virginia Leavell of the Judge Business School at Cambridge University: “When process experts focus on enhancing technological ability, audiences mistake such representations for reality and thus undermine such human, expert authority. Yet when experts modulate the AI outputs and integrate this modulation into decision-making, lay audiences preserve this human authority and thus keep healthy uncertainty alive.”
Perception is important in communications: if you want impact and reaction, a human wisely using AI may be a better course than using AI and assuming all its utterances are ‘expert’.
When knowledge becomes a mere commodity, “its value paradoxically shifts from the content to the context”, according to Ravikiran Kalluri, writing in the MIT Sloan Management Review, who identifies three critical transformations to consider.
It is therefore our view that a human (especially a lawyer) with expert knowledge using AI can outsmart a human who uses AI without expert knowledge. Support for this comes from 1997, when IBM’s Deep Blue defeated World Chess Champion Garry Kasparov. Deep Blue was an expert system designed to play chess better than a Grandmaster, yet the real lesson of that moment was that human expert knowledge combined with AI improved the decision-making. Deep Blue’s programmers gave it a large ‘book’ of moves that it could play automatically, but they also worked with Grandmasters and heuristic experts; this allowed the machine to look up to 30 moves ahead, beating a human champion who typically thinks only 12 moves ahead. Lawyers with experience and expertise of the law, of judges, of litigation and of human behaviour, who can work in teams and leverage tools like AI, will have an advantage; but that still requires there to be experts in the team.
The more fundamental question for our future is whether, in the age of AI, where so much is available with so much ease, enough people will still develop into experts.
If you have questions or concerns about AI, please contact James Tumbridge and Robert Peake.