Given the widespread use of AI and the benefits it brings, it is tempting to assume it is risk-free. However, the commercial and financial benefits of AI technologies can disguise the legal risks that AI presents for organisations.
Some legal liability risks for an organisation using AI include:
- personal data breaches;
- intellectual property (IP) rights infringements;
- breaches of confidence and trade secrets;
- contract breaches and negligence;
- the publishing of defamatory statements; and
- employment and discrimination law concerns.
Legal risks when using AI
One way to assess the risks of AI is to consider the structure common to many AI systems, which typically involves three stages: training, processing and output.
- Training the AI
To be accurate and effective, AI must first be trained on as large a dataset as possible, which (depending on the type of AI technology) can involve an organisation using a large amount of the data it already holds.
Some of the legal risks arising from this are:
- Personal data – the misuse of personal data, particularly where the necessary consents have not been obtained (a simple data-minimisation mitigation is sketched after this list).
- Database rights – the infringement of database rights where a substantial part of a third-party database has been extracted or reutilised without the owner’s consent.
- Copyright – the infringement of third-party copyright. The training process may involve storing and making copies of copyright works, which, without permission, would be likely to infringe copyright. Processing and/or dealing with infringing copies may also give rise to infringement.
- Trade secrets and confidentiality – the breach of existing confidentiality and trade secret obligations.
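By way of illustration only, the sketch below (in Python) shows one simple data-minimisation step of the kind that can reduce personal data risk at the training stage: stripping direct identifiers from records before they are added to a training set. The field names and records are invented for the example, and a step like this is a starting point rather than a substitute for proper pseudonymisation or data protection advice.

```python
# Illustrative sketch only (assumed field names, invented records): a minimal
# data-minimisation step that strips direct identifiers before records are
# used for training. Real pseudonymisation needs a proper data protection review.
from typing import Iterable

DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}  # hypothetical fields

def strip_identifiers(records: Iterable[dict]) -> list[dict]:
    """Drop direct identifiers from each record before it joins a training set."""
    return [
        {key: value for key, value in record.items() if key not in DIRECT_IDENTIFIERS}
        for record in records
    ]

training_records = strip_identifiers([
    {"name": "A. Person", "email": "a@example.com", "query": "renewal terms"},
])
print(training_records)  # [{'query': 'renewal terms'}]
```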
- Processing stage – making the AI better
In the processing stage, the AI system uses its trained deep learning models and complex algorithms to process the data as prompted and create new outputs.
Such is the complexity of the artificial neural networks that lie at the heart of many AI systems that it can be difficult, or even impossible, to explain how a system has processed data and arrived at a particular output. This is known as the “black box” issue.
One of the key risks arising from this processing concerns personal data and an organisation’s compliance with its data protection obligations, particularly where the black box nature of AI makes it difficult to understand how data is being processed. Those obligations include data minimisation, storage limitation and purpose limitation, as well as the requirement to process personal data with an appropriate level of security.
It is also necessary to consider the extent to which the AI system allows data subjects to exercise their rights to rectification, erasure, data portability and restriction of processing. Data subjects also have the right not to be subject to a decision based solely on automated processing, including profiling, which significantly affects them.
Some of the consequences of inadvertently breaching data protection laws are particularly damaging for an organisation:
- The data subject may submit a complaint to the ICO and issue a compensation claim;
- The ICO may impose a fine and issue an enforcement notice; and
- The organisation may suffer reputational harm from the adverse publicity around a public sanctioning or litigation.
- Output stage
The AI will then generate an output: an answer to the question posed, produced from the input prompts given to it. Risks associated with this stage include:
- Copyright – there are unanswered legal questions around the ownership of copyright in AI-generated content, and a ‘code of conduct’ has been promised by the UK government but is still awaited.
- There is also a risk of an organisation becoming liable for infringement if the output from the AI reproduces a substantial part of an existing copyright-protected work.
- Defamation – whether in response to prompts or otherwise, AI could generate defamatory content which is published online. Defamatory statements are statements that cause, or are likely to cause, serious harm to a person’s or organisation’s reputation, irrespective of whether they were made deliberately or inadvertently. Under the existing legislation, AI is not a legal person and so cannot be a defendant; instead, the courts will look to identify the publisher liable for the statements, which could be the organisation using the AI.
- Employment and discrimination – decisions taken using AI may be subject to discrimination law risks. If a recruitment decision is taken using AI that skews the outcome on the basis of a particular protected characteristic (e.g. age, race, sex), the organisation potentially faces a claim for unlawful discrimination, whether in the Employment Tribunal or in a civil court. Further, decisions taken for disciplinary purposes using AI could expose the employer to other litigation risks, such as unfair dismissal and breach of the implied term of trust and confidence.
Managing and mitigating the risks of AI
An initial step can be to conduct an AI audit, to understand, so far as possible, what AI is currently being used by employees and contractors (whether or not that use is in an official or authorised capacity).
Then, once the AI landscape in the organisation is mapped, the risks can be identified and assessed. Working with specialist advisors can help ensure that the necessary measures to manage and mitigate those risks are implemented.
An AI audit should include:
- Contractual arrangements – Including provisions to mitigate or exclude your liability in AI contracts with third parties, for example when licensing or buying AI from them.
- Data protection – Assessing practical measures such as internal governance, fully mapping controller and processor relationships, ensuring that the necessary consents to hold and process data are in place, and carrying out due diligence on third-party suppliers.
- Database rights and copyright – Ensuring any required consents or licences are in place before any third-party IP is used.
- Defamation – To mitigate the risk of defamation claims, program the AI, where feasible, in a way that minimises the creation of defamatory content. Having humans in the loop to review content before it is published, flagging anything potentially defamatory or otherwise problematic from a commercial, ethical or reputational perspective, can be a good risk-mitigation strategy (see the first sketch below this list).
- Employment – Risk management can take many forms, but it is particularly important to be transparent with your employees about the use of AI. A risk assessment and an AI audit that monitor the use of AI for employment purposes, and allow careful monitoring of outcomes across protected characteristics to identify bias and discrimination risks, are recommended (see the second sketch below this list).
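As a purely illustrative sketch of the human-in-the-loop approach mentioned under defamation above, the code below flags AI-generated drafts against a watch-list and refuses to publish anything without human sign-off. The watch-list terms, the Draft structure and the triage step are all hypothetical assumptions, not a legal standard.

```python
# Illustrative sketch only: AI-generated drafts are flagged against a
# hypothetical watch-list and cannot be published without human sign-off.
from dataclasses import dataclass, field

FLAG_TERMS = ["fraud", "criminal", "dishonest"]  # assumed watch-list, not exhaustive

@dataclass
class Draft:
    text: str
    flags: list[str] = field(default_factory=list)
    approved: bool = False  # set by a human reviewer, never by the system

def triage(draft: Draft) -> Draft:
    """Flag wording that warrants human scrutiny before publication."""
    draft.flags = [term for term in FLAG_TERMS if term in draft.text.lower()]
    return draft

def publish(draft: Draft) -> None:
    """Refuse to publish anything a human has not signed off."""
    if not draft.approved:
        raise PermissionError("human sign-off required before publication")
    print(draft.text)

draft = triage(Draft("Our competitor's accounts look dishonest."))
print(draft.flags)  # ['dishonest'] -- escalate to a human reviewer
```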
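Equally illustrative is a sketch of monitoring recruitment outcomes across a protected characteristic, as suggested under employment above. It compares shortlisting rates between two groups using the “four-fifths” heuristic; the data is invented, the heuristic originates in US practice rather than UK law, and a low ratio is a prompt for investigation, not proof of discrimination.

```python
# Illustrative sketch only, with invented data: compares shortlisting rates
# between groups using the "four-fifths" heuristic (a US screening convention,
# not a UK legal test). A low ratio warrants investigation, nothing more.
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group label, was the candidate shortlisted?) pairs."""
    totals: Counter = Counter()
    shortlisted: Counter = Counter()
    for group, selected in outcomes:
        totals[group] += 1
        shortlisted[group] += selected
    return {group: shortlisted[group] / totals[group] for group in totals}

rates = selection_rates([
    ("group A", True), ("group A", True), ("group A", False),
    ("group B", True), ("group B", False), ("group B", False),
])
ratio = min(rates.values()) / max(rates.values())
print(rates, f"ratio={ratio:.2f}")  # a ratio well below 0.8 warrants review
```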
AI can be hugely exciting and productive for your organisation, but when adopting it you need to be aware of the legal risks that might apply and consider how they can be managed and mitigated.
If you have concerns about AI litigation and disputes, please contact Will Charlesworth. For questions about AI contractual and commercial arrangements, please contact Jimmy Desai. If you have questions about AI and employment law issues, please contact Sungjin Park.
This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.