In the first of a two-part series, our technology partners James Tumbridge and Robert Peake explain what AI is and what the risks of using it are.
What is AI?
We have all heard of artificial intelligence (AI), and there is a lot of talk about its importance, but sadly it is not well understood. People debate its precise meaning, but many agree that it refers to the ability of a computer to perform tasks commonly associated with intelligence as humans understand it. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, summarise large data sets, predict outcomes or learn from past experience.
AI as a concept is not new. Key theories and developments date to the 1940s, and it has been a discipline of study since the late 1950s. The 1990s brought us machine learning and improvements in computers’ decision-making and prediction, and more recently we have been hearing about ‘deep learning’ and neural networks. What has really shifted the understanding, and the business excitement, is the ability of non-technical users to engage with AI via natural language. Natural language processing (NLP) allows programs to read, write and communicate in human languages. Advances in speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering have suddenly shown how wide the application of AI can be. This, coupled with generative AI driven by natural language, where, for example, you ask an AI to produce an image from a simple text prompt, has caused interest to explode.
What risks does AI pose?
There is a lot of debate about the risks that AI poses, but fundamentally the issue is this: how comfortable are we, as humans, with letting AI make decisions that affect our lives? Some decisions, like helping us find cheap rail tickets, might be fine, but decisions as to whether you get life insurance are more troubling. In addition, there are specific legal issues with some types of AI use: in some cases, AI might infringe the intellectual property rights of others; there are questions as to how the law deals with AI inventions; there are concerns about the loss of data control to AI and about personal data protection; and there may be unconscious biases in data sets which might breach equality legislation.
Are you using AI?
The 2024 Work Trend Index from Microsoft and LinkedIn, which surveyed 31,000 people across 31 countries, suggests that your staff might be using AI without your knowledge. According to the survey, 75–78% of staff are bringing their own AI tools into their work. More worrying still, 89% of respondents said they would work around cybersecurity settings to use AI to meet a business demand, an occurrence now so common that it has a name: ‘Shadow AI’. So, you may want to consider how safe your IT system actually is.
Do you understand your risks?
The best way to understand your risks is to ask yourself questions like these:
- Are you comfortable with data sharing?
- Is the AI system you use one that learns from your inputs?
- Have you considered what the AI is being told?
- Is there any risk of employees sharing confidential information?
- You should be particularly careful about sensitive information in your possession that belongs to an individual or a company you deal with. Is it safe to share any of it?
- Are you sharing personal data? If so, that may require you to think about data protection compliance.
- Do you know how the AI is trained?
- If it was trained on personal data sets, are they representative of the people you need to consider, and do you have equality or in-built bias concerns?
- What about generative AI (GenAI)? What copyright sources is it using?
- Use of an AI system could infringe copyright laws. Are you prepared to deal with infringement of the intellectual property rights of others?
- Have you insulated yourself from that risk via contractual terms or insurance?
What can you do to mitigate risks?
Consider what guidance and training you have offered to your staff. What do your policies say about the use of AI? Will you require risk assessments before use? Have you considered adopting an ‘Operating Procedure’ to provide a framework for your employees’ use of AI, especially GenAI? Also think about your third-party interactions. What do you know about AI use by contractors, developers, vendors, temporary staff, consultants or other third parties?
Consider how you might have a human in the loop overseeing an AI system. Can all information generated by GenAI be reviewed and edited for accuracy prior to use?
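By way of illustration only, a human-in-the-loop gate can be as simple as refusing to release anything a model produces until a named person has approved it. The short Python sketch below shows the idea; all the names in it (generate_draft, human_review, release) are hypothetical placeholders, not any particular vendor’s API.

```python
# A purely hypothetical sketch of a human-in-the-loop gate: nothing the
# model produces is released until a named human reviewer has approved it.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Draft:
    prompt: str
    text: str
    approved_by: Optional[str] = None  # stays None until a human signs off


def generate_draft(prompt: str) -> Draft:
    """Placeholder for a call to a GenAI service."""
    return Draft(prompt=prompt, text=f"[model output for: {prompt}]")


def human_review(draft: Draft, reviewer: str, approve: bool,
                 edited_text: Optional[str] = None) -> Draft:
    """Record a human decision; any edits replace the model's text."""
    if edited_text is not None:
        draft.text = edited_text
    if approve:
        draft.approved_by = reviewer
    return draft


def release(draft: Draft) -> str:
    """Refuse to publish anything a human has not approved."""
    if draft.approved_by is None:
        raise PermissionError("Draft has not been approved by a human reviewer")
    return draft.text


# Usage: the draft cannot be released until a reviewer approves it.
draft = generate_draft("Summarise the client meeting notes")
draft = human_review(draft, reviewer="j.smith", approve=True,
                     edited_text="Summary checked and corrected by a human.")
print(release(draft))
```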
What are your organisation’s IT security settings? Do you allow or rely on APIs (Application Programming Interfaces), and do you know how they interact and what they share? API and plugin tools that open your IT systems and software to AI should be carefully considered and monitored to ensure you understand, and can mitigate, the risks of their use.
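One possible technical control, sketched below purely as an illustration, is to route staff calls to external AI services through a wrapper that enforces an allow-list of approved hosts and logs what leaves the organisation, so that ‘Shadow AI’ traffic is visible rather than silent. The host names here are invented examples, not real services.

```python
# A hypothetical sketch of an egress control for AI services: calls pass
# through a wrapper that enforces an allow-list and logs outbound data.

import json
import logging
from urllib.parse import urlparse
from urllib.request import Request, urlopen

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-egress")

# Only AI services your organisation has assessed and approved.
APPROVED_HOSTS = {"api.approved-ai.example.com"}


def call_ai_api(url: str, payload: dict) -> bytes:
    """Send a payload to an AI service, but only if its host is approved."""
    host = urlparse(url).hostname or ""
    if host not in APPROVED_HOSTS:
        log.warning("Blocked call to unapproved AI endpoint: %s", host)
        raise PermissionError(f"{host} is not an approved AI service")
    body = json.dumps(payload).encode()
    # Record what is being shared (in practice, redact before logging).
    log.info("Sending %d bytes to %s", len(body), host)
    request = Request(url, data=body,
                      headers={"Content-Type": "application/json"})
    with urlopen(request) as response:
        return response.read()


# Example: a call to an endpoint that has not been assessed is blocked.
try:
    call_ai_api("https://api.unknown-ai.example.net/v1/chat", {"prompt": "hi"})
except PermissionError as exc:
    print(exc)
```

The point is not this particular code, but the principle: approved routes are explicit, and anything else is blocked and logged rather than passing unnoticed.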
If you have questions about using AI, please contact James Tumbridge and Robert Peake.
To read more about the regulation of AI, click here.
This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.