10 Dec 2021
The pandemic has driven an upsurge in the use of technology at work. Virtual meetings and audio-visual tools have become second nature, and such technologies are reportedly being used increasingly in recruitment, with many employers inclined to digitalise their recruitment and selection processes. However, the use of artificial intelligence (AI) and other automated decision-making (ADM) technology is not without risk, and employers should tread carefully when purchasing and implementing such systems.
In this article, Audrey Williams examines the key risks and safeguards which organisations must consider and outlines the likely developments that may arise in the future.
Automated recruitment tools range from the automatic processing and filtering of CVs and applications (particularly useful where high volumes are received) to virtual interviews and augmented- or virtual-reality testing and assessments. As more is done via virtual meetings, it might be assumed that candidates will be comfortable and confident with automated or asynchronous video interviews, which appear to be the next key development. The advantages here include freeing up resources and interview panels, where the technology is used to “interview” a long list of candidates and select the shortlist.
With all such technology, employers must take steps to minimise the risk of bias and discrimination that may arise, as well as addressing data protection obligations.
To a certain extent, bias can be addressed when selecting the technology provider, by ensuring the provider’s data and algorithms have been stress-tested for bias and discrimination. An employer will want evidence from the provider demonstrating that the risk of unfair bias, for example against candidates because of gender, race or age, has been identified and minimised.
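One widely used rule of thumb when stress-testing screening outcomes for bias is the “four-fifths rule” from US selection-procedure guidance: if any group’s selection rate falls below 80% of the highest group’s rate, the outcome warrants review. It is not a UK legal test, but it illustrates the kind of evidence an employer might ask a provider for. A minimal sketch in Python, using hypothetical group names and numbers:

```python
# Hypothetical screening outcomes: group -> (candidates screened in, total candidates).
# The groups and figures are illustrative only, not real data.
selection = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

# Selection rate per group.
rates = {g: passed / total for g, (passed, total) in selection.items()}
highest = max(rates.values())

# Flag any group whose impact ratio (rate / highest rate) falls below 0.8.
for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

In this hypothetical example, group_b’s impact ratio is 0.63, below the 0.8 threshold, so the outcome would be flagged for closer scrutiny. A check like this is only a starting point; it does not establish or rule out discrimination under the Equality Act.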
Beyond general discrimination claims, particular concerns arise with disability discrimination. Given the duty under the Equality Act to make reasonable adjustments to remove or reduce disadvantage, mechanisms must exist for disabled candidates to flag barriers they might face in an automated video interview or other automated process, enabling discussion of any exemptions or adjustments needed, for example for candidates with visual or hearing impairments, candidates on the autism spectrum, or those with a facial disfigurement or facial paralysis following a stroke or Bell’s palsy. By design, such systems read or assess a candidate’s facial expression and responses, level of eye contact, tone of voice and language, and the disadvantages that can arise here must be addressed. Language and tone of voice may also be more difficult to assess fairly for candidates whose first language is not English, raising the risk of racial bias and discrimination, and for those with speech impediments, where disability risks again arise.
Finally, as with all such technology, employers must ensure that data protection obligations are met: that processing is lawful, that privacy rights are protected and, increasingly, that the results can be explained and justified.
This is an area where equality impact assessments and data protection impact assessments should go hand in hand in the sourcing, adoption, and usage of these technologies, as well as monitoring outcomes.
Keeping up to date is particularly important now that several regulators, including the Information Commissioner’s Office in the UK, the European Commission and the Financial Conduct Authority, are becoming increasingly vocal about the use of AI and ADM. New legislation, guidance and best-practice recommendations are likely in the near future.
In November 2021, a UK All Party Parliamentary Group issued a report calling for action, including new legislation. Its recommendations include an Algorithms Act to ensure accountability, improved digital protection and statutory guidance.
Those operating in a global environment, and who may be using this technology across the jurisdictions in which they operate, should keep an eye on the European Commission’s proposal for new legislation (draft regulations are currently under consideration). Proposals are also emerging from the United States and individual European countries. One recent example is Germany, which has introduced a new obligation to consult the works council (a consultative body representing workers) when introducing AI in the workplace.
At a practical level, the Trades Union Congress (TUC) has issued a manifesto containing recommendations, and is also seeking changes, including the introduction of new legal protections to safeguard employees in relation to the use of AI and ADM in the workplace. These include a focus on job applicants.
The use of AI in recruitment is likely to become increasingly popular. Before integrating it into existing recruitment models, employers should take full precautions to ensure that the technology does not embed discriminatory practices. As this area is expanding rapidly, staying informed about the implementation and regulation of the technology is vital.
If you have any questions on the use of AI/ADM in recruitment, please contact Audrey Williams.