
Keynote

What does the ICO’s warning against automated decision-making mean for AI, recruitment, and employers?

15 Apr 2026

6 min read


The Information Commissioner’s Office (ICO) has published its views and report on AI and automated decision-making in recruitment. The report flows from the ICO’s AI and biometrics strategy, published in June 2025.

The ICO spoke with 30 employers between March 2025 and January 2026 to produce its report. Its key finding is that employers do not appreciate that they are making automated decisions, or understand the safeguards that should be in place where there is minimal human involvement. It said:

“Our key finding is that many employers engaging in automated recruitment are likely relying on solely automated decisions as part of this process. This means they’re using solely automated systems without meaningful human involvement, and the decisions these systems take have legal or similarly significant effects on people. This places these decisions within the scope of the provisions on solely automated decision-making in the UK General Data Protection Regulation (UK GDPR). For this reason, a greater range of safeguards will need to apply than our evidence suggests are currently in place.”

Understanding automated decision-making (ADM), and what the law expects, is important. Organisations should have safeguards in place, such as transparency, bias monitoring, accountability, and appropriate consideration of data subject rights. Employers that do not have adequate safeguards can be subject to enforcement action and fines.

Decisions taken without meaningful human involvement are the key point: a decision is ADM unless there is active and genuine human involvement. Essentially, the employer must ensure that a human is able to influence the decision before it takes effect, and that the human has the authority to change the outcome.

The impact of the Data (Use and Access) Act

The Data (Use and Access) Act 2025 (DUAA) gives greater flexibility for using automated tools. It amends Article 22 of the UK GDPR, which dealt with automated decision-making, and creates a right to challenge automated decisions, supported by safeguards.

The GDPR gave data subjects meaningful rights, but the level of understanding to be provided around automated decisions was constantly debated. In 2025, in C-203/22 Dun & Bradstreet Austria, the Court of Justice of the European Union confirmed that individuals are entitled to a genuine explanation of the logic and results of automated decisions. Yet, just as this ‘right to explanation’ was clarified in Europe, the UK changed its law: the DUAA replaced Article 22 of the UK GDPR with a more permissive framework (new Articles 22A–D).

This change gives employers more scope to use automation, provided they put proper protections in place. Where special category data is involved, however, the stricter rules still apply.

When it comes to ADM, employers need to understand which of two options they are taking:

  1. Acknowledge the automated decision-making. Employers can accept that the process lacks meaningful human involvement. They must then recognise that they are carrying out ADM and apply the required safeguards.
  2. Ensure meaningful human involvement. Employers can redesign their processes so that a human plays a genuine role in each decision for each candidate.

ADM requirements

Employers should identify a lawful basis for processing the personal data. The DUAA removes the previous limitation to consent or contractual necessity in recruitment, provided no special category data is involved. Employers can now rely on legitimate interests, but they need to be certain they can explain and justify that reliance. Perhaps the most important requirement is transparency: employers must explain how the automated decision works and its likely effects. The processing must not be hidden; it must be clear to candidates.

As for safeguards, candidates must know about the automated process. They must have the chance to make representations, request human review, and challenge the decision.

Employers should be able to demonstrate fairness and avoid bias. This includes questioning those in the supply chain about their systems and whether those systems have been bias-tested. Employers should consider trials and reviews, and monitor outcomes over time. Finally, employers must complete data protection impact assessments (DPIAs). The ICO found that many existing DPIAs lack the detail needed to meet legal requirements.

If you have questions or concerns about automated decision-making, please contact James Tumbridge.

For further information please contact:

James Tumbridge

Partner

020 3319 3700

james.tumbridge@keystonelaw.co.uk
