Keynote

What are the regulatory risks of using AI in healthcare?

17 Nov 2025

In their second Keynote, Data Protection partner Dan Tozer and Healthcare Regulatory senior associate Natasha Ricioppo explore the regulatory landscape for using AI in healthcare.

The regulatory framework governing AI in UK healthcare is complex and continues to evolve as technology advances. While this article focuses on the regulation of healthcare services, it is important to note that other bodies, such as the Medicines and Healthcare products Regulatory Agency (‘MHRA’) (responsible for regulating medical devices), are also highly relevant in this area.

In England, under the Health and Social Care Act 2008 (Regulated Activities) Regulations 2014 (as amended), activities such as triage, diagnosis, and direct patient care are defined as ‘regulated activities’ and require registration with the Care Quality Commission (‘CQC’). Regulatory approaches are not uniform across the UK, and each of the devolved nations is overseen by an equivalent healthcare regulatory body. For example, Healthcare Improvement Scotland (‘HIS’) oversees independent healthcare services in Scotland and adopts a slightly different stance on the requirement for registration, such as in relation to remote health services.

The specific application of AI tools in healthcare introduces further regulatory uncertainty. Regulation has yet to fully catch up with technological innovation, creating a grey area for many providers. Whether the use of an AI tool by a provider constitutes a regulated activity is often nuanced and fact-specific. For instance, using an AI voice assistant or app to provide administrative healthcare support or medication reminders to patients is unlikely to fall within the scope of current healthcare regulation. However, if the same technology is used to provide a clinical opinion or to perform triage or therapeutic tasks, the activity is more likely to fall within healthcare regulation. Providers may find they are required either to register with a healthcare regulator or to amend an existing registration to properly cover all regulated activities undertaken. Such distinctions are not always clear, and seeking specialist advice is advisable where there is uncertainty before launching or scaling a service that involves AI.

What are the consequences for failing to comply?

The consequences of failing to comply with healthcare regulatory requirements can be significant. Providers operating without the correct registration may face increased regulatory scrutiny, the imposition of sanctions such as restrictions on practice, removal of registration, or, in serious cases, prosecution, which can result in unlimited fines and/or up to 12 months’ imprisonment. The CQC has highlighted that it is actively monitoring new models of care, including those using AI, and is prepared to take action where providers do not meet regulatory standards. The approach providers take in engaging with regulatory requirements, and their willingness to seek clarification or advice, can significantly influence how regulatory issues are resolved. Proactive engagement and early consultation with specialist advisors not only facilitate constructive relationships with regulators but also demonstrate a commitment to best practice.

Meeting healthcare professional regulator expectations

The regulation of individual healthcare professionals in the UK is also developing to reflect the growing use of AI and health technology. Key healthcare professional regulators, including the General Medical Council (‘GMC’), General Pharmaceutical Council (‘GPhC’), and Health and Care Professions Council (‘HCPC’), have updated their guidance to ensure registrants understand their responsibilities when using AI and technology in practice.

GMC guidance makes clear that doctors remain accountable for all clinical decisions, including those informed by AI or decision-support technology. The GPhC reminds pharmacists that safe dispensing is always their responsibility, regardless of any automated or AI systems involved. Likewise, the HCPC requires allied health professionals to use professional judgement and consider both risks and benefits when using new technologies.

A consistent message from healthcare regulators is that AI can support clinical decision-making but it should not replace professional judgement. Healthcare regulators have the authority to investigate where concerns arise in this area and to issue formal warnings, impose conditions on practice, or, in serious cases, limit or remove a professional’s right to practise where patient safety is compromised, whether or not technology is involved. The manner in which healthcare professionals interact with their regulator, and the steps taken to document professional judgement and oversight, can positively affect the outcome of any regulatory scrutiny. Early engagement with expert advisors, particularly when uncertainties arise, can help demonstrate diligence and maintain trust with both regulators and patients.

Looking ahead: staying safe when embedding AI in healthcare

AI offers transformative potential for healthcare, and it is clear that momentum is growing. Healthcare professionals and providers will be acutely aware that embracing AI has become a necessity, but maintaining legal and regulatory compliance is crucial. Risks to patients, professional reputation, and business interests can be managed by taking an AI-focused approach to:

  • Regularly reviewing data protection policies and system security;
  • Keeping abreast of guidance from relevant regulators;
  • Revisiting CQC or equivalent registration to ensure it covers your AI-enabled services;
  • Checking that your contracts with AI suppliers contain reasonable intellectual property, data, and liability clauses that do not harm your interests;
  • Checking that any medical/professional indemnity and legal expenses insurance you hold covers claims or investigations arising from the use of AI in your work.

Healthcare remains one of the most heavily regulated sectors in the UK. As AI technologies and regulatory expectations continue to evolve, taking a proactive approach to compliance will be essential for healthcare providers committed to safe, effective, and innovative care.

Getting legal assistance proactively can help to prevent problems arising. In the event that regulatory questions or investigations do arise, promptly seeking specialist legal advice, from lawyers with a deep understanding of both the healthcare and AI sectors, will be instrumental in achieving the best possible outcome.

If you have questions or concerns about AI in health, please contact Dan Tozer or Natasha Ricioppo. Our specialist lawyers are available to provide tailored support and guidance regarding AI in healthcare and all aspects of your regulatory position.

For further information please contact:

Daniel Tozer

Partner

020 3319 3700

daniel.tozer@keystonelaw.co.uk

Natasha Ricioppo

Senior Associate

020 3319 3700

natasha.ricioppo@keystonelaw.co.uk