English Court of Appeal confirms high bar for representative data claims

Another claim seeking damages for misuse of personal data and ‘loss of control’ of personal data has failed to gain permission to proceed. The representative claimant, Mr Prismall, sought to bring the claim on behalf of all patients of the Royal Free London NHS Foundation Trust whose personal data was shared with Google’s DeepMind under a data sharing agreement with the hospital between 2015 and 2017. The UK Information Commissioner’s Office previously investigated the arrangements and found them to be in breach of the Data Protection Act 1998.

An earlier attempt to have a class action style proceeding approved by the English courts, in Lloyd v Google, also failed to gain permission to proceed. The UK Supreme Court confirmed in its judgment in that case that mere loss of control of personal data, without more, does not entitle an individual to damages; there must be evidence of material damage or mental distress.

A key challenge for claims of this sort in the UK is that all claimants in the proposed ‘class’ must have the same interest.  In its judgment in the Prismall case, the Court of Appeal followed the leading line of English cases on the subject, including Lloyd, noting the high bar faced by those seeking to mount cases of a similar nature:

We consider that a representative class claim for misuse of private information is always going to be very difficult to bring. This is because relevant circumstances will affect whether there is a reasonable expectation of privacy for any particular claimant, which will itself affect whether all of the represented class have “the same interest”.

In relation to damages for ‘loss of control’ of personal data, the great difficulty for representative claims lies in showing that every proposed claimant would be likely to recover more than a de minimis (i.e. more than nominal) amount. The English law approach stands in contrast to that of other jurisdictions such as the US, where in some cases statutory damages are available for a breach of privacy without the need to establish that any actual harm has been suffered.

European General Court awards damages for ‘uncertainty’ about personal data use

An individual residing in Germany, Mr Bindl, sued the European Commission (EC) for impermissibly displaying a Facebook link on a webpage during the period when there was no data transfer scheme in place between the EU and the US: the decision in Schrems II had invalidated the EU-US Privacy Shield, and the current Data Privacy Framework had yet to be adopted.

In 2022, the claimant visited the website for the “Conference on the Future of Europe”, administered by the EC. The webpage offered the option to register for the event using an existing Facebook account; being a Facebook user, he decided to register using that link. When he did so, his IP address would have been transmitted to Meta in the US.

The claimant pursued a number of claims, but succeeded only in his claim for €400 in relation to the transfer of his IP address to the US in the absence of appropriate safeguards against his data being used by, for example, US security services. The court held that Mr Bindl ‘found himself in a position of uncertainty’ as regards the processing of his personal data – at least his IP address – which would have been transferred to Meta in the US.

The court does not appear to have given particular consideration to the fact that the claimant was himself a Facebook user with an existing account, which would seem to undermine the finding that any ‘uncertainty’ arose from his personal data being transferred to Meta in the US. The decision to award damages of €400 without the need to show any actual harm has raised concern that it will usher in a wave of claims, likely by way of class action style proceedings, for mere technical non-compliance with data protection law, without any need to show an adverse effect on the claimants.

European Commission improperly used political opinions for targeted advertising

The European Data Protection Supervisor (EDPS) upheld a complaint against the European Commission (EC) for using political opinions and other special categories of personal data to target advertisements on X (formerly Twitter), without a legal basis under the EU GDPR.

The Dutch complainant was represented by NOYB, Max Schrems’ data rights organisation. His complaint was that the EC lacked a proper legal basis for targeting advertisements on X seeking to build support for a regulatory proposal aimed at combating child sexual abuse. The ad in issue was in Dutch and was accompanied by a video displaying text suggesting that there was considerable public support for the proposed measure.

The ad was targeted at X users using keywords intended to include certain groups of recipients and to exclude others, the result being the creation of ‘look-alike’ audiences for the targeting of the ads. The use of look-alike audiences is a common approach to targeting ads on social media; it is used by businesses and by political campaigners across the spectrum. In this case, the EC had sought to exclude users with a range of political and religious tendencies by reference to keywords including ‘Sinn Féin’, ‘English Defence League’, ‘Viktor Orban’, ‘Christianity’ and ‘Islam’.

The EC argued that it had not intended to process any special category data of the complainant or of others who were shown the ads, or were excluded from seeing them. In particular, the EC said that it did not hold any such data on the individuals in question. The EDPS rejected this argument, confirming that special category personal data can be inferred and that the data controller need not itself have access to the data; it was enough that the EC determined the purposes and means of the processing of personal data, making it the controller.

In the circumstances, the EC had additionally failed to establish a legal basis for its ad campaign in the first place, even as regards non-special category data; a complete success, then, for the complainant and for NOYB. The facts serve as a useful reminder to those targeting campaigns online of the need for a proper legal basis when using political and religious beliefs to target users, even where the campaigner does not itself hold the data.

Italian regulator fines OpenAI €15M for using personal data to train AI

There has been much debate of late about the risks of using personal data scraped from the Internet to train AI models. Italy’s data regulator, the Garante, made headlines when it ordered that OpenAI’s ChatGPT be blocked in Italy; that ban lasted only a short time before being lifted, but the Garante continued to investigate.

The Garante has now concluded that the training of ChatGPT on the personal data of Italian residents lacked a proper legal basis under the GDPR, and has issued a €15 million fine against OpenAI.

Typically, the only legal basis available for training an AI model is ‘legitimate interests’, which requires that the processing is ‘necessary’ to achieve the interests pursued and that the interests of individuals – such as their interest in maintaining their privacy – do not outweigh those legitimate interests. Given the prevalence of generative AI models in the past several years, we can expect to see further challenges, both regulatory and by way of legal complaint, to the use of personal data for training models.

For more information, contact James Tumbridge and Robert Peake.