Less than eight months after the new European General Data Protection Regulation (GDPR) came into force, European regulators have levied their fourth fine. The decision this week by the French regulator, the Commission Nationale de l'Informatique et des Libertés (CNIL), to fine Google €50 million for breaching the GDPR is the largest so far, and the first to be levied against a US tech giant.
The complaints against Google were filed by two privacy rights groups, None of Your Business (noyb) and La Quadrature du Net (LQDN), which accused Google, as well as Facebook, Instagram and WhatsApp, of not having a valid legal basis to process the personal data of users of their services, “particularly for ads personalisation purposes”.
It is clear that the EU regulators have their eye on the powerful Internet companies that are gathering and processing large amounts of personal information to monetise their services.
Google said in response to the ruling: “People expect high standards of transparency and control from us. We’re deeply committed to meeting those expectations and the consent requirements of the GDPR. We’re studying the decision to determine our next steps.”
And all eyes are on Google’s next steps and those of other tech giants who are investing heavily in figuring out how to comply with the GDPR (and other privacy laws around the world) while still being able to gather and use large quantities of personal information to make services profitable. The standards they are forced to set in response to user complaints and regulatory action will inevitably be the benchmark for all.
It remains to be seen how far these tech giants will push the boundaries given the level of fines at stake. The CNIL could have fined Google nearly €4 billion had it levied the maximum fine possible under the GDPR of 4% of gross annual worldwide turnover. There is another deterrent: when the regulatory spotlight falls on the compliance failures of these powerhouses, they lose the trust of their users.
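To put those numbers in perspective, a rough back-of-envelope calculation helps: at the 4% cap, a maximum fine of nearly €4 billion implies gross annual worldwide turnover in the region of €100 billion. The short sketch below shows the arithmetic; the turnover figure is an assumption used purely for illustration, not a reported figure.

```python
# Illustrative only: the turnover figure is an assumption, not a reported figure.
annual_worldwide_turnover_eur = 100e9   # assumed ~EUR 100 billion gross annual worldwide turnover
gdpr_max_fine_rate = 0.04               # the GDPR's higher-tier cap: 4% of worldwide turnover
actual_fine_eur = 50e6                  # the EUR 50 million fine actually levied by the CNIL

max_fine_eur = annual_worldwide_turnover_eur * gdpr_max_fine_rate
print(f"Theoretical maximum fine: EUR {max_fine_eur / 1e9:.1f} billion")                 # ~EUR 4.0 billion
print(f"Actual fine as a share of that maximum: {actual_fine_eur / max_fine_eur:.2%}")   # 1.25%
```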
There are things Google could have done better to gain user trust and to obtain the consent it needs to use its users’ data for ad personalisation.
First, the CNIL found that Google failed to disclose fully to users how their personal information is collected and what happens to it: essential information was too difficult to find because it was split across multiple documents, help pages and settings screens. Google also made it difficult for users to exercise their right to opt out of the processing of their personal information for personalised ads.
Second, the CNIL said that Google did not properly obtain users’ consent for the purpose of showing them personalised ads: it did not meet the GDPR’s requirements that such consent be “specific” and “unambiguous”, since users were not asked specifically to opt in to ad targeting, but were simply asked to agree to the whole of Google’s terms and privacy policy – a “take it or leave it” approach to the entire service.
Google and others such as Facebook, Instagram and WhatsApp fund their services (and make profits from them) not by charging users but by gathering large amounts of personal information each time those services are used. By the time users realise that the personal information gathered in this way is being used to offer them other goods and services, they are hooked on the free service and don’t want to give it up; Google’s answer, in effect, is that anyone who doesn’t like it can stop using its services. The complainants’ argument was that consent obtained in this way should be considered invalid given the “powerful position these companies have”.
So, what should Google have done to lawfully gather and use the information it needs to continue to be profitable while at the same time ensuring users do not feel exploited?
The blanket consent approach did not work. The message from the regulator is clear: don’t bury the information in multiple layers of documents and make people go searching to find out what you are doing with their personal information. Make sure they understand what they are consenting to and how they can withdraw that consent if they change their minds. This requires a separate consent button for each purpose for which personal information is used, with information about what is being consented to provided in a clear and easily accessible form, as the sketch below illustrates.
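To make the contrast with a single blanket “I agree” concrete, a granular approach might record a separate, freely given choice for each clearly described processing purpose, together with when it was given or withdrawn. The sketch below is purely illustrative: the purpose names, fields and structure are assumptions, not a description of Google’s actual systems.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PurposeConsent:
    """One consent decision for one clearly described processing purpose."""
    purpose: str                        # e.g. "ads_personalisation" (illustrative name)
    description: str                    # plain-language explanation shown to the user
    granted: bool = False               # unticked by default: no pre-selected boxes
    decided_at: Optional[datetime] = None

    def grant(self) -> None:
        self.granted = True
        self.decided_at = datetime.now(timezone.utc)

    def withdraw(self) -> None:
        # Withdrawing consent should be as easy as giving it.
        self.granted = False
        self.decided_at = datetime.now(timezone.utc)

# Each purpose is presented, and consented to, separately rather than being
# bundled into a single acceptance of the terms and privacy policy.
consents = [
    PurposeConsent("ads_personalisation", "Use your activity to personalise the ads you see"),
    PurposeConsent("product_improvement", "Use your activity to improve our services"),
]
consents[0].grant()       # the user opts in to personalised ads...
consents[1].withdraw()    # ...and declines (or later withdraws) the other purpose
```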
The problem with this approach is that service providers fear (perhaps justifiably so) that many users will simply not bother to tick the boxes.
So how do these Internet companies ensure they get these all-important consents from their users? Maybe by gaining trust. If users trust the service providers, they are more likely to consent to the collection and use of their personal information for ad personalisation and other purposes, particularly if the trade-off is continued use of the services for free and they fully understand that trade-off.
So how can service providers like Google gain user trust? Being transparent about what data is collected, how it is collected and used, and who it is shared with would be a good start. But all of that is worthless if the information is hard for users to find or understand. Users must also feel empowered to choose when and whether to share their personal information and, above all, be able to change their minds without finding it difficult to retract previously given consent.
The key would seem to be in providing information in clear and plain language; highlighting key facts; and providing an easy way to opt out or withdraw consent. If users are able to quickly and easily get information about the implications of sharing their personal information and if they are able to see how they can control how that information is then used, they are far more likely to click “I agree” than try to skip over the consent button.
The CNIL justified the level of the fine by saying that Google’s violations were aggravated by the fact that “the economic model of the company is partly based on ads personalisation”, and that it was therefore “its utmost responsibility to comply” with the GDPR.
In plain terms: users need to be able to understand, quickly and easily, that Google uses their data for targeted advertising to generate revenue, which is what allows it to keep providing its services to end users for free while making a profit. Some people won’t like that, but if they are given information that enables them to understand that this is how it works, they can make an informed decision about whether or not they agree. Internet companies need to find a way to get that message across in a clear and transparent way, and they need to get used to giving people more control over the information gathered about them. If they don’t, the complaints will come thick and fast, and the ensuing regulatory fines will boost the coffers of the supervisory authorities and eat into the profits of the service providers. Users may be the ultimate losers if the Internet companies start to charge for services currently provided for free.
This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.